From YouTube: 2021-05-11 Delivery team weekly rollbacks demo
Description
Testing a rollback on Production
C
But this is your account, because if you have to drop but then have to start something on your account, it will not work, because it's just one meeting for the recording. Do you mean...? No, it just is: you can have only one meeting per account running at the same time. So for the next thing you have to do...
F
That's why I don't use notifications, you know. If I could get rid of notifications I would, but you know, GitLab kind of enforces them, Linux and various applications enforce them, etc. So it's kind of everywhere.
F
Are we at the change steps now?
C
Yeah, I'll just give you a quick recap because you joined; this is just the beginning of your morning. We entirely skipped the seven UTC package. I mean, we didn't promote it, because it was too close to the time of this, so we decided to just skip it. So the three a.m. UTC package was rolled out to production and its post-deployment job is cancelled, the seven UTC one rolled out up to canary and is just dead there, and the eleven UTC one is ready and rolling out to staging right now, and I cancelled canary and so on.
F
Gotcha, all right. So I'll begin: make sure there are no ongoing deployments. That's easy; I just check the announcements page and look for anything that's currently running. Only staging is running, canary is finished, and the last production job looks like it finished as well. So let's see.
A
Do we have a step early on about notifying the engineer on call, or is that...
A
I'm just going to post our Zoom link in the lounge channel, so if people want to join us they can, but keep going on that, yeah. It's one of the steps, also that one. No, but we should, yeah; people mentioned they'd like to join, right? So have we done it already?
D
Yeah, Rehab was interested in joining, maybe to also look into this, because she was doing the hot patch fire drill today as well, so we can look into the new-world idea with that one. So maybe she's training.
F
All right, okay. So I am on the step to find the packages to roll back to; I'm going to follow our documentation.
F
It currently tells me the rollback command is to run the... We want the 2021-05-11 package.
C
Yes, I'm boosting your message. So Marin, you posted the message in the thread, but I'm going to send it again, also in what's-happening-at-gitlab, because I don't think people would see it just in the thread.
C
So I'm running the deployment blockers ChatOps command in the f_upcoming_release channel, just to show the result. The result will be that the production environment is locked for deployment because of the ongoing deployment, which is the post-deployment job. Yeah, the post-deployment migration job that we cancelled, exactly.
C
So this is the same problem. Yeah, it's the same problem, but it also happens one step ahead, and in this...
G
Please be careful with revealing values in a recorded call.
C
Okay, oh right, sorry, this is a string. It's supposed to be a string, because it's the reason why you're skipping it. We also use ignore-production-checks during regular roll-forward deployments, and there we provide something like "SRE on call allowed this deployment" or "the blocker is not relevant to the ongoing deployment".
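The check itself isn't shown on screen; as a rough sketch of the idea (hypothetical names, not the actual release-tools code), a promotion check that takes a free-text override reason might look like this:

```python
# Illustrative sketch, not GitLab's actual release-tools code: a promotion
# check that blocks when another deployment is in flight, unless the
# operator supplies a free-text reason that is kept for the audit trail.
def production_checks_pass(ongoing_deployments: list[str],
                           override_reason: str | None = None) -> bool:
    if not ongoing_deployments:
        return True
    if override_reason:
        # The override is a string, not a boolean, so the log records *why*
        # the lock was bypassed.
        print(f"production locked by {ongoing_deployments}; "
              f"overridden: {override_reason}")
        return True
    print(f"production locked by {ongoing_deployments}")
    return False

production_checks_pass(["post-deployment migration job"])  # blocked
production_checks_pass(["post-deployment migration job"],
                       "SRE on call allowed this deployment")  # overridden
```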
F
Yep. Does the rollback command also drop a comment in the release issue?
C
It's a promotion check, same as for a regular deployment, in that regard.
C
Yeah, sure. We cancelled the post-deploy migration so as not to have to deal with a package that we know has no post-deploy migration. So the simulation lets us avoid dealing with the timing of creating a fake package, with the extra complexity that we are running this with an ongoing deployment that is cancelled. So from the Chef point of view, we are in the middle of a deployment.
G
The reason why I'm asking is that this is something we would probably want to address, given that at the moment when we need to do a rollback, there is a chance that there has been an ongoing deployment somewhere, and so on. So we'll either need to write down directions on what to do here or improve the situation. Yeah.
G
Right, but you can imagine a situation where, you know, a production deployment failed mid-deployment for whatever reason and we're in an inconsistent state. We want to roll back because something has happened, which means that our previous deployment actually failed, which means that our rollback will run into this same situation. So it's not...
C
Yeah, I think the one that is linked here... there's just a dropdown where you move from service to service; there's not a big overview of all of these, yeah.
I
I see, I see. So it's kind of drained for an extended period. Yep, yeah, I mean, we are...
I
To watch it... I thought the API was a little bit worse off than web for capacity right now, you know. We might want to take a look to see if we should expand those fleets.
C
A little bit. So what I'm thinking now is this: we do the fleet in batches, so the number of machines that are out of rotation is constant over the deployment. This is correct, right? Because we remove...
C
...out of the complete fleet, we remove ten percent, five percent, whatever it is, but it's always the same amount. So if we can sustain it during the first wave, we can sustain it up until the end of the deployment, because...
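As a back-of-the-envelope sketch of that claim (fleet size, batch percentage, and demand here are made-up numbers, not the real fleet):

```python
# Illustrative sketch: with a fixed batch size, the number of in-service
# machines, and therefore the serving capacity, is the same in every wave
# of the deploy. All numbers are hypothetical.
fleet_size = 40         # machines in the web fleet (made up)
batch_pct = 10          # percent taken out of rotation per wave

batch_size = max(1, fleet_size * batch_pct // 100)
in_rotation = fleet_size - batch_size
print(f"{batch_size} out per wave, {in_rotation} serving in every wave")

# The real question is whether peak traffic fits on the reduced fleet:
peak_demand = 38        # machines' worth of peak traffic (made up)
print("sustainable" if peak_demand <= in_rotation else "saturated")
```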
G
Considering we're seeing this web drop at the moment, and I just...
C
Yeah, point taken. We have to consider this, because in case of a real incident we are not able to sustain our rollback with our traffic.
G
And when we drain, we put those...
G
I'm really curious; I can't wait to see how this is going to change when these two services run in Kubernetes.
C
In terms of timing, I think that in Kubernetes right now we are something like four times faster than what we have with VMs, because we are already on zone C, then we have zone D, and then we are done. How long has it been? Twenty minutes since we entered the fleet upgrade, so it's impressive compared to, I mean, API, which will take an hour, even more than an hour.
G
So one way we can think about this lack of capacity when we drain canary is to not drain canary: figure out a way to deploy both canary and production, or rather to roll back both canary and production in one go. When we start, is it one...
F
Me too. I think doing that would, one, eliminate the fact that we need to solve that problem at all, and two, I kind of want canary to be a representation of what we can enable after we get our next deploy rolled out. So theoretically, when we come back from a fix, canary could be enabled after the fix has been deployed, and then we simply re-enable canary and send traffic to it as validation.
D
Fiddling with HAProxy roles in Chef and stuff, yeah. So that's the nice thing about Kubernetes, right: you can easily just add containers instead of setting up a full machine, which makes it slow. Kubernetes can also scale up first and then, after that, take away some pods, the old pods, so we don't go down by some number of machines. Doing this in the VM fleet would just be too slow, or we would need to start way before we do the deployment, so having that reserve would be better.
F
We're going to see it bounce around for quite a while, since we're taking servers out of rotation during the deploy, okay. So it's going to bounce until the deploy completes, essentially. I'm a little worried about the API being so high in general, though; if we're at capacity for the API, that's going to be kind of risky.
F
Job, it's not user! If I look at the diff: job, web, websockets, git and gitlab-shell, so they were all...
F
It's close to two. There's a bug in Ansible where it's not deploying with the serialization, or not calculating the serialization properly; I opened an issue for it. I just have a...
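The recording doesn't show the issue itself; as an illustration of the kind of batch-size arithmetic involved (made-up numbers, and a simplified model rather than Ansible's actual implementation), percentage-based serial values round down to integer batches:

```python
# Simplified sketch of percentage-based rolling-update batch sizing, in the
# spirit of Ansible's `serial`. Not Ansible's actual code; numbers made up.
import math

def batches(num_hosts: int, serial_pct: int) -> list[int]:
    size = max(1, math.floor(num_hosts * serial_pct / 100))
    full, rest = divmod(num_hosts, size)
    return [size] * full + ([rest] if rest else [])

# A 28-host fleet at "10%" gives batches of 2 (2.8 rounded down), i.e.
# 14 waves, noticeably more than the 10 waves the percentage suggests.
print(batches(28, 10))
```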
I
Somewhere, I think we should consider doing 100...
F
Based on how often this has been happening, and Robert showed a chart where deploy times have always been excruciatingly high, yeah, so...
G
This saturation graph is like... although I see something in the git service as well, so...
I
Interesting, during the deploy. So you think it's like all the fleets? Maybe we should just add capacity.
D
Yeah, we have to be careful with these numbers. If we fiddle around with our scaling strategies, like how many machines are added at the same time, we obviously need to think about how that works with database connection pooling and things like that.
D
Same as we saw with pre, when we added real load to it and overloaded the databases there, because they are configured for only 25 connections for registry and Praefect, and the main database in pre only takes 100 max connections. So if you really run QA jobs there, you just run out of DB connections.
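As a sketch of that arithmetic (the 100-connection cap is the pre figure mentioned above; pool size, process counts, and reserved headroom are made up), client-side pool demand has to stay under the server's max_connections:

```python
# Illustrative check: combined client-side pool demand versus the database's
# max_connections. The 100-connection cap is the `pre` figure from the call;
# pool size, process counts, and reserved headroom are hypothetical.
MAX_CONNECTIONS = 100
POOL_PER_PROCESS = 10   # hypothetical Rails DB pool size
RESERVED = 10           # superuser/maintenance connections kept free

def fits(processes: int) -> bool:
    demand = processes * POOL_PER_PROCESS + RESERVED
    return demand <= MAX_CONNECTIONS

print(fits(8))    # True: 90 <= 100
print(fits(12))   # False: scaling up (or adding QA load) exhausts the pool
```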
C
So I'm just thinking: if adding canary back is what it takes to keep things working during a rollout, that means we are out of capacity by five percent at peak. Because now it's peak, so we have the highest amount of traffic coming in, and I think we mentioned that canary is taking five percent of traffic, yeah. This can't be true, because we...
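A minimal sketch of that inference (the five percent canary share is from the call; everything else is illustrative):

```python
# If the main fleet only copes at peak when canary's capacity is added back,
# peak headroom without canary must be smaller than canary's traffic share.
CANARY_SHARE = 0.05          # canary serves ~5% of traffic (from the call)

peak_load = 1.00             # normalized peak traffic (illustrative)
main_fleet_capacity = 0.97   # hypothetical capacity without canary

if main_fleet_capacity < peak_load <= main_fleet_capacity + CANARY_SHARE:
    shortfall = peak_load - main_fleet_capacity
    print(f"short by {shortfall:.0%} at peak, within canary's {CANARY_SHARE:.0%}")
```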
D
It's always changing all the time, right? I mean, we sometimes adjust the API or web fleets for high traffic, and then after traffic grows again we need to adjust again. So it totally depends on where we are with scaling and adjusting to the traffic that we have. There are weeks where we are good with that, and then it slowly starts to get bad again during deployments, and you see how we reach saturation.
D
So, I mean, once you're on Kubernetes that should be better, because we can scale automatically, but on our VM fleet we scale manually, and then we end up with this, or you need to look really closely at saturation metrics and adjust beforehand, which is more or less what we do, right? We look at our saturation metrics during deployments, or when canary is joined, and see: oh, we are getting close, and then we need to adjust again.
C
This was preventing us from starting the rollback, and yeah, we didn't have this documented, so we had to figure out how to handle it, and then we were able to start. And then the second problem was this thing about being able to handle traffic: because we drained canary, as soon as the machines got out of the load balancer the saturation started skyrocketing and we basically had a paging event in the web fleet.
G
Amy, now that you're back: I think I'm going to drop off for my next meeting. This looks good so far, so fingers crossed everything ends up fine as well.
A
So that's quite exciting, right? Because that means that once we've got API running in Kubernetes, the rollback should actually be quite a lot faster than we expected originally, like we're probably looking at around an hour or so. No, we started... when did we start? What time did we start? We started like... no, we started the...
A
Right, okay, nice, so rollbacks would be considerably faster. I actually wasn't expecting that; I was thinking that rollbacks would be slower. I expected them to take about the same time as our production deployment, so around two hours, but to be safer; that was kind of how I had the benefit of rollbacks in my head. So it's great that they are safer and faster.
F
Let's... the word "safer", let's tread carefully on that one, since, well...
A
One thing I realized the other day: what do you think about the rollback pipeline posting a message on the release issue? Do you think it'd be worth it, like useful, to actually... it's...
C
So we had a note in the agenda. I mean, the proposal is that if there is a --rollback parameter, then by default we will set the skip-production-checks reason to "this is a rollback", because, yeah, if you're rolling back we at least have an incident, so the checks would never succeed anyway. So yes.
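A minimal sketch of that proposal (a hypothetical CLI, not the actual release-tools interface): passing --rollback fills in the production-checks override reason automatically:

```python
# Hypothetical sketch of the proposal discussed above: `--rollback` implies
# skipping production checks with a default reason, since a rollback usually
# means an incident is already in progress. Not the real release-tools code.
import argparse

parser = argparse.ArgumentParser(prog="deploy")
parser.add_argument("package")
parser.add_argument("--rollback", action="store_true")
parser.add_argument("--ignore-production-checks", metavar="REASON")

args = parser.parse_args(["2021-05-11", "--rollback"])
if args.rollback and not args.ignore_production_checks:
    # Default the reason so operators don't have to remember it mid-incident.
    args.ignore_production_checks = "this is a rollback"

print(args.ignore_production_checks)
```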
A
Yeah, when we get this epic closed out, we should absolutely do something, because this is a huge milestone.
A
I think it might be worthwhile, right? Certainly it would be good not to have those questions at the beginning, for us to have that documented. And we should consider whether we want to make that improvement to the rollback command, so that we don't have the question about whether a deployment is ongoing. I almost wonder if there should be a step in our process that just says: run the deployment blockers command.
A
So we'd have that information there. It feels like there's definitely a gap in what we need to do so that we have all that information and we know, when it says this: if that happens, ignore that package and take this package. It could just be a docs update.
A
Well, actually, you know what? You joke, Alessio, but it actually wouldn't be impossible. I mean, when I say APAC time I mean kind of late in APAC time, but with the clocks being as they are, we do actually have a time in the day where me, you, Graham and Henry are all quite frequently online, and traffic, presumably, is quite low at that time. So we actually could do one.
A
Yeah, sounds like a plan, because I was actually chatting to Graham this morning, and I realized that when we get him into release management he's potentially going to have quite busy shifts, because I think he gets quite a lot of stuff merged that we'll deploy in that first deployment anyway. I'm stoked for it. Exciting.
C
So, it depends on how you trigger them. I know for sure that they have scheduled pipelines that are running on the production check, so I don't even know if the thing that you started will run a real QA test on master... sorry, on production. Because, if you go on...
C
I'm on the production one, yeah, yeah.
F
We want to roll this package out for at least 30 minutes. QA will give us at least 10 of those minutes or so, and then we can just continue forward with our auto-deploy stuff.
F
I think there's room for improvement as far as node saturation goes, yeah, and I think the steps that start us off on this process need a little bit of refinement, just to make it a little bit easier. I think the dry run was a hindrance, but it still enables us to fully test other things. We just found new, not necessarily blockers, just new obstacles that we need to refine a little bit.
A
Yeah, I agree. The saturation one's really interesting though, isn't it? Because there must be times in our normal day where we get really close to that, and we don't spend a lot of time looking at the difference. You know, we just let canary and production roll on kind of separate schedules, right? So it's unlikely that you would have those things running together or impacting each other, but certainly we could. But yeah, I agree on the...
C
Yeah, I think we started working a lot on that one, and then we reached the point where we were just refining the script and not making progress on: do we really know how to run the pipeline? Do we really know how to handle problems? And then we moved on to that aspect and never looked back, because we were kind of okay.
C
We know how to fix problems, or the quickness of that. Let's see if it really works, so that we can go back and polish it and make this, let's say, stress-safe, because in a stressful situation maybe you overlook some of that information and you roll back to the wrong package or things like that, right? Exactly, yeah. So you don't want that to happen.
A
And we have got an issue that we opened.
A
So does this issue capture everything we want it to have? "Improvements to the rollback check command", which got opened the last time we saw this, so...
A
...is not going to be the package you actually want. I almost think that's maybe the thing... I don't know how much extra complexity it adds, but what if we had something like: if the deployment checks fail, or if they were going to fail, like if there is an upcoming, that is, a new package, being called out, then should we maybe not offer the rollback command or something, and actually say: use the runbook?
C
Yeah, if it has to be wrong, I'd rather have the package name next to everything we show, so that it's up there. Because the thing that I don't like is that if we go back to manual, then there's this extra problem that you have to figure out the package name, which, because it's usually automated, you maybe...
A
...doing this, but yeah, we should. It's those extra steps that are unusual some of the time. So, just to get the logic right on this one: that upcoming line, and I'm going to call it "upcoming" just because that's what's visible, will only display if there's an incomplete deployment. Is that correct?
C
Yeah, there's, I think, an extra thing here that we have to double-check: what happens if it failed. Because I think we show the upcoming, which is the new package, if something is running, or maybe also if it's cancelled; failed or cancelled is not the right status. Now, if it's running, it may be because it's really running or because we cancelled it. Usually we don't cancel, but you know, right? You...
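A sketch of the decision being discussed (status names and rules are assumptions drawn from the conversation, not the actual release-tools behavior): work out which package is actually live before picking a rollback target, and treat "running" with suspicion because a cancelled deployment can still be reported that way:

```python
# Illustrative sketch: choose a rollback target from the latest deployment's
# status. Statuses and rules are assumptions, not release-tools' real logic.
def live_package(previous_pkg: str, upcoming_pkg: str, status: str) -> str:
    if status == "success":
        return upcoming_pkg      # the deployment finished; upcoming is live
    if status in ("failed", "canceled"):
        return previous_pkg      # the fleet (mostly) stayed on the old package
    if status == "running":
        # Ambiguous: really in flight, or cancelled but still marked running.
        raise RuntimeError("deployment looks in flight; check the fleet first")
    raise ValueError(f"unknown status: {status}")

# Roll back to whatever was live before the bad package:
print(live_package("2021-05-10", "2021-05-11", "canceled"))  # 2021-05-10
```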
A
But just to... rather than talking about skipping the checks: is there ever a case where we would... well, I guess, let me phrase this differently. On that rollback command, how do we determine whether we should be using the previous or the upcoming package? Is that just literally a case of us knowing whether the fleet completed running or not?
C
Yeah, because one step of the production change request was: make sure there are no ongoing production deployments.
C
And so that's the point, right? So, I don't know, I mean... maybe I can... let me try showing something. If you don't mind, I can share my screen. So I'm going to quickly show something that I was playing with. It's a local script, so it's not... but it should give you an idea. So let me close this and this and all those things.
C
So this one checks for pipelines in the deployers that are... so, basically, no, this one just checks for release-tools pipelines that are running on tags, and from that it goes to the metadata, omnibus, whatever, and pulls the information out. So it tells you: this one is red, so it's a production deployment for this thing. This one is still marked running here because... it's the one that we cancelled, the 3:20 a.m. one, but disregard the fact that it's still marked here. The point is that we can start, we can...
C
What is this one? Let me just open this up. So basically, with an API call we can get something like this, which is the pipelines for tags on release-tools. Pipelines for tags are deployments, and by navigating the output of this we can try to figure out if something is running as a deployment. But the thing I like more is this one here.
C
So this one shows you every tagged package with a timestamp and gives you links to the auto-deploy branch, omnibus, whatever. And there is this, for instance: this one is waiting to be deployed in the production promotion. Now, this is because we cancelled, and so it's marked as failed; if it's running, it shows you what is running. So this gives you the idea that, starting from the tags on release-tools, we can navigate a lot of information.
C
So maybe we can build on top of this, making it part of our checks, to actually get useful information like: is it really running, and what's the real difference between this package and the next one? Because this does not rely on the environments API and things like that; it just goes from the release tags.
C
Because there are insights in there, right? So if we have a running deployment, then we can check this type of information: we have a running deployment, so we have a tag or whatever, and we ask, is this really running? And we can make the check smarter, or even just bail out and say: something is running, could you please check, with links to where we can check.
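The script itself isn't shown in detail; a minimal sketch of the underlying query, using GitLab's documented pipelines API (the project path and token handling are assumptions for the example), might look like:

```python
# Minimal sketch of the idea above: list running tag pipelines on
# release-tools via GitLab's GET /projects/:id/pipelines API. The project
# path and token handling are assumptions for the example.
import os
import requests

API = "https://gitlab.com/api/v4"
PROJECT = requests.utils.quote("gitlab-org/release-tools", safe="")

resp = requests.get(
    f"{API}/projects/{PROJECT}/pipelines",
    params={"scope": "tags", "status": "running"},
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    timeout=30,
)
resp.raise_for_status()

for pipeline in resp.json():
    # Tag pipelines on release-tools correspond to deployments, so any hit
    # here suggests a deployment may be in flight.
    print(pipeline["ref"], pipeline["status"], pipeline["web_url"])
```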
F
So, because we're going to go with that method, I'm going to go ahead and just tick the box that we waited 30 minutes, because it's going to take us roughly two hours before we'll be able to do our next promotion, and then we're done with this change request.
A
Fantastic, nice work. It's exciting to see this get closed off.
A
Fantastic. What do we want to do about planning next steps, then, for future improvements?
F
...reuse the one that Amy was showing earlier, but I think we do need a minor documentation change to help us. One: I think disabling canary is not prominent in our docs.
F
It's in there, it's just not a prominent step, you know. And then the other thing was determining whether or not a production deploy is happening; I think that determination is missing from our documentation. We...
A
Cool. So, Alessia, can I let you do something with issue 1699 and work out what is feasible or what we want to do there, like whether we split it up or what we can do, and then...