From YouTube: 2021-04-06 Delivery team weekly rollbacks demo
B
Well, in theory, Europe was supposed to remove it this year. I think this was supposed to be the last time, but I think that with the COVID and pandemic situation they never prepared the law. So I don't expect this to be the last time we will switch. Hello, Henry.
A
Okay, awesome. So last Friday I think I tested the scenario of doing a rollback when the servers are in main, which is very similar to what we did last Thursday, when we tested with the servers in drain. The rollback was successful, and the nodes were still in main after the rollback finished. So, yay.
A
The only action item, or a slight inconvenience, is that for some reason there was an environment running from the day before. So when executing the ChatOps command to check if a rollback was feasible in staging, it returned something like "it is not possible" or "it is unsafe to roll back, because there is a deployment in progress".
A
At that moment I was not sure what I should do, so I just continued with the rollback. But I guess this is a situation that can come up again, and I can see that there is already a fix by Robert: our tooling should only consider a deployment that is running if it is the first one.
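A minimal sketch of the kind of check being discussed here, assuming a made-up Deployment record and the rule that only the most recently started deployment can block a rollback (the actual ChatOps and release-tools code is not shown in this meeting):

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Optional

    @dataclass
    class Deployment:
        environment: str
        status: str          # "running", "success", "failed", ...
        started_at: datetime

    def blocking_deployment(deployments: List[Deployment]) -> Optional[Deployment]:
        """Return the deployment that should block a rollback, if any.

        Only the most recently started deployment is considered, so a stale
        "running" record left over from the day before no longer makes the
        rollback check report that it is unsafe."""
        if not deployments:
            return None
        latest = max(deployments, key=lambda d: d.started_at)
        return latest if latest.status == "running" else None

    # Example: a stale deployment from yesterday plus a finished one from today.
    now = datetime(2021, 4, 6, 10, 0)
    history = [
        Deployment("gstg", "running", now - timedelta(days=1)),   # stale
        Deployment("gstg", "success", now - timedelta(hours=2)),
    ]
    print(blocking_deployment(history))   # None, so the rollback is considered safe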
B
Yeah, so I was looking at what happened, and we have another way of checking this which is not implemented in ChatOps; it is implemented only in release-tools. Yay, duplicated code base, by the way. The thing is, that one is for production only, but we can do the same for staging: we have a deployment-in-progress check based on the Chef server, so we can check our role on Chef and see if the environment is locked, which is, let's say, more accurate.
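A rough sketch of what such a lock-based check could look like; the attribute names and the shape of the Chef data are placeholders for illustration, not the actual release-tools implementation:

    def environment_locked(chef_environment: dict) -> bool:
        """Decide whether an environment is locked for deployments.

        `chef_environment` stands in for whatever the Chef server returns for
        the role or environment; the "deploy_lock" attribute name is made up.
        """
        overrides = chef_environment.get("override_attributes", {})
        return bool(overrides.get("omnibus-gitlab", {}).get("deploy_lock", False))

    def rollback_allowed(chef_environment: dict) -> bool:
        # A locked environment means a deployment is (or may be) in progress,
        # which is a stronger signal than scanning recent pipelines.
        return not environment_locked(chef_environment)

    # Examples with fake Chef environment payloads:
    print(rollback_allowed({"override_attributes": {"omnibus-gitlab": {"deploy_lock": True}}}))   # False
    print(rollback_allowed({"override_attributes": {}}))                                          # True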
B
Yeah, in any case, usually a release manager should be well aware of whether there is an ongoing production deployment or not. Staging can be kind of... but yeah, for production, usually. We know that someone else can run it, but I mean, this is not the goal for the first iteration, so the check is more than enough. Yeah, you're right. Okay, so I have a point based on, basically, the Slack message that you linked, Mayra. Terrible, sorry for the format, yeah.
B
This kind of... I mean, I have the black dark mode in Slack and... yeah, yeah, I have to remove the formatting. But the thing is, I think that you wrote a message saying that there was some weird information about what the current package is and what the previous package is. Did we, and by "we" I mean either you or Robert, figure out what happened there?
A
Yeah, so it was a tricky situation... not tricky, it was confusing, because I assumed we were using the newest pipeline, but we were not using that one, because it had a broken build. So we never tagged from that pipeline; we were always using the previous one, and in the previous one the commits were actually kind of right, while in the newest one they were like backwards.
A
So what we did to solve it is that, I think, Robert picked a commit into the newest pipeline, so we built from that one, but it also failed. So I think we never used that auto-deploy branch.
B
So that's the thing, right? These are the commit numbers, and if I go back to here, basically current is 1c3 and previous is dcf.
A
So I think that was the reason why we were not being tagged on that branch.
B
Well, maybe I'm thinking out loud here, but if there are no changes, even in Omnibus or CNG, even if it's in a new auto-deploy branch, we should not tag, because when we get into the tagging business we just take care of the commit SHAs, right? And there are no changes here and there.
B
Okay, so no, I mean, I'm fine. What I'm saying here is that these are two different packages, but because we track deployments by project, they may look the same. The real package that we installed here and the one that we installed here share the same GitLab SHA, but the Omnibus or CNG SHA must be different; otherwise we would not have tagged again.
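A small sketch of the comparison being made here: two packages can share the GitLab commit and still be different, because the package identity also depends on the Omnibus (or CNG) commit. The field names are illustrative, not the real metadata schema:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PackageVersion:
        gitlab_sha: str    # gitlab commit baked into the package
        omnibus_sha: str   # omnibus-gitlab (or CNG) commit used to build it

    def same_package(a: PackageVersion, b: PackageVersion) -> bool:
        # Tracking deployments only by the GitLab project makes these two look
        # identical; comparing every component shows they are different
        # packages, which is why a new tag was created.
        return a.gitlab_sha == b.gitlab_sha and a.omnibus_sha == b.omnibus_sha

    current = PackageVersion(gitlab_sha="1c3", omnibus_sha="aaa")
    previous = PackageVersion(gitlab_sha="1c3", omnibus_sha="bbb")
    print(current.gitlab_sha == previous.gitlab_sha)   # True
    print(same_package(current, previous))             # False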
B
Yeah, so this one, because then it's failed, success, and it should also be here, right: failed, success. So the previous one here has the same commit here, but it's not the same commit here, so yeah.
B
So there was a diff in the Omnibus changes, basically; that's why we deployed. Okay. That being said, how was it possible for those things to be in...
B
This red pipeline makes no sense here, but basically what I'm thinking is that this one was red, so for some reason... we should check... oh, we don't. No, we do check for pipeline status here. But we have the new... so the new code that we have says that if there is a green master pipeline, then we will pick that commit.
B
Sorry, not red, a green master pipeline. So we picked this, then four hours later we created a new branch. So this is... this is a cherry-pick, so we don't care. This is not cherry-picks. So back then...
B
The original one was... and then for some reason... oh, because we have a scheduled pipeline on master, because we run the full... So it's not something that we... I mean, Engineering Productivity run a scheduled pipeline on master, so on the latest master they just run a full pipeline, every... I don't know how many hours. So do we...
B
And what happened on top of that? What happened is that we ended up picking even further back, because for some reason... So this is the 12 branch, right, and this pipeline is cancelled. So what I'm thinking happened here is that at a certain point we were looking for the green pipeline, we were not able to find it, and so we went back.
B
Yeah, definitely. So what happened here is that for some reason this pipeline was cancelled, so when the tagging happened again, instead of saying "this was cancelled, let me check if it was green on master", which is what happened on the first tag, it said "this is cancelled, so let me go back in the history", found another green master pipeline, and it reached for this one here. So we actually rolled it back.
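A simplified sketch of the commit-picking behaviour just described. The real logic lives in the release tooling; here the statuses and the walk back through history are modelled only to show how a cancelled pipeline can make the picker land on an older commit:

    from typing import List, Optional, Tuple

    # Each entry is (commit_sha, pipeline_status), newest commit first.
    History = List[Tuple[str, str]]

    def pick_commit(history: History) -> Optional[str]:
        """Pick the newest commit whose pipeline is green ("success").

        If the newest pipeline was cancelled, this walks back through history
        until it finds a green one, which is how the second tagging ended up
        selecting an older commit than the one already deployed."""
        for sha, status in history:
            if status == "success":
                return sha
        return None

    history = [
        ("aaa111", "canceled"),   # newest pipeline, cancelled for some reason
        ("bbb222", "failed"),
        ("ccc333", "success"),    # the picker falls back to this older commit
    ]
    print(pick_commit(history))   # ccc333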
B
I'm quite sure we disabled this behavior for both master and auto-deploy branches.
A
I think it was automatically cancelled. Oh, I just pasted the conversation between me and Robert trying to figure out what happened with the packages, and the last message is that he picked a broken-spec fix into that auto-deploy branch.
B
Well, I think we can test this, right? We just watch when it's running something we pick... yeah, we can, and we will see. Because for sure, I mean, we should have seen more cancelled pipelines if this was automatic, because we run the pick in an automated way. I mean, it really depends, because maybe it was already green before, but we should have seen this more often.
B
Building something more easily consumable from our failure-scenario issue, some kind of, let's say, executive summary out of it, so that we can double-check if we are missing something. I think we tested everything we want, every failure we want to test, and where we didn't, we discussed it and we postponed it. But I think we are...
B
We are there. So I'll take time for doing this, and then we can see if there is something obvious missing. Otherwise we can move forward to the dry run in production and, at the same time, write our... I mean, it's already there, so I'm just rewriting or finishing the production change issue so that we have the green light for testing it.
A
What would be the scenarios for testing in production? I understand the first one is going to be a dry run, so I guess we don't need to prepare a package for it, because it will be a dry run. So when we are testing without a dry run, should we prepare a package, like a fake one, with some simple change to roll back?
B
That's a good question. The problem with creating a package is that it involves kind of perfect timing around it. So it's really... well, it's really hard.
A
Yeah, I was thinking of the downsides of doing that. I guess the downside is that it is kind of a fake package, so it is not proving that we can roll back. But on the other side, if we attempt to roll back a real production deployment, and that one has migrations and post-deployment migrations, then we are not going to be able to do it.
A
Yeah, I mean, I guess preparing a package is kind of difficult because we need to be organized, but I guess it is also not that extremely complicated. We can just merge a merge request commenting something on a model, and that will trigger a complete package, and then just cherry-pick it and try to move it forward.
B
The point is, that is true, but on one side you may end up not being able to roll back because of the content of a package, and on the other side, adding up build time, deployment time and rollback time, it just gives you a test scenario that runs for more than one day, basically. And that is assuming there are no production incidents and you deployed something where you're ready to pick a change on top of it. So it's kind of, I don't know, right?
B
So, I mean, we can test this, or we can just say: this is the script we want to run, and we will try to run this change request until we find a good package. So we are okay if it's not today; it's gonna be tomorrow, or, I mean, worst case we move to the next week, because otherwise it could kind of take forever.
A
Yeah, that could work, but since we are getting closer and closer to the 22nd, the closer we are, the more changes are going to be introduced. So it is going to be difficult to wait for a precise moment.
B
Yeah, maybe something we could do... no, I'm thinking about it, because the problem is that you want to roll back; that's the real problem. Rolling back means that you deploy something that is trivial, like commenting something, and then you go back to the previous state without a new auto-deploy branch getting created, without... well, it just takes hours.
B
Yeah, but Mayra had a point here, which is that we are getting closer to the 22nd, so it means that more things will be merged. Usually things involving big migrations tend to land closer to the 22nd, because they get reviewed more and more, and then they feel the pressure that we are getting closer to the 22nd, so they get merged regardless of the status. No, man, this is not true, but yeah, the point is that we usually merge them closer to the 22nd. So...
B
So, Mayra, maybe we can do something like this: we prepare it and we test it for now, right? So we prepare something like this, commenting something, and we have that merge request ready on security so that we can pick it.
B
And we start picking it, right? We don't have to merge it on master, we just pick it, because otherwise we get a broken state and things like that. We pick it and see if it really creates and tags a new package, and things like that, so we know that picking something directly on security, this type of change, just gives us a complete full package, and then we know this is something possible.
B
Then, when we have a due date, we start picking this into every package, and I mean not necessarily every package, just to make sure that we can still pick the change, right? So that, in theory, we have a deploy and then we will have another deploy that has the fake change. So when we decide that this is the day we're gonna do the rollback, we can deploy once, stop the auto-deploy branch creation, deploy the fake package, and be ready for the rollback.
F
So, a question for all of you, in case you considered it before but I might have not followed: one of the problems that I hear you describing here is that you mention confidence, you mention things breaking, it takes time, and so on. All of those are items that are forcing you to kind of lose confidence in the steps.
F
So before I leave, maybe the point I'm trying to make here is: you're trying to bite off a huge piece of the mountain. Instead, try to find a very small part of it and gain confidence there. And if the answer ends up being "Kubernetes is going to make this super easy for us, faster, better, blah blah blah", the answer might be: well, you know what, we're just going to focus on the items that are not going to be migrated to Kubernetes anytime soon, and everything else we're just going to hold off on, everything else being web, API...
F
No, no, and all of that stuff that we know we're gonna migrate. Sorry, I need to leave for the incident; just let me know in the delivery channel if that was just a no-go right from the start, or if you already considered it.
B
Yeah, so because you cancel that, the deployment will go on until the post-deployment migrations, so we wait for the deployment to happen. So we are basically running in a kind of situation that is like canary from the database perspective: it is like running canary, because you have new code and the old schema, at least regarding post-deployment migrations. So this means no post-deployment migrations, so no background jobs.
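A tiny sketch of the condition being described: as long as post-deployment (and therefore background) migrations have not run, the fleet is effectively on new code with the old schema, so rolling the code back remains an option. The names are illustrative:

    def rollback_still_possible(post_deploy_migrations_ran: bool,
                                background_migrations_ran: bool) -> bool:
        """New code plus old schema (a canary-like state from the database's
        point of view) can still be rolled back; once post-deployment or
        background migrations have changed the schema or data, a plain code
        rollback is no longer safe."""
        return not (post_deploy_migrations_ran or background_migrations_ran)

    # Deployment cancelled right before the post-deployment migration step:
    print(rollback_still_possible(False, False))   # True
    # Post-deployment migrations already executed:
    print(rollback_still_possible(True, False))    # False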
B
The reason why we wait is because then we are in kind of a clean state, right, because every piece of the infrastructure reached the final step. If we want to make it shorter, we could cancel even earlier. So, say we just want to run only the API; then we cancel everything else. I mean, I would probably consider doing that, but it is not a real rollback, right, because basically you're just rolling back just one service.
E
Aren't we doing deployments only at specific days and times? So, I mean, for the rollback test in production I would really try to aim for a time where we don't usually do other deployments, right, because that gives us much more time to prepare the package and do any kind of testing and rolling back and forward.
B
We are in full control of the production deployment time, because it's up to the release managers, so it's gonna be us, as a team.
B
Thank you, Skarbek, for your answer to Marin's question. I do agree, and I also want to double down on this: something like mailroom is not even covered in our deployment and rollback pipeline. So it's kind of: you change the version in the Kubernetes deployment and then it happens whenever it's run. So, I mean, it is not covered by the goals of this OKR here, so yeah. Thank you for taking the...
A
Yeah, I mean, in my head cancelling a deployment sounds dangerous, even if it is just the post-deployment job, but technically it should work.
D
The tracking job doesn't run, that kind of thing. So I want to make sure that we've got all of that very well documented in whatever procedure we're going to...
D
I think your production change issue is a very logical place to put this, and that could be influenced by our runbooks and vice versa: whatever learnings we get from building this procedure, we could feed back into our runbooks as well.
B
So I'm thinking that we should consider testing this in our dry-run idea. What I'm expecting out of this is that we plan this with the release managers. We say: next Tuesday, or whatever, we are going to attempt this, so please, if you start something in the afternoon, just cancel the post-deployment migrations, so that when it's time we can run a dry-run rollback. And then we just replay the post-deployment migrations afterwards, because in that case there's no real rollback.
B
Okay, so I will try to summarize this in the issue. Skarbek, do you mind writing the answer for Marin in Slack?
D
I think, maybe better, I'll put this in our issue associated with this. That way we can continue the conversation outside of Slack, just to make it better. Okay, I'll do that. Yeah.