From YouTube: FCL 6072 Closing Ceremony
A: All right, here we go. So this is the closing ceremony for FCL 6072. For those who are watching the call, there's a section at the top of the agenda with items that were completed.
A: Let's start with what went well. For the first item: first of all, I thought the response from the team to the FCL was excellent. It took around half a day to create corrective actions, and after that people were assigned to the actions really quickly. Even though there was some disruption from people going on PTO and so on, nothing stalled because of it, which is great, and everyone was a great DRI for their items.
B: I would say the way the issue was handled was very clear. The actions were very clear, and it was very easy to choose something to pick up. Also, having everything in one issue made it very transparent and very easy to follow, even for a first time working on an FCL.
D: Right, so like Algina said, I agree the tasks were really clear, but I think we just need some kind of priority across all the tasks. I just picked some stuff at random. That's it from my side; it's the only feedback I have on what didn't go well.
A: Yeah, in retrospect you could put things in bold if they would be really effective corrective actions, but at the start of the FCL it wasn't immediately clear which of the items would be most effective, so you're dealing with limited information. But that's a really good point. And it's thanks to Charlie that we had corrective actions so quickly; she drafted the corrective actions almost immediately after the incident.
A: As we decided, we wouldn't do them. On the other hand, it's a five-day process, so it's quite short. From my side, the one thing that didn't go well was that the main factor that would have actually prevented this happening again in the future got blocked by discussions: that's structured logging.
A: It was kind of doubly bad, because it was both the core action to reduce the time to resolution and the best chance of preventing it happening again, so it's really important that we make some progress on it. That said, we have a small action we can take.
A: That would help, but it was only proposed today: we make it so that authors of MRs with data migrations take responsibility for doing a dry run using the replica, and that they include in the description how the migration could be reversed.
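(A minimal sketch of what such a dry run could look like, assuming a hypothetical REPLICA_DSN environment variable pointing at a read replica; the epics table, columns, and SQL are invented for illustration and are not the actual migration discussed here.)

```python
# Illustrative sketch only: count the rows a data migration would touch by
# running its WHERE clause against a read replica before the MR is merged.
# REPLICA_DSN, the epics table, and the column names are assumptions.
import os

import psycopg2

DRY_RUN_SQL = """
SELECT count(*)
FROM epics
WHERE parent_id IS NOT NULL;
"""

def dry_run() -> int:
    with psycopg2.connect(os.environ["REPLICA_DSN"]) as conn:
        with conn.cursor() as cur:
            cur.execute(DRY_RUN_SQL)
            (affected,) = cur.fetchone()
    return affected

if __name__ == "__main__":
    print(f"Rows the migration would affect: {dry_run()}")
```

(The resulting count, together with a short reversal plan, is what would go into the MR description.)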
A: Yeah, they're not as satisfying as having an automated process to actually do that for you, but if we wait for that, I think we'll be waiting a long time.
D: What makes it super hard is that you want to improve the tooling right away, but most of the time that's not in your hands. We had another FCL, for example, in the Italy team some time ago, and the only thing the K2D team itself did back then was a merge request that was a three-line change in total. Everything else sat with the source code team, which was consuming the service: changing the process, asking quality engineering to ramp up some testing on staging, and so on.
D: We would need this because we don't want to bang our heads on the table in three months, when we hit exactly the same problem again and go: okay, we already knew at the last FCL what we should have done. So I think that's the most important topic of this meeting: simply creating clarity on where we want to go.
A: Yeah, and to that point, the discussion with the database team involved some debate amongst that team themselves about the way to actually do this. There were concerns that logging in the database would create too many transactions on the primary, which is true, but also that logging to the file system would create similar issues and bring in the possibility of exposing personally identifying information of customers to, what's the phrase, people with read access to data. So even then it's not straightforward.
A: In all likelihood, asking the MR author to suggest in the MR description how the MR could be rolled back in the event of an emergency would cause the author to consider the impact of the MR more closely, and so you could say that the probability of this recurring would be small. It's just not as satisfying as having a logging process that puts it right in front of you and says: here's a dry run, here are the records that would have been affected, does this look right?
A: And when we look at this and run a thought experiment, asking what this incident would look like if it happened again with the corrective actions that we took, we can't point at that and say for sure that we would have shaved seven hours off the process.
D: But at least adding it to the process is, to some extent, a sort of rubber duck debugging, because the person will simply have to think: okay, what is my mitigation timeline? What are the steps, the plain instructions? How can I go back in time if this goes wrong in production, or whatever is happening?
D: There might also be more cases in the future where we are running migrations and run into problems for whatever reason, and we need to be able to identify where things went wrong as soon as possible, without literally getting that person out of bed.
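(One way to force exactly that thinking is to write the rollback as an explicit down step next to the up step. The sketch below assumes a hypothetical hand-rolled migration class with invented table and column names, and only prints the SQL it would run.)

```python
# Illustrative sketch only: a data migration with an explicit rollback (down)
# step, so the author spells out "how do I go back" before merging. The
# Connection stand-in, table, and column names are all invented.
class Connection:
    """Stand-in for a real database connection; it only prints the SQL."""

    def execute(self, sql: str) -> None:
        print(f"would execute: {' '.join(sql.split())}")


class BackfillEpicParents:
    """Nulls out epics.parent_id, keeping a snapshot so the change is reversible."""

    def up(self, conn: Connection) -> None:
        # Snapshot the rows about to change so `down` can restore them.
        conn.execute(
            "CREATE TABLE epic_parent_backup AS "
            "SELECT id, parent_id FROM epics WHERE parent_id IS NOT NULL"
        )
        conn.execute("UPDATE epics SET parent_id = NULL WHERE parent_id IS NOT NULL")

    def down(self, conn: Connection) -> None:
        # The mitigation plan, in code: restore from the snapshot taken in `up`.
        conn.execute(
            "UPDATE epics SET parent_id = b.parent_id "
            "FROM epic_parent_backup AS b WHERE epics.id = b.id"
        )
        conn.execute("DROP TABLE epic_parent_backup")


if __name__ == "__main__":
    conn = Connection()
    migration = BackfillEpicParents()
    migration.up(conn)    # prints the SQL it would run
    migration.down(conn)  # and the SQL that would undo it
```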
C: You can even run data migrations against the clone from within the MR; there is a dedicated job that you have to run manually. But other than that, I don't know if we would give all engineers access to the clone, in the sense of being able to run the migrations locally. I'm not sure if that's something we can do, or should do.
C: I'm not sure how easy it is to query to know how the data is affected, right. You basically need to know what to query for to figure out what the effects are, whereas when you actually run it, I guess you can see more of this in logging and so on. But then again, running the entire migration depends on the migration itself; running on the PG clone is basically the same thing as running on production, since it's just a clone of production, and it depends on the size of the migration and so on.
C: If on production it would take days, on the Neo4j MR it maybe takes weeks, just because the PG clone runs on a smaller instance and so on. So there are ways to do it. Again, as you said, it's probably for the MR's author to decide, to weigh the pros and cons of doing so.
A: Yeah, okay, I want to move on just because we're up against time, but in that case what could be interesting is to make it the MR author's duty to decide what query to run in DB Lab, because in this case you didn't need to run the whole migration to know how many records it would have affected. You could just run the query from the migration and find out, and that might have given pause at the point when 25,000 epics were involved.
A: Okay, let's jump to the next action, so number three: what can be improved about the FCL process?
A: So for me there was a bit of confusion at the start, and it may have been caused by me, but I got messages from some of the front-end team wondering whether, because it was a feature change lock, they could not do any work on epics for the duration of the FCL. Maybe we could make that clearer. I didn't see anything in the handbook, though; I think that might just be leftover information from an earlier iteration of the FCL.
A: Maybe we should make it explicit in the handbook to keep interviews as scheduled, so we don't affect the candidate experience. But otherwise the handbook's pretty explicit: it's the MR's author, plus their manager, plus all of that manager's reports. So regardless of a headcount reset, I think that's pretty clear in the handbook.
A: The remaining thing would just be reviews, then. What do you do with the reviews you have in progress? Because if you stop them, you stall other teams' work as well. The action we took was just to run down reviews that had already started, but to suggest not taking on further reviews and delegating them to other people instead.
D: In my opinion, yes, keep the interviews running, especially as they are already scheduled; they are hard enough to schedule anyhow. And, on the other hand, keep the headcount reset participating as well. I think that's totally fine, as we have the team, and yeah, reduce the workload. It's really about creating a space to focus, to stop time and figure out what we can do to prevent this in the future.
D: This is the tool for it, because in the past, yes, there was an incident, and it was "maybe we should do this, we should do that", and then another thing is happening ten minutes later and everyone's focus has already moved on. So this is basically about creating a little bit of space.
E: Yeah, my clarification would be: it's really about focusing on reliability, but still business as usual. So it's more about less feature development work and more focus on reliability, and that reliability work could be anything. There are a couple of extreme versions of this. The extreme version that's counter-intuitive would be that we focus on fixing a bunch of reliability issues that we view as more important than even the RCA action items, as an example, just because we view them as higher priority.
E: That would still be a huge amount of work, but let's say we had something that we knew was an even bigger problem, affecting 50% of our customer base and more likely to occur; then focusing on that work might make more sense in this situation. So we give it a fair amount of latitude. Definitely keep interviews scheduled.
E: You know, that's just the normal course of business, so I would view that aspect accordingly. As far as reviews go, I think maintainership is a key component, so I would say keeping other things moving along as part of the normal workflow makes sense as well: having maintainers still be able to do reviews, and people who are trying to become maintainers also doing reviews. And then, as far as the reset goes, that's an interesting one.
E: I was originally leaning the opposite way. By the way, for headcount resets there's a new term where we use "borrow", but if they're on a borrow, I would actually say that they continue on the borrow rather than trying to pull them back, because I just feel like that's a bunch of back and forth, management-wise.
E: I'm not super strongly opinionated on that one. I'm just thinking in terms of: usually, if there's a borrow, there's already been a prioritization call. So in that particular case, it feels like the borrow continues while the FCL is running in parallel, and the existing team kind of does it.
E: You know, just following policy consistently from our perspective.
E: They should be abstracted away from the problem in that regard. It's just more a question of: do you stop their work and reassign it to this activity? That feels like, basically, with a borrow you're in a temporary alignment situation where you're having people focus on potentially other teams. So in that situation, I feel like they...
E: Well, definitely for the RCA, that's definitely what you want, right, from that perspective. And that's the other thing: the RCA and FCL are complementary things. For the RCA we should definitely be jumping on the root cause and fixing the problems associated with the root cause; the FCL is all about reliability, and the two intermingle. That's where people sometimes get confused: they think, oh, I've got to complete the RCA before the FCL is done. No, the FCL has a very fixed amount of time associated with it.
E: To make a proposal, one other proposal would be to leave it to the manager to decide, so manager's discretion. I could be convinced of using that approach at this point. I know oftentimes we like to be definitive, but in this particular case it feels like judgment may be necessary.
A: Okay, thanks, very good, Felipe. The next item here, before we move on to looking at the timeline.
A: So maybe that's one thing we should have done differently. Yeah, it's a difficult one, because you could spend a decent portion of your day on reviews, which might leave limited amounts of time to work on the FCL, but on the other hand, by withdrawing people from reviews, we make it more likely for other reliability issues to crop up.
A: Okay, for the final section, we have about seven minutes left, so let's take a look at the whole incident, go through quickly how long each phase took, and then see, with the actions we've taken currently and the actions we're proposing to take going forward, how the incident would have been different.
A: So I calculated that the time to declare the incident from the merging of the MR was 11 hours. Although we can't know exactly when the migration started, we know it lasted two and a half...
A: No, half an hour I think, or 30 minutes. So with the current actions, data loss is now an instant severity one, so it's likely that that would reduce the time required to declare an incident. With all actions, I don't think there are any further actions that would reduce the time taken to declare an incident.
A: Jake, did you want to come in on that? I thought you mentioned something, but maybe I missed it.
F: So there was a gap, and I didn't get the exact numbers, but I think it was about six hours, where, when I first opened the issue and noticed that it was possibly data loss, had I declared an S1 there, the overall time to resolution probably would have been faster, because we would have had more people involved earlier. So I think your corrective action of any possible data loss being an instant S1 should fix that for the future.
A: Yeah, and full disclosure: I promoted it to S1 but didn't declare an incident, so I should have declared an incident as well. I handed it over to Charlie, I think at 1:30 a.m. my time, so it was Charlie then that declared it an incident, which was the right call, and it was subsequently downgraded to an S3 mistakenly. So I think with this guidance that shouldn't happen again: it should be an S1 until such time as we find that customer data is not affected, and only then would it be downgraded.
A: So the 11-hour chunk at the beginning, well, not the beginning of the incident but the beginning of the actual effect on customers, where it took us 11 hours to declare an incident, would be different. Time to diagnosis is hard to calculate, because we took so long to declare an incident that we had already diagnosed it pretty much at that point; after that it was around one minute. That's it, yeah. Like I said, the incident was already declared, and Harsh had suggested the offending MR at 23:20.
A: So around two hours before the incident was declared, if I recall correctly. So you could say time to diagnosis was probably in the region of one hour. With current actions, nothing would particularly help the time to diagnosis. With all actions, that is, those that are longer running than the FCL, publishing all migrations to Slack and requiring the author to state what could go wrong would likely reduce this. So, as an incident manager, you would have all the Slack notifications of what migrations had been run.
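(A minimal sketch of what publishing a migration to Slack could look like, assuming a hypothetical incoming-webhook URL in a SLACK_WEBHOOK_URL environment variable; the function, migration name, and MR link are invented for illustration, not the team's actual tooling.)

```python
# Illustrative sketch only: post a "data migration ran" notification to a Slack
# incoming webhook. SLACK_WEBHOOK_URL and all payload fields are assumptions.
import json
import os
import urllib.request

def notify_migration(name: str, mr_url: str, rollback_note: str) -> None:
    payload = {
        "text": (
            f"Data migration `{name}` was executed.\n"
            f"MR: {mr_url}\n"
            f"Rollback plan: {rollback_note}"
        )
    }
    request = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    notify_migration(
        name="backfill_epic_parent_ids",
        mr_url="https://example.com/merge_requests/12345",
        rollback_note="restore parent_id values from the pre-migration snapshot table",
    )
```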
A: You'd also have links to those MRs, and in the MRs you'd have a description from the author about what could happen if they went wrong and how to reverse them. So it's likely that it would be much faster. Anyway, Jake, I don't know if there's anything to add there; you were present during the diagnosis.
F: No, I don't think so. I mean, I think the main thing was identifying that MR and then being able to work through it. So beyond that, I don't think there's anything else.
A: All right, and then finally, time to mitigation was seven hours for 99% of the data and seven hours for the remaining one percent of the data, that classic Pareto distribution. And even then, in those final seven hours, I would argue we didn't recover all the data. We recovered the important data, but I think we still lost some data around the positioning of issues within MRs.
A: We couldn't recover that. So with the current actions that have already been taken, having a team of reviewers would help, and having the higher severity in advance probably would as well, but the key thing that would help with this mitigation time would be not having to recover data from system notes at all, but having a log from which to recover data, and that part is still in debate with the database team.
A: So I would be hopeful about this, though; it's not like I just put it in a backlog item. We can actually advocate for this and get it done fairly quickly, I think, with the database team. Any other thoughts on time to mitigation?
A: Cool. If you're interested in that specific item, it's linked at the top, by the way, and it's still very much a live discussion; it's just that with the amount of PTO at the minute, it's hard to get it moved forward very quickly. I think the next step is just to open an issue for the database team, or even an MR, to propose the change in guidance first, and then we can solve the longer-running question of how we automate logging at a future date.
D: Yeah, I'm doing the easy job and the easy part: thank you, everybody, for doing a really good job being involved in the FCL and in all the action items, and especially for really trying to push for the best solution forward, so that we have something where we don't run into such problems anymore and improve that by a lot. So thanks, everybody, and on top of that, John, thanks for facilitating this one. And on top of that, happy holidays, everyone.
E: Yeah, and just as well from my perspective, thanks, everyone, for the hard work; I really appreciate it. I agree with all the points made, and this particular RCA was well executed. Thanks, everyone.