From YouTube: 2020-08-24 Delivery team weekly

A
Back. Awesome, so.

A
So, a few things going on today. First up: Amira and Uric, he's having a well-deserved day off after completing release management duties. And also, huge thanks to everyone; I think you've all been around helping me today and having to get involved in things, so thank you.

A
Today's gone very smoothly as a result, so much appreciated. To help me recover from my first week of release management, next week I'll be on PTO, so Marion will be back and he'll be covering my release management stuff. And this week we've got Scarbeck, thank you, covering for Robert, so Robert will be back as well next Monday. But yeah, if you need anything generally day to day, obviously Marion will be around to help you with that stuff.

A
So then, I saw that there's a trial going on for the new discretionary bonus process, so just a heads up on that one. At the moment it's running alongside the Bamboo process, and then if this one's hugely more successful, the Bamboo one will be removed. So just a heads up on that. As for discussion points, I can put an issue together.

A
Yeah, that's right, yeah. So I think, I mean, we've got Job doing a great job of putting one together, a kind of "one year of Kubernetes", sort of looking back on what's changed and what we learned as we went, and all that stuff, it seems.

A
Like, I mean, I was chatting to someone earlier, and even in the time I've been here: when I joined we had just moved away from auto-deploying once a week or a couple of times a week to daily, and now it's quite straightforward to get out two or three a day, to be honest, provided there are no incidents.

A
Cool, great. Should we open up an issue, and then maybe what we could do is just try and drop in some of the milestones we saw, and any lessons learned or gotchas that we came across along the way? Then everyone can kind of just contribute without having to write a full post, and then we can edit that up into a post if we have enough stuff. Awesome, sounds good.

A
Cool, so a few things I wanted to cover. I've been updating things around this new deployment blocker process. The thinking behind this was really just to have one super simple way that all deployments can be blocked: by ourselves, by a developer if they find a regression, and also by Quality if they have an issue going on. So first of all, a request: if you come across stuff, hopefully everything in that documentation, the QA issue, and all of these things now just links out to this handbook page.

A
If you come across anything, then please update it or let me know where it is, so we can get everything consistent and then just maintain this one place. But then the second bit: over the last few weeks there have obviously been a couple of times where we've had some incidents. I think, Jeff, last week it was you who opened one over a staging deployment, and then Myra, you had one on Friday around staging and canary.

C
I'll start: I don't like it very much, but I think it's good for getting attention on things. We've recently had to do this a few times with migration problems, and often they are real issues, I would say at least 50% of the time. They are real issues that need to be fixed, either by reverting the change or, you know, something else. In some cases we have to manually go into the database and make a change or something like that.

C
So I don't know, I guess I'm okay with it now. You know, maybe it should be an S3. I guess one of the problems with it being an S2 is that for all other S2 incidents the on-call needs to join the incident room, and you're heavily involved with the SRE on-call. In all the cases I've been on, the SRE on-call is like, "do I need to jump on this?" and I'm like, "no, we're..."

A
I think this one will probably be a bit of a case of us just getting comfortable with it. The process comes from Brent, so it's Brent's request that we raise S2s on this stuff, and that's really because in the worst case this is absolutely an S2. The worst case is that we end up in a situation where we need to deploy something because of site stability and we have a blocked pipeline. So that's kind of the reasoning behind it, yeah.

A
I definitely agree there will be times where it feels extreme, but I think what worked well in the one I saw you do, Job, was having it assigned to you. I think it means that if people feel like we've got it under control, they'll probably be fine with that. But hopefully, as we go on, it will feel less extreme.

A
Cool. And I think it will be a case of, it feels a bit dramatic, but keep in mind: although tests failing may not necessarily be an incident, we could end up in a situation where we're causing an incident, or making an existing one way worse, by not having a pipeline ready to go. So, cool.

A
So here we go: if you come across times where you really think it wasn't helpful to have this, then shout and we can review things. For now, we've kind of gone for the simplest process we could have in place that captures all of these things, so we'll see how we go. But I do feel like, actually, when I chatted to Quality about this, they really liked the idea; they really liked the fact that they have a kind of, you know, a trigger.

D
I do have a question about those incidents. For example, the one last Friday was caused by some CSS changes, so creating an S2 incident about it, yes, I also felt a bit like this might not be necessary, because the SRE on-call was not needed there. So when it comes to those incidents, do we also need to have a retrospective, or, once it's mitigated because the merge request was merged, can we just close it?

A
I think that one on Friday, yeah, is a bit of a question. If that one on Friday had literally just been a CSS issue, then yeah, I think we can just close it. The only reason I haven't, and I think we might want to ask specifically about that one, Myra, is just because of the infrastructure issue that Stan fixed in amongst that; he restarted something, so there was a slight additional element to that.

A
It's probably a little less clear to me, but yeah. I think if it's a case of "there was a bug", particularly if we find it on staging, that's the absolute best case, right? There's a regression, it gets caught somehow, an incident gets raised, which means it never gets to canary.

A
But there might be other times where we do want to take some action. I think we saw a few times last week, for example, things failing silently; if we came across something like that, then that might be a useful corrective action for us. There could be something that Quality wants to take from it, like, you know, we were missing some automation or something like that. Or, worst case, this thing went to production.

A
We
didn't
see
it
at
all,
in
which
case
we'll
have
some.
So
I
think
it's
a
big
case
by
case,
but
yeah
we,
the
main
reason
for
it
being
an
s2,
is
simply
so
that
it
definitely
blocks
deployments
and
if
we
do
need
to
pull
people
in.
So
if
we
need
to
escalate
this
up
in
any
way,
it's
pretty
easy
to
get
people
pulled
into
an
s2
incident,
so
if
it
doesn't
make
sense
to
take
a
load
of
corrective
actions
afterwards,
that's
totally
fine.
B
I don't know the details about this incident, but my take on this is that if it's a development problem, so it's an error and not a trivial one, maybe it's worth even having a lightweight asynchronous retrospective, just to make sure that the corrective actions feed back into the development phase.

B
So if there's something that the developers were not aware of, infrastructure-wise or whatever, or just thinking about coding at scale, then it's better that they go through the asynchronous retrospective and maybe learn something out of it. Because otherwise it's just, "the same thing I've done is merged, I don't care what happens next."

A
Yeah, that's actually a really great point. And again, I think it's a useful thing to have an incident in which all parties come into the same issue, but it also gives it some weight, you know, because it was an incident. I think people generally know that means something serious happened. So yeah, actually that's a really great point.

E
In my opinion, aside from that, I do think we need to have some guidelines, because the only guideline I see in our documentation is the fact that we need to have a retro done. I don't see anything that says: if this is part of the development process, it's okay to, you know, do or not do an RC, for example. I think we need to improve our documentation on that front. And the last comment I have is: why an S2, if S3s also block deployments and S2s require escalation?

A
So I don't actually have a full answer to that. We had a period where some things were S2 and some things were S3. I believe it is actually because some things are definitely S2s, so I think the general thinking is that it will always be an S2 or lower, so it's better to just start high and downgrade our response if needed.

A
Oh, one thing that will be done, actually: Brent's talking about updating the grid that we have in the handbook, I can't remember what it's called, the thing where you go and work out which severity something actually is, because at the moment that only talks about external customers. But I think his thinking of an S2 is that it's catastrophic for us internally, because we've just blocked all of our pipelines.

A
So I think he's going to try and get some extra wording in there about what the internal impact of an issue is, and then, my assumption is, based on what he's planning to write under an S2, it would make a lot more sense that we're using it for this stuff.

A
Yeah, and I think that's what we're feeling; the reluctance we're feeling is like, "I really don't want to cause an S2." But yeah, we should think about our unblocking of everything, and it is really, really effective when we use it. So, like, Myra, it was so awesome that you had that incident on Friday, because on Saturday, when I was online with Uric, I was like, oh cool.

A
I can just kick off staging again, which meant this morning we just had regular staging and canary builds going through, which was really helpful. So it is really effective, it just feels dramatic, I think, is the way I feel about it.