From YouTube: 2022-08-15 Delivery team weekly EMEA/AMER
A: So, back to work on my front. Welcome back, McKelly! Welcome back! Oh, I was not here: hey, welcome, Myra. Cool, so let's jump in. There are a couple of announcements, which I'll let you read. What I wanted to dive into is something Michael and I have been talking through over the last few weeks: trying to put together a better way to articulate the delivery team split. We originally, probably some months ago now, had the scaling-the-delivery-team issue, where we talked a little bit about the things the teams may end up owning, and as we've been going through Q3 we've been talking about this more as well. What we want to do is continue to iterate on that and get a fuller description in place in the handbook, to help ourselves but also to help people outside the teams.
A: So what I have done is put together this visual that maybe explains things in a slightly different way. I'll talk it through briefly, and then hopefully there are questions and comments and things, so we can continue to improve this. Let me just share my screen.
A: So this is very much a target state. It in no way reflects how things are today, but it probably gives some sort of an overview. Auto-deploy kind of is all of the things right now, and in the future hopefully we'll have a bit more separation going on. So, starting at the bottom.
A: We have Reliability. Reliability own a certain degree of our environments, our servers and databases and things like that. I'm sure that will change over time, but it's just to make sure we reflect their ownership and involvement in this process as well. And then for Systems, we're thinking about this as sort of the systems that we can grow around and that will support our scaling.
A: So I've been thinking about this as services that in no way exist right now, but which perhaps, in the future, give us a way, so that, you know, there is a way to deploy to staging and there is a way to check environment health. It's perhaps a little less coupled to auto-deploy than it is right now, but the Systems team has the ability to scale and grow the services that we have in place to handle deployment of changes onto the environments we have.
A: Anyway, I think those are hopefully interesting future directions for us. And then, on the orchestration side at the top, I've written this out as a kind of generic pipeline, because actually it sort of doesn't matter; hopefully it gives us a way of scaling the deployments, growing into auto-deploy or whatever our next iteration of deployments is. Orchestration spend their time focusing on the individual pieces that we need in order to get code changes through a deployment pipeline, so we'll be very much worried about:
A: How do we track these things accurately? Do we have tests running in the right place? Are migrations running? A couple of points: this wasn't intended as a guideline, but at the moment this very loosely matches up with the downstream trigger jobs that we do have on auto-deploys.
A: But at those points we sort of say: okay, we have this change, it's gone through the tracking that we needed, and we're at the point of deploying it. Then we hand over to, say, a system and say: you know, deploy this image or this package onto the servers as needed. And then we get back a yes or no, or some sort of status, on how that's gone, before we continue.
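A minimal sketch of the handoff described above, assuming a hypothetical HTTP deployment service owned by Systems; the URL, endpoints, and field names are placeholders rather than an existing API:

```python
"""Illustrative only: this deployment service does not exist yet; every
name below is a placeholder for the target state described above."""
import time

import requests  # assumed HTTP client for the hypothetical service

DEPLOY_SERVICE = "https://deploy-service.internal.example"  # placeholder URL


def request_deploy(package: str, environment: str) -> str:
    """Orchestration hands a built package over to the Systems-owned service
    and gets back a deployment id."""
    resp = requests.post(
        f"{DEPLOY_SERVICE}/deployments",
        json={"package": package, "environment": environment},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_status(deployment_id: str, poll_seconds: int = 60) -> str:
    """Poll the service until it reports success or failure, then return it."""
    while True:
        resp = requests.get(
            f"{DEPLOY_SERVICE}/deployments/{deployment_id}", timeout=30
        )
        resp.raise_for_status()
        status = resp.json()["status"]
        if status in ("success", "failed"):
            return status
        time.sleep(poll_seconds)


# A pipeline step would hand over, wait for the yes/no, then continue:
# deployment_id = request_deploy("gitlab-ee-15.3.202208150620", "staging")
# assert wait_for_status(deployment_id) == "success"
```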
A: Are there any questions or thoughts around us sort of thinking about teams in this way?
A: Yeah, absolutely. What I'll do then is pop this up somewhere. Well, you'll have access to this, right? This is in the agenda, so you can all see it. But I guess, to put that back to you: do you want to have a kind of discussion issue about how this looks, or how do you want to take this forwards?
B: Let's start with the discussion. I think it's going to be more like: let's clarify a few things, maybe readjust the diagram. There's at least one thing that's on my mind, but again, I want to think about this first.
A: Cool, okay. Well, I'll put together a discussion issue. One thing I'll just quickly show: on the second slide, I have pulled out the metrics, or we've pulled out the metrics, as a slightly easier component to think through. They will have shared links between both teams, as an example of how this could work. So we know we've got lots of metrics we can track in deployment pipelines.
A: We know that at the moment we don't really have a centralized place or way to do that. So, thinking about that with the same sort of split, we end up with Systems being able to fully define, set up and manage what we want a metrics service to look like and how it behaves, and that becomes the central point that all metrics from Delivery live in. And then, on the orchestration side, this gives us a place where all of our jobs in all of our pipelines can just be pushing in data, which ends up giving us a really complete set of metrics, and both teams can use the metrics service to pull out the dashboards that we need. So we start to have this thing where orchestration is contributing into something that Systems has created, and both teams are able to pull information out from the other side.
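A rough sketch of the shared metrics service idea; the service, its endpoints, and the metric names below are hypothetical:

```python
"""Illustrative only: the centralized metrics service discussed above is a
target state; its API and the metric names here are made up."""
import requests  # assumed HTTP client

METRICS_SERVICE = "https://delivery-metrics.internal.example"  # placeholder URL


def push_metric(name: str, value: float, labels: dict) -> None:
    """Any job in any pipeline pushes a data point into the shared service."""
    resp = requests.post(
        f"{METRICS_SERVICE}/metrics",
        json={"name": name, "value": value, "labels": labels},
        timeout=10,
    )
    resp.raise_for_status()


def query_metric(name: str, **selectors: str) -> list:
    """Either team pulls the same data back out to build its dashboards."""
    resp = requests.get(
        f"{METRICS_SERVICE}/metrics/{name}", params=selectors, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["points"]


# e.g. an Orchestration job records how long a deploy took, and a Systems
# dashboard later reads the same series back:
# push_metric("deploy_duration_seconds", 1840, {"environment": "gprd"})
# points = query_metric("deploy_duration_seconds", environment="gprd")
```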
A: All right, I will get an issue set up today, jot down roughly some of the things I mentioned here, and see if we can get that into a good discussion for you all to comment on.
A: If you don't want to comment on the issue, or you think you've got a stupid question (even though no questions are stupid), feel free to also ping on Slack or follow up with your manager in a one-to-one. You know we're happy to have follow-up conversations as needed, so that we're all in agreement on how this might look.
B: Okay, so last week went okay. At the beginning we did have a couple of issues with, you know, Omnibus: there was a change to the way CI was working, so that impacted deploys. But we were right in alignment with deploys and tagged packages and promotion, so we were going okay. There was a QA failure; I don't remember that, so it must have happened during Reuben's time.
B: We had an incident that Reuben was working when I started, and then another one fired up as soon as that one was pretty much finished, and we just couldn't get anything out the door. So yesterday was a little tough, but I don't think there's anything we could have done better; this stuff just kind of piled up, which was unfortunate, and I don't really know what else to say outside of that fact. There are no corrective actions I could think of from last week that would have helped us in any way. I did create an improvement issue related to one particular issue: there's a process that all of us are familiar with where, if we need to rush a particular fix into place, we use that same process to pick something that needs to get into auto-deploy quickly.
B: I would like to try to improve that workflow, just because last week we did the pick into auto-deploy on a particular merge request, but then, due to the scheduling of when auto-deploy creates the next auto-deploy branch, that pick never made it.
B: So we would never have seen the fix for that particular problem until the next auto-deploy package, which would have delayed us from the standard three hours to well over six hours in that particular case, and I don't appreciate that. I'm trying to figure out if we could improve that in some way, shape or form.
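A toy illustration of the branch-cut timing just described; the cadence and pipeline duration are assumed numbers, not the actual auto-deploy schedule:

```python
"""Toy model only: assumed 3-hour branch-cut cadence and 3-hour pipeline,
to show why a pick that lands just after a cut waits roughly twice as long."""
from datetime import datetime, timedelta

BRANCH_CUT_EVERY = timedelta(hours=3)  # assumed auto-deploy branch cadence
PIPELINE_TIME = timedelta(hours=3)     # assumed time from branch cut to production


def time_to_production(merged_at: datetime, last_cut: datetime) -> timedelta:
    """How long a merged pick waits before it reaches production."""
    next_cut = last_cut + BRANCH_CUT_EVERY
    if merged_at > next_cut:  # missed that cut too, so wait for the following one
        next_cut += BRANCH_CUT_EVERY
    return (next_cut + PIPELINE_TIME) - merged_at


last_cut = datetime(2022, 8, 12, 9, 0)
# Merged just before the next cut: roughly the standard three hours.
print(time_to_production(datetime(2022, 8, 12, 11, 59), last_cut))  # ~3:01
# Merged just after the cut: roughly double that.
print(time_to_production(datetime(2022, 8, 12, 12, 1), last_cut))   # ~5:59
```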
B: Deployment frequency: we did do slightly better; we ramped up to 10 deploys one day. But, you know, Friday...
B: That looks kind of strange to me, because I feel like we created a few extras because of the incidents and us having to pick changes. So it makes sense that we had a little more than our standard cron schedule that creates prep branches, for example, but I don't think we deployed 10.
B: Lead time for changes: I think I expressed last week that I'm not entirely sure what to do with this chart, but, you know, three days, so we're higher than we were. But again, we didn't deploy anything on Friday, so I think us being high today completely makes sense. It is strange to me, though, that on Friday the lead time is lower than our baseline by a decent amount even though we didn't deploy anything on Friday, so I suspect the same thing is feeding into this chart. So, whatever investigation, Amy... in fact, if you want to fire up an issue, I think we should look at both of these charts and figure out where they're getting their data from.
B: Okay, so, Amy, I had one question for you: you dropped in the handover message today the blockers from last week. I'm curious: are you automatically gathering those in some way, shape or form, or are you doing all of this manually?
A: I think release managers should do the gathering, because then you're all much closer to the data we need and the report we need. With what we did previously we weren't miles away, I don't think, but this is sort of linked to the issue I opened a couple of weeks ago: we originally talked about what a good way to get the data is, and that tracking is the best version we've come up with so far.
A: There's a bit of a lag on those, though, so quite often it's only as McKayla and I gather up the data to fill in that epic that we notice: oh, hang on, didn't we have this problem? Or I saw in a handover message that we were blocked on a problem, or we come across something, and it's only at that point. So that's the problem.
A: That's what I think we really need to solve, because if we run any automated reports off the labels, which is pretty easy to do, we're not going to get a complete set, because we don't always label. So that's the piece I think we need to find a way of handling. Now, at the moment we annotate the package graph, so I do think there's probably some way of linking those together: what we annotate on there is exactly the same stuff that should get those labels.
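A hypothetical sketch of the label-based report mentioned above, using python-gitlab; the project path, label name, and date window are assumptions, and the point is that such a report only sees blockers that actually got labelled:

```python
"""Illustrative only: project path, label name, and token are placeholders.
A label-driven report misses anything that never got the label."""
import gitlab  # python-gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="glpat-...")  # redacted
project = gl.projects.get("gitlab-org/release/tasks")  # assumed project path

blockers = project.issues.list(
    labels=["release-blocker"],   # assumed label name
    created_after="2022-07-01",   # assumed reporting window
    get_all=True,
)

for issue in blockers:
    print(issue.created_at, issue.title, issue.web_url)
```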
A: Do you mean the... the one... so, the...
A: So there's maybe a slight extension of this one, but my current feeling, from what I've seen or the way I've gone through it, is that if we...
A: Great, thanks. So, yeah, this one I like. Sorry, thanks, Myra. So I think if we could automate this piece, we would have both: we'd have an automated way to get the data we need.
A: And just as an aside: you may not see an immediate link from that epic to life getting better, but what we're using that epic for is looking for patterns. One of the ones in the past that Myra had great success on was the way quarantine MRs work, and we got that because we were able to go to Quality and say: here are three or four examples over the last few months where we have very long recovery times linked to that stuff. And so they took an action. I think what I'm seeing in the last few weeks is that almost all of our blockers are coming either from software changes or from change requests, so those are the kind of big groups that we can then start to drill into, right, in the future.
B: So my goal today is to look at that exact table. Since you were curious as to whether or not we've got everything, I'll try to ensure that we do indeed have everything, and I've already got a few ideas about how to automate some of this in some way, shape or form, so I'll try to chime in on that issue as well.
B: One of the incidents I ran into technically was not a blocker, just due to the timing in which things occurred, but if it had been earlier in the day and there hadn't been other issues, it would have been a blocker. QA was failing in production on the canary stage because Sidekiq was overloaded, but this particular Sidekiq shard doesn't really have the same SLIs as other shards; this is the imports shard.
B: And there's nothing that states: hey, we're waiting for an import to happen; it's just queued further up right now, so you have to wait. So QA was failing because it expects projects to exist, but those projects come from a form of import, and those import jobs were queued. They were queued because someone (this is production)...
B: Someone kicked off a bunch of imports that take over five hours to accomplish, so we were just stuck in a bottleneck inside Sidekiq, and we were never going to see QA pass until literally the next day. It was Saturday, late afternoon, right before my dinner time, when QA was finally at a point where Sidekiq would process an import and, therefore, QA could be successful.
B: I don't like this, because it means that if this shard becomes overloaded again in the future, we may run into failing QA and there's nothing for us nor the quality engineers to do. We could bring this up and raise it with the SRE on call, but there are really only two things that I see as a solution. One is to artificially scale up the importers to tackle some of the extra workload.
B: We kind of want to shy away from that, because doing so means we're going to pull more data in, use more PgBouncer connections and potentially saturate those, and we're also overloading the import process in general; you know, we limit it for a reason. So we shy away from that. Or, two, we wait, which I don't want to do, because this is auto-deploy and it's negatively impacting MTTP. In this case, we do not have an issue for this, because I don't know what to do.
A: For now, I think, probably raise it in the release blocker issue, so it matches our process, but, you know, give an account of what happened and the impact we saw. Then, if you give me a ping on it, I'll follow up with Rachel and we'll see what Scalability can do, because even if there's no sort of immediate fix, we certainly want to be aware that this is happening and of the impact.