From YouTube: 2022-07-04 Delivery team weekly EMEA/AMER
A: Hello. So Kelly is on their way, but we are over time, so I'm going to get going. There's a bunch of announcements there that you can all read. Lots of time off, which is good; mostly just be aware of those things in case that means you're blocked on anything. So if you find yourself without somebody to do something you need done, just give Michelle or me a shout and we'll do what we can to help unblock things.
A: It's fine that people are off; we reduce our capacity, so we do expect some things will be moving a little bit slower. But even so, give us a shout if that blocks you. On to discussion items, then. Item a is really just that I'd love to get some feedback: for the last few months, some period of time, we've been posting team priorities in Slack and putting them on the updates issue.
A: Now, we will be actively trying to change this stuff as we move more into working in independent teams. But as for how things are going at the moment: our rough goal is to make it a little bit easier so that, in amongst all of the projects going on and the issues that are in progress, we have a way of highlighting the really top-priority stuff.
A: Okay, awesome. As I say, we would definitely be changing this up in the next month or two, because hopefully one of the goals of the team split is that we won't have so much stuff in flight. That's the intention behind it: having two smaller teams hopefully means that we'll have maybe one or two projects in place and everybody would be working on that project.
A: So hopefully this stuff just gets easier anyway, but if you have any thoughts before that, feel free to give Michaela a shout. And then item b is a bit of an update; I just wanted to double-check that everyone had seen this. I have been rolling out a change to alter the way we open incidents for deployment-related problems.
A: There are a couple of pieces that have changed as a result of this. One is that we will only need to raise an incident if we genuinely need the engineer on call, or if we need to use dev escalation.
A: There will be lots of other cases where we need to do something on a deployment: either it's broken and we want to track that it was broken, or, the more common one, the tests fail and we need to work with Quality. In both those cases it has been working quite well to raise an issue in our release issue tracker. Quality have been collaborating with us there; we've also got examples of Gitaly working with us there, and other teams as well.
A: So that's been working quite well, and it removes an awful lot of the admin that goes with an incident, so that's just easier for us as well. The other piece concerns when we do need an incident. A great example is if, say, tests fail: we raise an issue in the release issue tracker, Quality take a look, and they come back and say it's actually not the tests, it's the environment.
A: We would at that point raise an incident, but we wouldn't default it to be a severity 2 incident, and that allows our incident tracking metrics to become much more accurate than they currently are. So in that case we would raise an incident and use the handbook to choose an appropriate severity; it's probably going to be an S3 or an S4 for something that's affecting staging, since no users are affected there. I have also introduced the delivery impact label, which we can use as an optional label.
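
For anyone who wants to script that flow, a minimal sketch is below, using the GitLab REST API's issues endpoint with issue_type set to incident. The project path, token variable, and label strings (severity::3, Delivery impact) are illustrative assumptions, not the team's confirmed conventions; the handbook remains the source of truth for picking the severity.

```python
# Minimal sketch: opening a deployment-related incident via the GitLab REST API.
# Assumptions: the project path and label names below are illustrative only;
# consult the handbook to pick the actual severity rather than defaulting to S2.
import os

import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-com%2Fgl-infra%2Fdelivery"  # hypothetical project, URL-encoded

response = requests.post(
    f"{GITLAB_API}/projects/{PROJECT}/issues",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    data={
        "title": "Staging environment failure found during deployment",
        "issue_type": "incident",  # creates an incident, not a plain issue
        # Severity chosen per the handbook (S3/S4 for staging-only impact);
        # the delivery impact label is optional.
        "labels": "severity::3,Delivery impact",
    },
)
response.raise_for_status()
print(response.json()["web_url"])
```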
A: If you have any troubles, please give us a shout and we can figure it out. All of these things can be adjusted, so do shout if you're raising an incident and you're not sure if you've got the right labels, or if you have any troubles at all like that.
A: Okay, great stuff. I'm sure there will be lots and lots of edge cases on that, and there are quite possibly some pieces I haven't managed to update. I did update quite a lot of the handbook and the docs, but that information is fairly spread out, so if you do see anything I've missed, please either open an MR or give me a shout.
A: I don't think we have any release managers on the call, so we can skip item three. I don't know; I can't really tell.
B: Starting with auto-deploy packaging, let me minimize this stuff. We had a rough start to the week last week, I think because of incidents, but we got through the middle of the week and then everything was fine until we had the PCL.
A: No worries. Do you want to stop sharing your screen and see if that maybe helps? Yeah, it's possibly just out of it. Okay, cool. Okay! Well, I put together deployment blockers for last week, and that is looking quite healthy.
A: One thing which would be great is for us to find a way to pull all of these things, these metrics, together. I think that's something we can look at in the next few months. At the moment it's very manual, gathering all these bits of data together and analyzing them, so having a way to set that up so it's a bit more self-serve would be a good addition.
A: Awesome. So anyway, based on those things, in conclusion: we're not going to adjust direction. We only have a couple of things on our top-priorities list, and they are responding to container registry requests and post-production work, so those will continue to be our focus.
A: So, does anyone have anything else they want to bring up in this recording?