From YouTube: 2022-01-17 Delivery team weekly EMEA/AMER
A
So, Henry, should we just go straight to release management stuff, since that's kind of been the highlight of the day so far?
B
Yeah, so as was mentioned, it was quite painful over the last two weeks, mostly because dev, the dev instance, probably got slower or overloaded with more stuff over time, and we just hit the tipping point now in January. Combined with other issues that we have, like the skipped bridge jobs, job failures, and other smaller things, that led to a lot of manual work. It also meant that, for instance, the security release failed building packages for a long time because the jobs timed out, and things like that.
B
So overall, there's a nice comment from Robert with a summary of the issues that we are facing now, especially interesting for new incoming release managers, maybe. The efforts that we have going right now are mostly around trying to speed up dev short term and then trying to move it over to GCP. But that is quite a bit more effort, and we probably shouldn't do it this week, just before the monthly release; that could be a little bit too much.
B
The best action that we can take this week is probably just changing the disk setup on the instance, which shouldn't require too long a downtime and shouldn't be too complicated. Maybe that already helps enough.
B
Yeah, so I managed to create new disks and I set them up, and now I'm writing up the procedure to follow in a CR to actually do the switch. We need to copy over the data, then stop the instance, well, not the instance, but the services on the instance, then unmount, do a last rsync over to the new disks, and then mount them at the right place. This needs to be done with a little bit of coordination.
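For illustration, here is a minimal sketch of that switch order, assuming the new disk is already formatted and mounted at a staging path; all device names, mount points, and service names below are hypothetical placeholders, not the actual CR steps.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the disk-switch procedure described above.
Paths, devices, and service names are hypothetical placeholders."""
import subprocess

OLD_MOUNT = "/mnt/data"          # hypothetical: current data mount
NEW_DEVICE = "/dev/sdb1"         # hypothetical: the new disk
STAGING_MOUNT = "/mnt/new-data"  # hypothetical: new disk already mounted here
SERVICES = ["app", "workers"]    # hypothetical: services on the instance

def run(*cmd):
    """Run a command, echoing it, and fail loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Bulk copy while the services are still running.
run("rsync", "-aHAX", "--delete", f"{OLD_MOUNT}/", f"{STAGING_MOUNT}/")

# 2. Stop the services on the instance (not the instance itself).
for svc in SERVICES:
    run("systemctl", "stop", svc)

# 3. Last rsync pass to pick up whatever changed during the bulk copy.
run("rsync", "-aHAX", "--delete", f"{OLD_MOUNT}/", f"{STAGING_MOUNT}/")

# 4. Unmount old and new, then mount the new disk at the right place.
run("umount", STAGING_MOUNT)
run("umount", OLD_MOUNT)
run("mount", NEW_DEVICE, OLD_MOUNT)

# 5. Bring the services back up on the new disks.
for svc in SERVICES:
    run("systemctl", "start", svc)
```

The two-pass rsync keeps the service downtime to roughly the length of the last incremental pass rather than the full copy.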
B
What we can see in the Grafana link here is that our deployment SLO, of course, suffered a little bit, but you don't see it too much. Interestingly, deploy timings went up and it all dipped a little bit, but we didn't usually take much longer than eight hours to finish a deployment.
B
Apparently, we still managed to get the pipelines through eventually, even though we needed to retry most of the jobs or most of the pipelines, but the effort for that was just too much, right, because you need to babysit each of the pipelines and wait and see if something fails. So it was a high price to pay.
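As an aside, this kind of babysitting is exactly what a small watcher script could absorb. A minimal sketch, assuming GitLab's pipeline retry API; the instance URL, project ID, and retry budget are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch of automating pipeline babysitting via the GitLab API.
Instance URL, project ID, and limits are hypothetical placeholders."""
import os
import time

import requests

GITLAB = "https://gitlab.example.com/api/v4"  # hypothetical instance
PROJECT_ID = 123                              # hypothetical project
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def babysit(pipeline_id: int, poll_seconds: int = 60, max_retries: int = 3) -> bool:
    """Poll a pipeline and retry its failed jobs instead of watching by hand."""
    retries = 0
    while True:
        resp = requests.get(
            f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{pipeline_id}",
            headers=HEADERS,
        )
        resp.raise_for_status()
        status = resp.json()["status"]
        if status == "success":
            return True
        if status == "failed":
            if retries >= max_retries:
                return False  # give up and page a human after all
            # POST .../pipelines/:id/retry re-runs only the failed jobs.
            requests.post(
                f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{pipeline_id}/retry",
                headers=HEADERS,
            ).raise_for_status()
            retries += 1
        time.sleep(poll_seconds)
```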
A
Cool, so you mentioned a few things, actually. Let's go. So yeah, the fix, hopefully, for the skipped bridge jobs just merged, like, last minute, so fingers crossed. I think the unknown at this moment is whether this bug is actually what we're seeing. So hopefully it is, but Robert, do you want to verbalize?
C
Yeah, so because we only run official patch releases on ops, we're not going to see the benefit of this until we either do this 14.7 release, assuming it's in the 14.7 release, or do a kind of 14.6 patch release just for us, which I'm not opposed to, especially to fix this bug, because it's so annoying, but that would kind of take away from this run-up to the 22nd we're doing right now. So there are some trade-offs.
B
Yeah, anyway, I wanted to do maybe another patch release just to test if we get better performance with dev, because we also have another issue with publishing, right: if we create a release and tag, like we saw with the security release, then building the packages, because a lot of packages were built at the same time, really overloaded dev.
A
Do you think you'll be able to patch tomorrow, Henry, or would that be too much for the release managers, and should we ask maybe Ruben, maybe Mayra, to help if needed?
B
I think it should work, maybe not in the first part of my day, because I need to look into how far we got with the disks. We should do the patching after the disk switch, I think, to have a test case, but I think it should work; we can reach out for help if needed.
A
And so I'd say, to both of you really: this week is all about this release, right, after last week's pain. So if we have to stop all of our projects and have the whole team kind of help out to get 14.7 out, let's do that. So see how you get on, but yeah, after the disks, that's the plan: plan a patch. Well, to be honest, even if you don't get the disks done, like if the disks were to get delayed.
A
But let's aim to patch tomorrow and hope we have the disks in place, because it would be really good to know if this bridge jobs thing is fixed, because if it's not, we need to ask them to prioritize another bug, and I worry about how long that might take.
A
Cool, okay, great stuff. Is there anything else anyone wants to mention about this topic, or any other topics?
A
Okay, so kind of just, I guess, some general views. Over the last few weeks, we've had lots of these kinds of little pain points, and they kind of combined together. What I want us to be really thinking of, as we go through this year, is: how can we catch these things and take them away? So, like, any retries we have, any of these little brittle things where we see oddities; it feels like we have a lot of those in place at the moment.
A
So that's the sort of stuff I'd love us to work on as we start moving. If we're going to do continuous deployment, like, properly, it's a pipeline that just runs itself, goes to production, and we have metrics and things. So we need to really find a way where we're actually able to identify these things and, I don't know, hand them off to people.
A
So it's not just release managers kind of firefighting all the time, but we start thinking about ways in which we could track and isolate these things, because it feels like release management is getting harder and harder over the months, and that's definitely not the way we want to be going as we bring in new people. It's quite tricky to onboard them, because there's so much like, oh, here's a random error that has happened, and what you now need to do is debug all of this stuff, right, versus just staying within our tooling.
A
Awesome, great. Are there any other topics people want to bring up?