From YouTube: 2021-08-02 Delivery team weekly EMEA/AMER
A: Okay, so we're at time, so let's begin. MTTP (mean time to production) is looking a little bit sad, but that's actually to be expected from an MTTP point of view: when we went into the PCL last week we started tracking again, but we also had quite a lot of delays, so that's completely expected. Hopefully things will be looking a bit healthier through August, so we can keep tracking that. Cool. Announcements are all read-only. I will do a follow-up: I'll add it in, see what's already written, and see what we need to add for the 360 feedback, particularly around writing feedback. Maybe I'll chat to you all one-to-one about that. So, on to discussion, and Skarbek, you have the first point.
C: So, pre-prod. There is a configuration change to our charts that I want to push, and pre-prod is failing because of a conflicting change between our CNG image and our Helm chart: our image has been updated but is targeting the 14.2 release, while our Helm chart is based off master. Because there's a discrepancy between requirements, we lose the ability to say, hey, I need this in pre-prod but I need to upgrade our application image at the same time.
A: Does it? So k8s will continue to run, but I suppose that's probably a good thing.
A: After that, yeah, I'm not sure. I think if you feel like that's going to be okay, in terms of it not detracting from any release stuff coming up, any patches or things, I'm not sure. You don't use pre now until the monthly release prep, is that right? Yeah.
C: It's only for RCs, and that's actually my concern: are we going to have a versioning issue? I suppose not, because I'll deploy an auto-deploy version to it and then we're going to deploy 14.2. But is the 14.2 auto-deploy branch, which would be 14.2 dot some date stamp, going to be higher than the 14.2 release candidate?
B: I mean, pre being down normally isn't a big issue, right? Nobody is using it besides when we start to work on a monthly release, so killing it right now, if we are sure we can just bring it back up, shouldn't be an issue. The question is more whether you can bring it up with a targeted version if you need to, but that should be possible, right?
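On the "bring it up with a targeted version" point, here is a minimal sketch of what pinning could look like, assuming the chart exposes a value for overriding the application image version; the value name, chart reference, and version numbers are assumptions for illustration, not verified against the actual charts.

```python
# Hypothetical sketch only: reinstall pre pinned to a specific app version.
# Assumes the Helm chart accepts a global.gitlabVersion override; check the
# chart's documentation before relying on that value name.
import subprocess

def reinstall_pre(chart_ref: str, chart_version: str, app_version: str) -> None:
    subprocess.run(
        [
            "helm", "upgrade", "--install", "gitlab", chart_ref,
            "--version", chart_version,                      # pin the chart itself
            "--set", f"global.gitlabVersion={app_version}",  # pin the application image (assumed value name)
            "--namespace", "pre",
        ],
        check=True,
    )

reinstall_pre("gitlab/gitlab", "5.2.0", "14.2.0-rc42")  # illustrative versions only
```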
C: So what if I do this: I take an auto-deploy package, send it out the door, and then modify the current 14.2 release issue to say, hey, keep an eye out for pre-prod, because we did something earlier this month.
B: So if you are able to reinstall pre with a previous RC version, that should be fixed, right?
C: But again, I can't go backwards. Once I've deployed the auto-deploy package and then proceed forward with the change for the Helm chart, if I attempt to roll back, pre goes down because of database and other changes, because the CNG image is not compatible with the Helm chart. Oh yeah, so if I go backwards, will that prevent future deploys? Well, is it going to prevent a deploy?
C: When the release candidate is ready for 14.2, we need to somehow force that out the door into pre-prod, which it should be doing.
A: We could always create an RC, like a test RC, a little bit earlier, right? Not super early like now, but, you know, a day or two before the point where you would actually normally create your test RC. We could put one together and at least then we'd know whether it's deploying or not, right? Yeah.
B: But also from an ordering point of view, numbers always sort lower, they come first, and then we have letters. So if you have the RC suffix it should be fine, I think, even from the ordering. So I think we only run into that issue if our tooling expects that the current pre is running something ending with rc and tries to compare against that, but I wouldn't envision that this is the case, especially…
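As a rough illustration of the ordering question above (not how the actual release tooling compares versions), here is a minimal Python sketch using PEP 440-style comparison; the version strings are invented for the example.

```python
# Illustrative only: PEP 440 comparison via the `packaging` library, not the
# deployer's real logic. Version strings below are made up for the example.
from packaging.version import Version  # pip install packaging

auto_deploy = Version("14.2.202108020320")  # hypothetical date-stamped auto-deploy build
release_candidate = Version("14.2.0rc42")   # hypothetical 14.2 release candidate
final_release = Version("14.2.0")

print(auto_deploy > release_candidate)    # True: the date-stamped build sorts above the RC
print(auto_deploy > final_release)        # True: and above the final 14.2.0 as well
print(release_candidate < final_release)  # True: the RC sorts below the final release
```

Under these rules the date-stamped auto-deploy build would indeed sort above the 14.2 RC, so whether the deployer's own comparison behaves the same way is the thing to verify.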
A: The only thing around that, as I said, is that the stable branch would get created, so you need to think about whether that's okay. The other thing is that we probably need to give JiHu a heads-up, and I can do that, just to tell them that we've done this, because they pick up our branches; it's just for visibility.
C: Gotcha. I think this would be a good exercise for updating our documentation.
A: Good luck with the change. So, the PCLs are now complete, so we should be back to normal on pushing our changes and also on deployments. The daily stand-up is taking place, and there are lots of people discussing ideas off the back of the PCLs, so we've got a really good opportunity: if you have ideas for how we could make better use of staging and/or deployment, or how we can surface things better, then please do raise them.
A: I will leave the database changes retro open for a couple of days. Some of the items on there went to infradev and are being worked on; they were things that came directly out of various incidents. The other bit that's interesting is that Andrew is working on trying to put a constant stream of traffic onto staging, to see whether that might give us more realistic, or rather more usable, alerts. There are quite a lot of staging alerts at the moment.
A
Are
we
don't
have
enough
traffic
so
he's
wondering
if
we're
actually
able
to
eliminate
those
with
a
bit
more
traffic
and
then
therefore
the
alerts
become
like,
hopefully,
more
usable
kind
of
in
line
with
that
conversation,
there
was
a
discussion
about
whether
actually
we
could
use
our
production
health
checks
and
apply
the
same
and
have
staging
health
checks,
and
we
could
just
have
it
as
a
warning
on
deployment.
But
does
that
sound
like
like?
Would
it
be
useful
to
be
getting
more
more
data
out
of
staging
beyond
just
tests
passing
and
failing.
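A minimal sketch of the warning-only staging health check idea, assuming it runs as a post-deploy step; the URLs and endpoint paths are placeholders, not the real staging configuration.

```python
# Sketch of a warn-only post-deploy health check for staging. URLs are
# placeholders; the readiness/liveness paths are assumptions about what a
# staging instance exposes.
import sys
import requests  # pip install requests

STAGING_CHECKS = {
    "readiness": "https://staging.example.com/-/readiness",
    "liveness": "https://staging.example.com/-/liveness",
}

def check_staging_health() -> bool:
    healthy = True
    for name, url in STAGING_CHECKS.items():
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        if not ok:
            healthy = False
            print(f"WARNING: staging health check '{name}' failed ({url})", file=sys.stderr)
    return healthy

if __name__ == "__main__":
    check_staging_health()
    sys.exit(0)  # warn-only: surface the signal without blocking the deployment
```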
B: Well, to me that sounds a little bit like we would have alert fatigue very soon and ignore alerts coming from staging, because mostly they wouldn't affect us and they're very random. On the other hand, today's example shows us that we had staging failing for a while and didn't notice, and nobody really cared until…
A
That
would
be
what
we'd
have
to
absolutely
solve.
So
I
think
if
we,
if
we
start
like,
if
we
started
doing
something
like
this,
then
yeah
absolutely
we'd
have
to
like
change
stuff
up.
So
it
wasn't
like
that.
Yeah.
A: Yeah, absolutely, exactly. Today we get so many alerts, and at the moment a lot of them are the low-traffic ones, so they get ignored, and then they build up and still get ignored. So yeah, it would be about rolling back that stuff and actually getting to the point where we have alerts we care about, they fire, and we tie them to the deployments.
B: The thing is, when we run a deployment and it works, then I feel like, okay, staging should be fine, because in most cases it is. It's just that if something else breaks, like infrastructure or something, then we don't notice, because nobody is watching it, right?
A: Okay, is there anything else you'd like to see happen off the back of the PCLs, other things? A lot of what was being talked about last week was the fact that we had a number of database changes failing, and we were finding them quite late on in deployments, in fact even after production deployments. Is there anything else that, in an ideal world, you would like to have in place to help us here?
B: I feel like most things are covered in the discussions around the PCL already, like coming up with a different kind of staging which is more production-like, and things like that.
C: But if we're under a hard PCL, could we identify a certain set of changes, or a certain category of modifications to our code base, that might be more risky? When we release the PCL we've already got, you know, 500-plus changes backlogged and ready to go, but now we're introducing higher-risk stuff that would normally be more thoughtfully monitored on any normal day; because of the PCL it was just merged, and now it's just up to us to watch for things. I wonder if we could somehow classify certain changes.
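A rough sketch of what classifying changes by risk could look like, assuming classification is based on the files a merge request touches; the path patterns and labels are illustrative, not an agreed policy.

```python
# Illustrative change classifier: flag merge requests whose changed paths touch
# areas that would normally get closer monitoring. Patterns are assumptions.
import fnmatch

HIGH_RISK_PATTERNS = [
    "db/migrate/*",                       # schema migrations
    "db/post_migrate/*",                  # post-deployment migrations
    "lib/gitlab/background_migration/*",  # background migrations
    "config/*",                           # application configuration
]

def classify_change(changed_paths: list[str]) -> str:
    for path in changed_paths:
        if any(fnmatch.fnmatch(path, pattern) for pattern in HIGH_RISK_PATTERNS):
            return "high-risk: monitor closely when the PCL lifts"
    return "normal: safe to ride the post-PCL backlog"

print(classify_change(["db/migrate/20210802_add_index.rb"]))  # high-risk
print(classify_change(["app/views/shared/_note.html.haml"]))  # normal
```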
A: Yeah, I think it's a super good point. One thing that was interesting was that the PCL almost felt a little bit detached. I mean, I know teams were held up, but it was very much an inside-infra, hands-on thing, and I wonder whether there's actually more we could do to bring developers into that and help with identifying issues, or finding ways to test things additionally, or something like that. So I think that's a super good point.
C: Realistically, half the company, or half of engineering, went into a PCL, primarily infrastructure, but I don't know what development teams do during a PCL. I'm sure some of them are probably helping towards some of the stuff that we've been identifying as problematic, but for the most part they're just going to keep chugging along with their own priorities as necessary. So I think maybe we need to shift some of that focus: when we're under a PCL we potentially need to steal some of the development engineering time.
A: Some people definitely contributed, and some were definitely delayed with feature flags and stuff, but there are also many other people who I'm sure it doesn't really affect, and they can just push code and keep going, right? We don't want to pause absolutely everything, but I agree, having a bit more input could help us. So let me see what I can do with that one.
B: I think finishing up the security release. I would like to work on it a little bit with Robert, because I still have some questions; I think that would help me, at least.
A: Cool, okay! Well, if you think of things, then please let me know. Great. So, Q3: I've added the OKR, which is what we discussed in the issue, and I put it as it was into Ally. You all have access to Ally, but you may need to sign up; it's in Okta and you can sign up through there. All the OKR issues will be in Ally from now on, not just for infrastructure but for the whole company, so you can cascade those up and down and see what's linked together.
A: So, our goal. We have quite a lot of aspects to our OKR, and this is the single OKR, but the main goal behind it is reliable rollbacks. We've got Kubernetes work tied in there, we can pick up some stuff around coordinated deployments as well as tooling, making pipelines faster, and all of that, but I'm curious to hear your thoughts. What do you think is the biggest issue if we were to ask how we can make rollbacks more reliable?
A: Yeah, yeah, okay, so scheduling is quite key. We talked in a previous month about when we're trying to do this, so that we could be ready to redeploy straight afterwards, but we didn't manage to fit that in. Okay.
A: Okay, cool. Well, I'll say I've got a bit of work to do today and tomorrow just to sort out all the epics. We've got our work split across quite a few epics at the moment, which is fine, but what it means is that I'll pull the issues together, add them onto the board, and we can pick from there. Certainly everything we've been working on through Q2 contributes hugely to this, so I'm not expecting a great direction shift at all. Awesome. Robert, over to you.
D: Yeah, so I just wanted to quickly discuss this security MR. I won't go into details because we're still recording, but there's a need to invalidate the markdown cache, which has previously had big impacts on performance. As you know, the cache is updated in the database, so I don't know if we need to coordinate this change as a lone commit. It sounds like no matter when we do it, it's not going to matter much, because it's just the first time an issue gets hit.
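For context, a minimal sketch of the lazy invalidation pattern being described, where cached HTML is re-rendered on first read after a cache version bump; the field and function names are illustrative, not the actual implementation.

```python
# Sketch of version-keyed markdown caching: bumping CACHE_VERSION invalidates
# every stored rendering, and the first read afterwards pays the re-render plus
# a database write, which is where the performance impact comes from.
CACHE_VERSION = 2  # bump to invalidate all previously cached HTML

def render_markdown(text: str) -> str:
    # Stand-in for the real (expensive) markdown renderer.
    return f"<p>{text}</p>"

def cached_html(record: dict) -> str:
    # `record` stands in for a database row holding markdown plus cached HTML.
    if record.get("cached_markdown_version") != CACHE_VERSION:
        record["description_html"] = render_markdown(record["description"])
        record["cached_markdown_version"] = CACHE_VERSION  # write-back on first hit
    return record["description_html"]

issue = {"description": "Hello **world**", "cached_markdown_version": 1, "description_html": "<stale>"}
print(cached_html(issue))
```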
B: Production ping-pong first, which maybe should be implemented before we merge this at all, right? Because, as you will see, this is happening even more in production, and I didn't see it finished this morning, so I'm not sure if we will get this into the current security release, at least. Okay, that's a good point, but I mean, after that we would still have the issue of the unforeseen performance impact and of watching this over the day, yeah. But there's no way to just do it, right?
B: To go through this, even if we improve it, which we probably should do anyway, I think, after that we still need to do this, and then we have the impact. So maybe, yeah, I mean, preventing somebody from just assigning the bot so that it lands in an auto-deploy is one thing, and the other thing is, I think, going with a change request or something to roll it out. I think that would be the right way to do it, and to assign people to it.
A: Okay, let's see what we can do. Let's maybe discuss on the issue how we can deal with that, but let's get the engineering on-call involved as well, because, as I say, they're likely to see things if it happens.