From YouTube: 2020-09-28 Delivery team weekly
A
So hello, everyone, happy Monday! Welcome to another delivery weekly. Hopefully you all know this already, but just double-checking: there's another Friends and Family day coming up, on the 9th of October.
A
So
for
once
alessio
you
get
to
experience
the
the
friends
and
family
day
as
well,
which
is
great
I'll,
make
a
note
in
the
agenda
so
that
we
should
remember
next
week.
We
should
do
the
whole
pausing
of
branches
stuff
again
that
we
did
last
time
to
stop
just
churning
stuff
out.
So
I'll
make
a
note
to
remember
to
remind
you
next
week,
jav.
B
Oh, I just also wanted to note there's a US holiday on Monday that I was planning to take, and maybe others are as well, so that'll be nice, I guess.
A
That's a great point, Robert. Is that a bank holiday, a public holiday, for you as well?
A
Cool, okay, great, we'll cover you on release management then. It should be a quiet day anyway, so that's no problem, I can cover that stuff. Awesome. So we're heading into October this week, so I wanted to do a bit of a catch-up. October is the final month in Q3.
A
So
first
off
we've
got
loads
in
progress,
but
we're
making
great
progress
so
well
done
on
that
stuff.
For
the
next
month.
That's
really
focused
on
like
finishing
wrapping
things
up
like
I
think.
A
lot
of
stuff
is
very
close
to
being
there,
so
she
got
in
our
key
results
and
also
got
quite
a
lot
of
epics
in
progress.
So
all
stateless
services
in
kubernetes.
This
is
probably
as
expected,
the
key
result
which
we've
made
the
least
progress
on,
but
it
was
always
the
stretchiest
one.
A
So is there anything that you particularly need from all of us as a team to help move this stuff forwards?
B
Yeah, I mean, I probably need the most help from Scarbeck, but I think all of the issues are laid out in the epic for Git HTTPS. I guess we could start with Git SSH as well, but after this meeting I wanted to loop up with Scarbeck anyway. Are you going to be available to talk?
A
It
was
always
known
to
be
a
stretch
so,
and
we
brought
in
some
extra
work,
not
everyone's,
probably
been
following,
but
we
wanted
some
extra
work
to
investigate
multi-clusters,
so
you
know
like
if
well
it
was
known
it
was
additional
stuff.
So
that's
completely
fine.
A
Next key result: removing manual steps from our deployment process. I have just made a bit of an edit on the wording inside this key result. The original intent, I think, is more reflective of what's written now, around removing manual steps around the deployment.
A
What
we
know
is
this
leads
really
neatly
into
rollbacks,
which
is
a
big.
A
very
important
thing.
Zalesia
has
already
done
a
lot
of
the
thinking
and
writing
up
on
this
stuff,
but
I
think,
given
where
we
are
in
the
quarter,
it
was
kind
of
mentioned
to
me
was
like
is
this:
was
this
always
part
of
the
key
result,
and
actually
no
that
probably
actually
wasn't
the
original
intent
at
the
beginning,
so
it
makes
a
lot
more
sense.
A
We know that we need to complete assisted deployments and have that fully validated before we can move into the rollbacks, so it's not really something we can parallelize. So I've actually just tightened up the wording inside that key result to more accurately represent what we originally said, and then, as we go into Q4, we can pick up some of the next bits.
A
I do have a couple of questions. I've called out the seven steps, I believe, for removing manual steps in deployments. Two questions, really. One is: I've got there, "Are there any new application errors following the canary baking time?", so that's the Sentry stuff. Is that something we are going to automate, or are we going to do away with the Sentry stuff?
E
So our bet, because it's a bet in the development of the automation, is that the number of errors is what matters. With a new error you may have broken something, but if that something is really not frequent, the big numbers are not affected. It's still something that we have to catch, but it can be shifted to the right-hand side of the deployment pipeline.
E
Otherwise, I'm afraid we go into some very weird kind of analysis, where we do something like machine learning over the name of the error, some inference statistics, and we say: we always see this kind of error, so it's not important. But yeah, it's not a boring solution, and I don't think it's even worth doing. We had this because it was a manual step, and we were doing a lot of manual steps.
E
But this is what we are already doing with our metrics, because the basic idea was that we want to check error rates, and not the specific errors in Sentry. Talking with Andrew, he told me we are already collecting error rates in Prometheus, and they are already segmented by Git workers, by stage, by service. So this is what we're actually looking at right now: those error rates.
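
(A minimal sketch of the kind of rate-based check being described: query Prometheus for the fraction of 5xx responses on one service and stage, and gate on a threshold. The Prometheus address, metric name, and labels are illustrative assumptions, not the team's actual setup.)

```python
# Sketch: gate on an error *rate* from Prometheus, not individual Sentry events.
import requests

PROM_URL = "http://prometheus.example.com"  # assumed address

def error_ratio(service: str, stage: str, window: str = "5m") -> float:
    """Fraction of requests returning 5xx for one service/stage pair."""
    query = (
        f'sum(rate(http_requests_total{{service="{service}",'
        f'stage="{stage}",code=~"5.."}}[{window}]))'
        f' / sum(rate(http_requests_total{{service="{service}",'
        f'stage="{stage}"}}[{window}]))'
    )
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

# e.g. gate a canary on its 5xx ratio staying under 0.1%
if error_ratio("web", "canary") > 0.001:
    raise SystemExit("canary error rate too high; do not promote")
```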
D
I guess those are response codes, more than specific exceptions, right? I don't know if there's a functional difference.
E
So for the web, I think it's 500s; for Sidekiq, it's failed jobs.
E
That's the bet we are doing here, basically, because we can't deploy more frequently while having to check every one of them manually, and I'm quite sure that we missed a lot of new errors every single release, even when we were actively looking for them.
E
So I don't think it's something that should be part of a deployment pipeline. It's more of a, let's say, tech debt analysis, or refining the product, because someone has to look at those errors and open issues, and someone has to fix them. But if you are breaking something very loudly, something that would create an outage or the like, that will be picked up.
B
I wonder, and I was thinking about this before, whether we should just draw an arbitrary line that says: Sentry errors that affect more than 100 users, or a thousand events. And then we have a concerted effort to clean up what we have, and perhaps we have an exception for statement timeouts.
B
Those
are
always
happening
because
if
you
do
look
at
sure
it's
very
noisy,
but
I'm
actually
looking
at
it
right
now,
and
I
see
some
things
that
are
kind
of
scary
and
to
be
honest,
like
with
the
new
baking
thing,
I've
been
paying
less
attention
to
sentry,
and
maybe
maybe
this
is
a
bad
thing.
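
(A sketch of that arbitrary line: the 100-user and thousand-event thresholds and the statement-timeout exemption come straight from the discussion; the issue dicts are assumed to be shaped like Sentry's issues API output, which includes "title", "count", and "userCount" fields.)

```python
# Sketch: flag Sentry issues that cross the proposed line, exempting
# statement timeouts.
USER_THRESHOLD = 100
EVENT_THRESHOLD = 1000

def needs_attention(issue: dict) -> bool:
    if "statement timeout" in issue["title"].lower():
        return False  # exempted: perennially noisy, tracked separately
    return (int(issue["count"]) > EVENT_THRESHOLD
            or int(issue["userCount"]) > USER_THRESHOLD)

# Example triage over a page of issues fetched elsewhere:
issues = [
    {"title": "PG::QueryCanceled: statement timeout", "count": "5000", "userCount": 900},
    {"title": "NoMethodError in MergeService", "count": "40", "userCount": 150},
]
print([i["title"] for i in issues if needs_attention(i)])
# -> ['NoMethodError in MergeService']
```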
A
I think it'd be a really good one to have as an issue and discuss. I'd definitely be interested to think about the sorts of errors that I saw in Sentry where I thought, oh, we should investigate that, even if the numbers were relatively low; some of them were quite catastrophic, but with relatively low numbers. It'd be great to think about how confident we'd be that we'd find something along those lines if we didn't have Sentry, or what other options we've got, basically.
A
I'd
really
like
to
get
away
from
having
a
manual
step
like,
I
think
it's
really
easy
to
overlook
stuff
for
for
loads
of
different
reasons,
but
I
think
it's
it'd
be
good
to
evaluate
how
much
risk
we
feel
we're.
Taking
with
that,
I.
E
I will insist that we should be able to roll back safely, and just not spend time looking at every single error. When I was doing release management, I spent more time digging through every single stack trace, opening issues, and having engineering managers not even care about my request to just take a look. You open an issue, and maybe two months later they say: oh, I'm not the engineering manager of this thing, maybe I will take a look. And nothing happens. This is already happening.
E
Unless
you
have
real
outage,
nobody
would
take
care
of
it
and
the
thing
is:
if
you
have
an
outage,
our
metrics
should
detect
it,
because,
basically
you
are
breaking
cannery
at
the
beginning,
so
I
mean
I
would
prefer
being
able
to
go
back
and
having
a
safe
system
instead
of
just
selecting
every
single
edge
case.
But
we
then
we
need
something
else
which
is
not
part
of
delivery,
but
is
part
of
having
a
healthy
product
which
is
yeah.
We
have
this
error.
This
is
technical
depth.
A
There are different types of errors, right? I think you're right on those ones we see every time in huge numbers; that's clearly tech debt. But I definitely think there are others which are catastrophic, new, and worth investigating on a release.
B
I think you could maybe categorize front-end exceptions a bit, put more weight on them than on Sidekiq exceptions, because those aren't always user-visible, or oftentimes they're expected by Sidekiq. So that could be one way, and you could differentiate just by the users affected in Sentry, right?
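
(Extending the earlier threshold sketch with the weighting just suggested: scale an issue's user count by where the exception surfaces. The weights and platform names are invented for illustration; the fields again assume Sentry-style issue dicts.)

```python
# Hypothetical weighting: front-end errors are always user-visible,
# Sidekiq ones often are not, so scale user counts before thresholding.
WEIGHTS = {"javascript": 1.0, "ruby-web": 0.8, "sidekiq": 0.3}  # invented

def weighted_users_affected(issue: dict) -> float:
    weight = WEIGHTS.get(issue.get("platform", ""), 0.5)
    return weight * int(issue["userCount"])

print(weighted_users_affected({"platform": "javascript", "userCount": 120}))
# -> 120.0: front-end users count in full
```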
A
Should we get an issue open so we can evaluate that? I want to make sure that, if we include this in our key result, it's going to be a useful thing to include, or not. I think you're right longer term; I know a rollback's pretty long, right, so I think it's...
E
Basically we check at the beginning, and then we go straight to the end of the deployment. So: being able to check the status, identifying a place where we can check the stages again, and maybe halting the deployment, or, maybe this is outside the scope, just warning that we're 50% of the way through the deployment and it's not looking good.
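
(A sketch of that halfway check: after each stage, re-run the error-rate query and halt, or at least warn, rather than running blind from canary to the end. The stage names and helpers are hypothetical; error_ratio stands in for the Prometheus check sketched earlier.)

```python
# Hypothetical mid-deployment gate, per the suggestion above.
STAGES = ["canary", "main-stage", "main"]  # illustrative stage names

def deploy_to(stage: str, version: str) -> None:
    print(f"deploying {version} to {stage}")  # stand-in for the real step

def error_ratio(service: str, stage: str) -> float:
    return 0.0  # stub: substitute the Prometheus query sketched earlier

def deploy_with_checks(version: str, threshold: float = 0.001) -> None:
    for i, stage in enumerate(STAGES, start=1):
        deploy_to(stage, version)
        pct_done = 100 * i // len(STAGES)
        ratio = error_ratio("web", stage)
        if ratio > threshold:
            # "we are 50% of the deployment and it's not looking good"
            print(f"WARNING: {pct_done}% through the deployment and "
                  f"{stage} looks bad (error ratio {ratio:.4f})")
            raise SystemExit("halting deployment")

deploy_with_checks("v13.4.0")  # hypothetical version string
```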
A
Awesome. And then we also have lots of other epics in progress. I've not put the Kubernetes ones on; they're all linked under the KR there. But under the release velocity stuff: the blocking nature of security releases. Mira, what do you need from us to be able to close out this epic?
C
We are basically waiting to see how the security release progresses, to see the last iteration of the experiment, and after that we can close that one, the one that relates to security releases and how to deploy. I think the one related to that issue is... where am I... yeah, I think we can close that one. Yes, we are referring to the same one.
A
Amazing,
fantastic
great
work
on
that
stuff,
and
I
should
add
when
I
say
we
need
to
close
the
step
account
like
it
doesn't
mean
we
can't
do
stuff
in
the
future
right.
What
I'm
really
trying
to
reduce
is
the
number
of
things
in
parallel.
We
have
so
the
moment
we
pretty
much
have
a
project
per
person.
It'd
be
great
if
we
had
a
little
bit
less
so
that
we
could
do
a
bit
more
collaboration
and
moving
around
knowledge
sharing,
and
things
like
that,
so
it
doesn't
have
to
be
finished
polished.
A
I
know,
there's
some
tech
debt
that
we
want
to
tackle,
and
we
can
put
that
in
of
course,
as
we
go
along.
So
let's
add
that
awesome
great
work
there
uric
you're,
also
making
great
progress.
How's
the
release
code
going.
G
That will need some discussion with Policeman, just to figure out when we're going to do the exact testing. Specifically, it would be an RC and a patch release, and if that works fine, we can do an actual minor release with it.
G
There's
a
bit
of
a
discussion
in
terms
of
like
oh,
should
we
do
a
patch
release
first
and
then
an
rc
or
an
rcu
and
patch
release,
but
we'll
figure
that
out
in
the
in
the
issue
and
so
from
now
it's
it's
mostly
sort
of
bound
by
when
we
can
do
that
right.
In
other
words,
if
you
could
do
it
all
tomorrow,
for
example,
basically
done,
I
guess
tomorrow,
yeah
apart
from
whatever
proxy
probably
run
into
yeah.
The
rest
is
done
after
that.
G
The
steps
presuming
it
works,
there's
some
functionality
we
introduced
a
couple
of
months
ago
to
automatically
generate
rc's.
I
have
to
take
another
look
at
that.
If
that
is
still
something
we
want
to
do,
and
there
were
some
complications
with
it
specifically
revolving
around
dealing
with
merchant
conflicts
that
we
need
to
look
into.
G
I don't remember exactly what the latest consensus on it is. It's been like four or five months since we last ran that, so the conclusion might well be: eh, we don't need it after all. But we'll see. So that's it, basically.
A
Awesome
that
sounds
great
yeah
if
you
want
to
just
like
dust
off
that
issue
or
epic
or
whatever
it
is
and
share
it
out.
We
can
review
where
we're
at
with
that,
like
I'd
love
to
think
about
these
things
kind
of
in
line
with
knowing
we
want
to
move
towards
rollbacks
and
knowing
we
have
an
mttp
of
12
hours,
a
target
of
12
hours
like
how
does
ourself
fit
together
like
we
can
do
more
than
one
thing,
but
let's
make
sure
we
do
things
in
a
good
order
to
get
that
stuff.
G
Yeah, yeah. I think, if Pepsido's not familiar: this was a system to automatically generate RCs, starting around the seventh of the month or so, and it would then create a stable branch, automatically updated, etc. We proposed this, I think, when we were still doing auto-deploys twice a week, and back then it made a lot of sense.
G
I
think
nowadays,
honestly,
because
we
order
the
ploy
at
least
once
a
day,
perhaps
even
more
I'm
starting
to
sort
of
question
the
value
of
generating
rc's
every
single
day.
You
know
for
like
two
weeks
on
an
end,
because
it's
going
to
do
essentially
duplicate
work
from
all
the
deploys
and
I'll
get
your
video
I'm
going
to
be
very
harsh
and
nobody
checks
it
like.
G
I have to test it there too, so we might as well probably save ourselves a lot of trouble by not doing it, but we'll keep the discussion going in the issue.
G
Right, yeah, I remember that, and I believe that would indeed make it easier. I think the issue still remains that, theoretically, you could have merge conflicts.
G
I
don't
think
it
has
ever
happened,
so
it
falls
in
the
category
okay,
do
we
want
to
do
anything
with
that
or
just
deal
with
the
mass
day
arise?
E
We
had
the
same
problem
with
the
actually.
The
emerge
requested
bumps
github
version,
because
it's
something
like
it
should
work
and
just
do
the
thing,
but
it
may
fail.
So
in
that
case
we
ended
up
having
a
special
job
in
the
pipeline.
That
only
runs
if
the
pipeline
fails
detects
the
name
of
the
branch,
so
it
only
runs
in
the
in
the
specific
branch,
and
in
that
case
it
warns
a
gita
team
on
their
own
channel.
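
(A sketch of that pattern, as the script such an on-failure job might run: detect the special branch and ping the owning team's channel. CI_COMMIT_BRANCH and CI_PIPELINE_URL are real GitLab CI variables; the webhook secret, branch prefix, and channel are assumptions for illustration. In the CI config, a job like this would be marked when: on_failure and restricted by rules to the branch in question.)

```python
# Hypothetical on-failure notification script for a version-bump branch.
import os
import requests

WATCHED_PREFIX = "gitaly-version-bump"  # assumed branch naming scheme

def notify_on_failure() -> None:
    branch = os.environ.get("CI_COMMIT_BRANCH", "")
    if not branch.startswith(WATCHED_PREFIX):
        return  # any other branch: stay quiet
    requests.post(os.environ["TEAM_CHANNEL_WEBHOOK"], json={
        "text": (f"Version-bump pipeline failed on {branch}: "
                 f"{os.environ.get('CI_PIPELINE_URL', '(no URL)')}")
    })

if __name__ == "__main__":
    notify_on_failure()
```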
A
Awesome, sounds good. And then the other epic we have is the container registry one. Do you want to give an update on that one, Scrubec?
F
It's slow work in progress, but it sounds like we're going to pause on this as we wait for some more details from the package team regarding implementation details for the database, as well as a potential need for a second deployment. We're still waiting to hear back from them on that kind of stuff.
A
Yeah
sounds
good
and
this
is
to
support
their
work
migrating
in
november
december,
so
we've
got
a
little
bit
of
time
before
they
fully
need
that
one,
but
that's
gonna,
be
a
good
one.
I
think
if
we
can
see
opportunities
for
experimenting
with
other
things,
we
could
want
to
bring
into
our
own
pipelines
or
anything
like
that.
Then
this
could
be
a
good,
a
good
opportunity
as
well,
so
awesome
so
lots
going
on
there
if
people
need
help
and
stuff
like
feel
free
to.
A
You
know,
obviously
ask
each
other
and
things
like
that,
this
kind
of
ties
into
what
I've
been
thinking
about
with
the
workflow
stuff,
so
I've
just
linked
in
there
like
please
take
some
time
this
week
to
have
a
read
through
my
kind
of
updated
proposal,
thanks
all
to
for
the
feedback.
So
far,
it's
been
really
really
helpful.
What
I'm
hoping
to
get
to
is
something
which,
in
a
situation
where
we
are
like
today,
where
we
go.
Oh,
we
have
lots
of
things
in
progress.
A
If
you
were
able
to
pick
up
a
new
new
issue,
you
would
have
somewhere
where
you
can
go
to,
and
it
would
be
much
more
easily,
like
obvious
to
say:
oh
okay,
if
I
you
know
alessio's
waiting
on
this
task,
that
I
can
do
that,
will
help
move
along
that
key
result.
So
I
should
pick
that
stuff
versus
needing
to
actually
know
all
of
these
epics
in
progress
and
they
have
these
issues
and
go
and
sort
of
hunt
them
down.
A
So
that's
what
I'm
sort
of
leaning
towards
trying
to
find
a
way
where
we
have
probably
a
board
just
because
that
gives
you
some
labels
for
free,
but
basically
a
place
where
you
could
kind
of
easily
find
the
most
important
or
high
impact
things
that
are
ready
to
go.
There's
lots
of
kind
of
haze
around
that
around
like
how
does
stuff
get
there
and
how
do
we
triage
it
and
all
that,
like
don't
worry
about
that
stuff,
we
definitely
want
to
get
something
which
we
can
automate
as
much
as
possible
around.
A
But
the
big
sort
of
thing
I'm
focusing
right
now
is:
how
can
we
make
it
really
easy
to
pick
pick
the
most
high
value
item
when
you
need
to
pick
up
something
basically,.
A
So
feel
free
to
go
ahead.
Add
comments,
make
suggestions
and
go
through
and
things
like
that.
Like
I'm
part
of
this,
I
am
tweaking
some
labels
and
things
on
some
issues
just
feel
free
to
ignore
me
on
that
stuff.
I'm
just
trying
to
work
out
what
a
board
might
look
like
and
it's
a
little
bit
easier
if
I
change
some
of
the
labels
and
some
of
my
issues
to
make
them
a
little
bit
more
consistent.
So
don't
worry
about
that.
Stuff
is
not
an
expectation
that
you're
following
some
mystery
new
process.