From YouTube: 2021-10-11 Delivery team weekly EMEA/AMER
A
Let's see, hello. So Skarbek is out, Henry's out, Graham's hopefully asleep, and who else? I'm not sure if Robert's working today; it is actually a U.S. holiday, so we shall see. Which, Mayra, you may have a really quiet day by yourself. Yeah, you could be. I mean, I don't think you're 100% alone. I think there are some people working today, but there are also quite a few out, so.
A
I think it's if there's anything you feel like the rest of the team needs to be aware of. So, for example, it could be that you're repeatedly hitting upon a problem that's hurting MTTP, or we have a gap in the process that's meaning MTTP is higher, or, you know, you have a great idea for how we could, like, slash it in half.
B
Okay, I don't think I have anything. Mayra, do you have anything?
D
I do, I do. So I have been thinking about how to reduce the MTTP, and one of the things that I have been doing is to experiment with auto-deploy branches, the ones that are created in the Americas time zones. And I am creating now, I think, for the Americas, around three or four of the deploy branches, which means that I am deploying packages of around 20 to 30 commits, smaller packages, and what I want is to deploy one after another.
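A rough, purely illustrative sketch of the arithmetic behind smaller, more frequent packages, assuming the figures mentioned above (three to four auto-deploy branches per working day) and an assumed eight-hour window; the numbers are assumptions, not measurements:

```python
# Illustrative only: rough effect of cutting more, smaller deploy packages on
# how long a merged change waits before its package even exists (a proxy for
# part of MTTP). Figures are assumptions taken from the discussion above.

def avg_wait_hours(working_hours: float, packages_per_day: int) -> float:
    """If merges arrive uniformly and a package is cut every
    working_hours / packages_per_day hours, a change waits on average
    half of one interval before its deploy branch is created."""
    interval = working_hours / packages_per_day
    return interval / 2

WORKING_HOURS = 8  # assumed auto-deploy window for one time zone

for packages in (1, 3, 4):
    print(f"{packages} package(s)/day -> "
          f"~{avg_wait_hours(WORKING_HOURS, packages):.1f}h average wait")
```

This ignores the pipeline, QA and baking time themselves, which is what the discussion turns to next.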
D
This really depends on my schedule and fits my schedule. I don't know if it's going to fit for Robert's or Skarbek's, but it's something that I am adjusting for, and it has been fun. It has been working so far, so yeah, that is one thing that I am doing. And another one that I have been noticing is baking time, which is something that I sent a message about on Friday.
D
So we have canary QA, which lasts around 20 minutes or something; sometimes it lasts more. But after that we have a baking time of an hour, and since I am aiming to deploy smaller packages, I wonder if we should at some point reduce baking time to not be one hour but to be 30 minutes. And I know that one of the goals of the single pipeline is to actually execute QA canary and baking time at the same time.
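A back-of-the-envelope sketch of the canary-stage timings being discussed, using the durations mentioned above (roughly 20 minutes of canary QA, 60 minutes of baking today, 30 minutes proposed) and the single-pipeline idea of running them concurrently; these are the quoted figures, not measured values:

```python
# Rough wall-clock time spent in the canary stage per package, under the
# durations mentioned in the discussion (assumed, not measured).

CANARY_QA_MIN = 20

def canary_stage_minutes(baking_min: int, concurrent: bool = False) -> int:
    """Total canary-stage time: QA then bake, or both at once."""
    if concurrent:
        return max(CANARY_QA_MIN, baking_min)  # single-pipeline goal
    return CANARY_QA_MIN + baking_min          # sequential, as today

print("today (sequential, 60m bake):", canary_stage_minutes(60), "min")
print("proposed (sequential, 30m bake):", canary_stage_minutes(30), "min")
print("single pipeline (concurrent, 60m bake):", canary_stage_minutes(60, True), "min")
```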
A
So I think my kind of ideal would be the one I mentioned in Slack last week, which is: it would be super for us to do the work to have a visual of what value baking time gives us, and then, as we change it, we'd be able to assess, like, are we going too far or not? So one thing we've been asked to do, which we haven't prioritized, is adding a change failure rate measure, and that would be a way of tracking it. It's like the counterbalance to going fast.
A
The other one I was thinking it could be, less work, is more of a kind of retrospective analysis, because what I actually don't have any idea about right now is, when we do find problems in baking time, like generally, how long into baking time do we find them? Like, if we always find them five minutes in, then sure, it probably doesn't matter if we do half an hour or an hour. But if we actually have found some fairly significant issues at, like, 45 minutes, then that changes it. But I have no data on that whatsoever.
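A minimal sketch of the retrospective analysis being proposed, assuming bake-window start times and problem-detection times could be pulled from wherever that history lives; the sample timestamps and the 30-minute cut-off are invented purely for illustration:

```python
# Sketch: for past canary problems, how far into the baking window were they
# detected? The samples below are made up; a real run would pull deployment
# and incident timestamps from the actual trackers.

from datetime import datetime

# (baking window start, time the problem was detected) -- hypothetical samples
samples = [
    (datetime(2021, 10, 4, 9, 0), datetime(2021, 10, 4, 9, 7)),
    (datetime(2021, 10, 5, 14, 0), datetime(2021, 10, 5, 14, 48)),
    (datetime(2021, 10, 7, 11, 30), datetime(2021, 10, 7, 11, 55)),
]

minutes_in = sorted((found - start).total_seconds() / 60 for start, found in samples)
caught_by_30 = sum(m <= 30 for m in minutes_in)

print("detection times (minutes into bake):", [round(m) for m in minutes_in])
print(f"caught within 30 minutes: {caught_by_30}/{len(minutes_in)}")
```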
C
So do you mean tracking if we are in baking time when an incident arrives, and whether this, let's say, incident-inducing change is included in the things that we deployed, or if it was already there? And, yeah.
A
This, yeah, I mean, it's possible for sure; I don't know, yeah, it's a possible one. So, around timing on this, on changes to this stuff, I did want to bring one up. This is what kind of led me to put my discussion point on 7a, I'll leave the numbers alone, 7a for this agenda, because one thing I do want to also think about is timing. So, for this one, like, we have a lot of pipeline changes coming.
A
We know we have a thing that's entirely within our control that we can tweak on. So, if you want to start an issue, Mayra, to, like, think about how we will make this decision, or what sort of data we could gather, or whether you want to maybe wait until a little bit further down the road to, like, decide this.
A
Right, then open an issue. Oh sorry, okay, thanks, great, thank you very much, everyone. So, on the announcements, I've left mullet read-only.
A
We haven't built it; I haven't given it any thought at all. It was a request that we add one. So it's the only DORA metric we don't have represented at the moment, and it is the one that is the kind of counterbalance to moving fast. It's the kind of metric that says you're going too fast, so it has value.
A
There are two ways we could measure it, and from sort of reading up, there's no great consensus about which one people generally use; it seems pretty split. I have to go back to the actual book to see which one they said. I think the official one is how many, like, issues get through to production as a result of your deployment; that, I believe, is the official failure rate, right? In which case it's a bit more of a quality thing. It might be an interesting one.
C
So it's not about incident-inducing changes, it's actually about shipping new issues?
A
On general failure, and you kind of can use it as an indicator that we don't have enough other stuff in place, right? So any kind of pipeline failure could be a reason to investigate and make decisions around pipelines.
A
The other one which we could do is on incidents, and in a way incidents is interesting, because those are the things where, as, say, infrastructure, we really... there's a high cost of raising an incident and having to have people involved and investigating. So, yeah, I'm not sure, to be honest. I think it depends what we want to use it for.
C
Yeah, I'd agree now. The reason why I was asking is because, in terms of how fast we move, we track at the level of... our base unit is the merge request, because MTTP is merge requests, right? So I was trying to figure out if the counterbalance to this is still merge-request based, based on merge requests, so that they share the same amount of changes, right. So we shipped a thousand changes, but two of them were incident-inducing.
C
Issues, not necessarily something that needs to be fixed in a patch release, that kind of definition, right? So something that is broken, and broken big enough to deserve, maybe, patch releases, kind of. It really depends if you are past the 22nd, but it kind of gives the idea: something that someone will quickly fix, even if it's not inducing an incident.
C
It's a very weird definition of it, but at least we are counting based on the same amount of stuff, because if we move from merge requests shipped per day to the number of packages that we had to roll back, these are kind of two...
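A tiny worked example of the counting question being raised here, using the hypothetical thousand merge requests and two bad changes from above; the package counts are added assumptions, only to show how much the choice of denominator matters:

```python
# Illustrative only: the same failure count gives very different rates
# depending on whether the denominator is merge requests or deploy packages.
# Figures are the hypothetical ones from the discussion plus assumed package counts.

mrs_shipped = 1000          # merge requests shipped in the period
bad_changes = 2             # incident-inducing (or quickly-fixed) changes
packages_deployed = 120     # assumed: packages cut in the same period
packages_rolled_back = 2    # assumed: packages rolled back

print(f"per merge request: {bad_changes / mrs_shipped:.2%}")                  # 0.20%
print(f"per deploy package: {packages_rolled_back / packages_deployed:.2%}")  # 1.67%
```

Same two failures, very different numbers, which is exactly the point picked up in the next turn.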
A
Very different numbers. Yeah, that's a really good point, so I'll add that in the issue, but yeah, that's a really good point. Yeah.
A
So, one thing kind of on my side: I haven't prioritized this because at the moment it seems like a reasonable amount of work for not giving us too much decision-making. I think what we're working on right now day to day is more valuable, but that's kind of why I think, like, that's the obvious one for making a decision around baking time and the other pipeline, sort of, compressing-the-pipeline decisions. But if there's an easier way to also kind of get an assessment of that, that would also maybe be interesting.
A
Cool, okay, thank you for that. Like, feel free, there are the issues on there if you want to take discussion there. Like I said, I'm starting from kind of zero on this stuff; it was just that we were requested to add one. It will be useful at some point, I'm sure. What I wanted to highlight, kind of very much related to this, is the fact that there is an awful lot of discussion going on, and hopefully a decision soon, about reordering the pipeline.
A
So this is very relevant to bringing in other changes, but it will also affect all of us and future work. So a lot of, maybe, the single pipeline things, and possibly the post-deployment migration work that you've been thinking through, Mayra, like, this is quite likely to be built on top of some of these changes.
A
So I just wanted to highlight kind of what's going on here and some of the threads, so that you could all kind of read through and get up to speed on these things. Once we have an agreed solution, we can actually walk through it in depth, but just so you can see kind of what's happening here, I recommend reading through epic 6401.
A
It has all the context behind the staging changes that are coming in, and that was the kind of initial step before the pipeline changes. So we already have two new environments: for now you'll only be seeing staging canary. You will soon start seeing staging-ref, which will be deployed to shortly, and you can see on the plan that it's going to be sitting in parallel.
A
So initially you'll see the deployment tasks going through, and then later you'll see that they'll start to run tests, and this will start to become a blocking environment as well. So we have these kind of parallel stagings. Now, there is kind of a lot of discussion on epic 6401, but the gist of the idea is that we eventually get to a stage where staging environments are pretty flexible.
A
It will be set up so admin changes can be tested, and there's a whole new test suite being built that will run on there. So it's kind of a bit of a testing-ground environment for now; in the future we'll work out, like, how does it fit around staging canary and staging.
A
So that's all going on. I've linked the epic that's tracking the infra tasks, that's 559, if you want to see what's happening there. But then, so, it was all going smoothly, and then Olefi quite rightly pointed out that this still doesn't actually fix all the problems. So there's the comment that gives that. So basically the problem is, we don't deploy every package, we don't guarantee...
A
...every package will go through production. So we still don't, by having a staging canary: staging canary and staging are not guaranteed to match production canary and production. So, as a result, we've been working through scenarios to try and solve this problem. All of those things are in issue 2004, and they are not super easy to wrap your head around, so apologies in advance. We are looking, so far, like option four is probably the preferred option, which we believe solves all of the scenarios, it does.
A
It actually also will help MTTP, so that's good as well. But, so, we have a meeting tomorrow, Leslie and I are meeting with Quality to talk it through and just check they're happy with this. But, well, I'll let you go and have a look through. Like, so, my main point really was: spend some time, have a look through here, be aware of, like, the different options that have been considered, and have a think.
A
If you see any... first, if you see any problems on those, if you have any other ideas, or you can see any concerns, then that would be useful. And once we have, like, input from Quality, we can see what the next steps are. But, Ruben, do you want to verbalize your point?
B
I was just wondering: if we do that, then people hitting staging can also hit staging main rather than staging canary. So, for example, even if we send 90% of traffic to canary, 10% of people will still hit staging main. So if their changes are only deployed to canary and not to staging main, then they'll need to be aware of this fact. And also anyone, like, testing stuff by logging into the staging Rails console, they'll also need to be aware that their changes are not on staging main; actually, they're only on staging canary.
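A hypothetical sketch of the 90/10 weighted split being described, only to make the proportions concrete; the weights and the routing function are illustrative and do not reflect how the real staging load balancer is actually configured:

```python
# Hypothetical weighted routing, purely to illustrate the 90/10 split
# discussed above; not how the real staging load balancer is set up.
import random

WEIGHTS = {"staging-canary": 90, "staging-main": 10}
rng = random.Random(0)

def pick_backend() -> str:
    """Choose a backend in proportion to its configured weight."""
    stages, weights = zip(*WEIGHTS.items())
    return rng.choices(stages, weights=weights, k=1)[0]

hits = [pick_backend() for _ in range(10_000)]
print({stage: hits.count(stage) for stage in WEIGHTS})  # roughly 9000 / 1000
```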
C
I mean, let me take a step back and explain why we proposed this, because maybe... so there are some reasons. We talked about the problems, but there are some deeper whys that we didn't mention in the thing. So the purpose of a canary environment, regardless of being... so, let's use the proper terms for once: the purpose of a canary stage within an environment is to identify problems, right? So when we are talking about production canary, we want to reduce the blast radius of new changes.
C
So we have a small set of servers which are running the canary stage, so that, if something goes badly wrong, we can gather signals out of it, but we want the signals to be, say... they will not induce a general outage on the system.
C
So this is why we have production canary right now. But if you think about the same situation in the canary environment, sorry, the canary stage in the staging environment, you want to use the opposite analogy, because you want to amplify errors as soon as possible. So in that case staging canary has the purpose of amplifying errors, and so most of the traffic should go there, because the sooner it fails, the sooner we can...
C
...we can stop it and start fixing the thing. So that's the reason behind this, and having most of the traffic going to the staging canary environment allows us to keep production main and staging main deployed together, because at that point we don't care; it's just a tiny fraction of traffic that is there only for the purpose of running the QA multi-version test.
C
So as a developer, there will be a transition period, obviously, because this thing changed, but basically what you are going to have here is that staging canary, for you, is canary.
C
So the other machine is just there for a specialized UI type of test, which is why Amy, in option three, was proposing to rename the thing, instead of just using canary for canary and using canary-multitask-blah-blah-blah. The point is that then it's harder to wrap your head around, because then you have these things which have a specialized name, and so we decided to keep things together, right?
C
So with the same order: you do canary first, because canary allows you to test new changes, and then in one case you amplify the blast radius, and in production's case you keep the blast radius as small as possible. So that's the thing. Specific to the problem of connecting to the SSH machine, we could do even something easier. Like, you know, when you log into the SSH console, it just writes out the information about the environment for you.
C
If you remember, it just tells you the Postgres version and things like that. If this is what we need, we can just put something there that tells you, you know, "you are probably in the wrong stage", very prominently. So when you SSH in, it's just: what is going on here, right? So it would be a matter of time, and then eventually we will all learn how to operate this. But yeah, I agree, this is a drawback.
B
Yeah, I guess, since we have basically the same situation in production, developers will be able to sort of understand the situation faster.
A
Yeah, I mean, it's a shame, it's a shame we have to change everyone's workflow, but, like, I think it is one of those things that, like, in six months' time or a year's time, it will be logically more sensible. So we kind of avoid some process tech debt by paying upfront, but...
C
Yeah, there's also another point that we would like to gain as a delivery team out of this, which is moving to, let's say, a new way of interacting with pipeline failures, which is checking multi-version compatibility based on metrics, just on error rates. So right now we can't really do this because, when we deploy staging, all of the staging environment is related to the new version. So we check the metrics.
C
They are part of our rollout check, but basically we never experience a high failure rate... because maybe we merge something that is breaking Sidekiq queue processing, because maybe we are enqueueing stuff with the wrong keys, and then we have an error rate on Sidekiq. But if we have these two versions running together, we can try to basically consider QA as just one signal, to see...
C
We have a signal which is the QA smoke test, another signal which is the QA multi-version tests, and then we have another signal which is how well the environment is behaving in terms of metrics, or the overall health of the system, which I think may help us quite a bit more than what that single test can do, right? Because the QA test may spot some type of error, but others are, say, spread across the application; it's really hard to write tests for every single use case.
C
So as a team, probably, we should start thinking more about the signals that we receive from the application, in terms of: is it behaving, or are we having more errors? Because if we have more errors, something is wrong. We can later figure out what is wrong, but there's a clear signal: if we are having a spike in errors in canary, sorry, in staging, it's very likely that we will have the same spike of errors in production, amplified by the amount of traffic.
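A small sketch of the kind of error-rate signal described here, comparing the stage running the new package against the stage still on the previous version; the sample rates and the threshold are invented for illustration and are not part of the real rollout checks:

```python
# Illustrative error-rate signal between the stage on the new package
# (staging canary) and the stage on the previous version (staging main).
# The sample rates and the 2x ratio threshold are assumptions.

def looks_unhealthy(new_version_rate: float, old_version_rate: float,
                    ratio_threshold: float = 2.0) -> bool:
    """Flag the deployment if the new version errors notably more than the old one."""
    baseline = max(old_version_rate, 1e-6)  # avoid dividing by zero
    return new_version_rate / baseline >= ratio_threshold

# staging canary (new package) vs staging main (previous package), errors/min
print(looks_unhealthy(new_version_rate=4.8, old_version_rate=1.1))  # True: investigate
print(looks_unhealthy(new_version_rate=1.2, old_version_rate=1.1))  # False: proceed
```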
A
Awesome. I'm going to move along because we've only got a few minutes left, but please keep asking questions about this stuff. We'll definitely be talking about this more; like, once we've decided on our option, we can work out next steps for making the changes, and then the kind of future iterations of adding traffic and bringing in these extra, like, metrics and things. So, 7b, let's go. Do you want to jump in and start?
C
Yeah, so today I was having this chat with Andrew and I realized how much I don't like speaking about SLOs in general, because, I mean, it's just a lot of shared vocabulary where I kind of grasp what we're talking about, but I really don't understand, right? So that's the basic point, and he was making the same point, right, that he was having trouble having conversations around the organization, the company, because of, yeah... We refer to error budgets, and in certain areas of the company...
C
We think of error budgets as something specific that we are working on for handling failures, while they have a general definition that is in terms of metrics and things like that. So I decided to give it a try with the SLO book that I linked there, to try to figure out if we can get to a shared understanding of these things.
C
I do believe these things are really important, especially for us. I mean, we are neighbors of the Scalability team and these things are basically what they are working on. And I know we are all backend engineers here, but I'm quite sure that the SREs in the team know these things better than us. But yeah, I would love it if we could just be better at this. Another thing that I was struggling with is this, and I'll be short here because we'll be running out of time: basically, we were defining...
C
...we are defining the compliance dashboard, right, and we were talking about: we want to run X number of rollbacks per period of time. So it's an absolute reference; we want to do this many over this period of time. This is really hard to model as an SLO, because they are designed to operate on percentages.
C
So, you can... maybe they will not be strictly an SLO or an SLI, or you can think about it in reverse, like: I want to have, let's say two, we were doing the math this morning, I want to have two percent of deployments rolled back in staging every month, which, at the current rate of deployment, is roughly three deployments a month.
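A quick check of the arithmetic mentioned here; the implied deployment rate is backed out from the two figures in the conversation (2% of staging deployments being roughly three rollbacks a month), so it is an inference rather than a measured number:

```python
# Back-of-the-envelope check of the rollback-budget figures mentioned above.
# The implied ~150 deployments/month is inferred from "2% is roughly three a
# month"; it is an assumption, not a measured value.

target_rollback_ratio = 0.02              # proposed SLO-style target: 2% of deployments
implied_deploys_per_month = 3 / target_rollback_ratio
print(implied_deploys_per_month)          # 150.0 staging deployments/month

# The same budget the other way round, for different deployment rates:
for deploys in (100, 150, 300):           # the deployment rate will change over time
    print(deploys, "deploys/month ->", deploys * target_rollback_ratio, "rollbacks allowed")
```

Expressed as a percentage, the allowance scales with the deployment rate automatically, which is the tension raised in the next point.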
A
One thing I'd like to factor into this, which probably takes us to the issues, as I do want to use the last couple of minutes on point nine: one thing to factor into the SLO stuff for release managers, for rollbacks, is that that number will change. So as we get more comfortable, or as we start automating these, it won't necessarily be three; it may be five or ten. So I believe that's quite difficult to change on an SLO, because your percentage tracking is different. So we might want to review that.
A
But let's take this to the issue. I will put 7c on Slack. I'm just going to stop the recording and just use the last couple of minutes to run through point 9.