From YouTube: 2021-12-13 Delivery team weekly EMEA/AMER
C: Yes, a lot of people do that, though. Whenever I go and get the downloads of things like the AMAs, it's surprisingly hard to find the actual one, because there are usually about 10 recordings and you have to figure out which ones were just people dropping in on the Zoom before the meeting. Awesome, so I think we're all here, so let us kick off. Henry, you have the first point, on MTTP.
E: Yeah, so there are not really big changes to see on MTTP. What we noticed, though, is that we probably didn't set the blocker labels on incidents. We wanted to set those manually for now, because there's no better way to track how long we are blocked by incidents, right?
E: So we should maybe go and look through the last few weeks and see if there are any incidents that were blocking the deployments where we didn't set this label, so we can track it better. We also now added, at the top, at point one, the link to deployment blockers. This is the epic where we try to track blocking incidents, to get a better overview of when we were blocked and for how long. This is currently still taken manually out of the labels.
E: Yeah, I mean, the assigning of blocker time, this is, I think, really hard, because it is totally manual, right? A human needs to decide when to unblock something, and this is not something which is normally done by automation. Okay, yeah, so this is hard, but at least when you set the labels we can extract this. I mean, I could just search for it.
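For reference, a minimal sketch of what "just search for it" could look like once the label is applied consistently: summing blocked time from resource label events via python-gitlab. The label name and project path here are assumptions, not the real setup.

    # Rough sketch: sum how long incidents carried a hypothetical
    # "incident-blocks-deployments" label, using resource label events.
    # The label name and project path are illustrative only.
    from datetime import datetime, timezone
    import gitlab

    LABEL = "incident-blocks-deployments"  # hypothetical label name
    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
    project = gl.projects.get("gitlab-com/gl-infra/production")  # example path

    def parse(ts):
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))

    blocked_seconds = 0.0
    for issue in project.issues.list(labels=[LABEL], iterator=True):
        added_at = None
        for event in issue.resourcelabelevents.list(iterator=True):
            if not event.label or event.label["name"] != LABEL:
                continue
            if event.action == "add":
                added_at = parse(event.created_at)
            elif event.action == "remove" and added_at:
                blocked_seconds += (parse(event.created_at) - added_at).total_seconds()
                added_at = None
        if added_at:  # label still applied, count time up to now
            blocked_seconds += (datetime.now(timezone.utc) - added_at).total_seconds()

    print(f"Blocked for roughly {blocked_seconds / 3600:.1f} hours")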
C: It doesn't take very long, right? I do it, and it doesn't take me very long; I'm happy to continue doing that. It's just that at the moment we're definitely missing the last few weeks. My guess is that three a week is what I generally expect; Thanksgiving will be different, but I think we must be missing some.
E: Okay, yeah, but probably at least today we have a seven-hour blocker already. We had this this morning and it just resolved now, but we still didn't do the first gprd deploy yet; that's still running, right, Skavak? Yeah, so this should now go through. And for last week I also remember that at least one seven-hour blocker was there which was marked, but I need to check if there have been even more. But yeah, this is something which we really should keep an eye on.
A: Consider... oh sorry, Henry, I interrupted you. (No, no, I'm done!) Okay, so thank you. So should we consider doing something like having a blank label that says we still need to fill in the blocker time, and assign it by default when an issue has, say, S2 level?
A: So we know that it blocked something because of the severity level, and then we pre-assign something like, I don't know, a blocker "tbd" label, right? And then you search by blocker "tbd" and you know these are all the issues that were connected to incidents that still need the manual task of getting the numbers out of the log and figuring out for how long they blocked deployments.
E: I like this idea. It could work sometimes, but not always, because often we lower the severity because of unblocking feature flags and deployments, and sometimes we start with a lower severity because we don't think it's a blocker and then it turns out to be one. So it will help a little bit, but it will not be complete, right? So, but do we change the severity manually, or do we use...
A: ...ChatOps for changing severity? No, we do this manually; I don't think there's a ChatOps way right now to do it. Okay! No, because I was thinking of the triage thing, the one that scans through the issues. I don't know how frequently it runs, but if we can get in between the changes, as long as it's at least S2 once, it can catch it.
C: So, like, how can we get this away from being manual for now, and then see what we can replace?
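As a rough illustration of the proposal above, something like this could run on a schedule, in the spirit of the triage bot, and pre-assign a placeholder label to S2 incidents that have no blocker label yet. The label names and project path are assumptions, not the real setup.

    # Sketch of the proposal: pre-assign a placeholder "blocker::tbd" label
    # to severity-2 incidents that carry no blocker label yet, so the
    # remaining manual accounting can be found later with a label search.
    # Label names and project path are assumptions.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
    project = gl.projects.get("gitlab-com/gl-infra/production")  # example path

    SEVERITY = "severity::2"      # assumed severity label
    PLACEHOLDER = "blocker::tbd"  # assumed placeholder label

    for issue in project.issues.list(labels=[SEVERITY], state="opened", iterator=True):
        labels = set(issue.labels)
        if any(l.startswith("blocker::") for l in labels):
            continue  # already has blocker accounting, nothing to do
        issue.labels = sorted(labels | {PLACEHOLDER})
        issue.save()  # a later search for the placeholder lists what still needs numbers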
C: Awesome, great. And then we've got a few announcements, read-throughs and discussion. Myra, you have the first one.
F: Okay, so here we go. I'm assuming everyone can see my screen; please scream if you don't. Okay, so here it is, a coordinated pipeline. This one is from three days ago and it has been completed, and there are a few differences from the old process. In this one we have a deployment to staging that has been completed and, as you can see, we have a new stage, the one that triggers QA on staging.
F: The jobs that we care about are these three, the full, the orchestrated and the smoke ones, which are three different types of QA jobs that are triggered for staging. And in this case, for staging, the one that we absolutely care about is the smoke one. This job triggers the smoke and reliable tests, which absolutely cannot fail.
F: If a test in this pipeline, which is actually a bridge pipeline, fails, we need to either investigate or retry the job, because it might be flaky, and if it continues failing we need to engage with someone from Quality. The other two, the full and the orchestrated one, I am not very clear what kind of tests they execute, but these two are allowed to fail, and we can continue the deployment if they fail.
F: Okay, so this is for staging, and now for canary we have sort of the same setup. We have the deployment for canary and, once this one succeeds, we also execute three types of QA: the full one, the smoke for canary, and the smoke for main, or production. And in this case it is kind of similar to staging: the one that absolutely cannot fail, and that we need in order to continue with the coordinated pipeline, is the smoke for canary.
F: So, okay, the differences from the old process are that if the smoke one fails, a Slack notification is going to be sent; I put an example in the doc. And the other one is that on the deployer the QA jobs were configured to be retried automatically, I think three times; in this case they are not. One reason is that I am not sure the retry CI configuration is compatible with bridge jobs.
F: I think we still need to investigate that, but another reason is that I'm not sure if it is necessary, because in this case, let's say the QA smoke fails: we are basically going to open the pipeline, and we can retry the jobs right there if one of them failed, instead of retrying the whole pipeline, which I think is actually faster.
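For what it's worth, retrying only the failed QA jobs rather than the whole coordinated pipeline could also be scripted against the jobs API. A small sketch; the project path, pipeline ID and job-name filter are examples, and whether bridge (trigger) jobs can be retried this way is exactly the open question mentioned above.

    # Sketch: retry only the failed QA jobs in a given pipeline instead of
    # re-running the whole coordinated pipeline. Names and IDs are examples.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
    project = gl.projects.get("gitlab-org/release-tools")  # example path
    pipeline = project.pipelines.get(123456)                # example pipeline ID

    for job in pipeline.jobs.list(scope="failed", iterator=True):
        if "qa" in job.name.lower():          # assumed naming of the QA jobs
            project.jobs.get(job.id).retry()  # POST /projects/:id/jobs/:job_id/retry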
A: They already retry inside, if I remember, so we were kind of retrying something that was already retrying as well, and we were wasting a lot of time before getting a notification that something went wrong. Right now we get alerted earlier, because before it was retrying three times, each one of those could retry two times as well, and you only got notified at the end. (Yep, exactly.)
F: So, just for you to know, the smoke one normally takes like 15 minutes, and the full one like 60 minutes, like one hour. And I think that's it.
D: There have been a few times where I'll report that QA failed on a deployment, and then whoever is the QA on-call is like: oh yeah, I already know about this, we're preparing a quarantine, because this started failing a few hours ago. So it is helpful to some extent, but I think for certain situations it's still not quick enough, perhaps. But I do think it's helpful for QA.
A: They are building issues out of it, so they are kind of tracking where something started failing. I think this gives them some kind of point in time where they know the code changed, as opposed to the test being flaky. So, given the same code base, is it failing or not failing just depending on the hour of the day when it runs? So I think that's the point; it gives them some extra data points, which is where it is more helpful.
C: Right, so what I think is strange is that... I love a visual, so that we don't actually have to remember which ones are allowed to fail and which ones are not allowed to fail. Is there a way of moving the ones that are allowed to fail into, like, a downstream pipeline? So we trigger the pipeline, but all we care about is the trigger, and, like Henry says, we don't care about the outcome of these tests, right?
F: Yes, that's what we are doing for the full ones: we just trigger the pipeline and let it run, and we don't wait for it to continue with the coordinated pipeline, and the same for the orchestrated one. (Does green mean that the trigger succeeded?) The trigger succeeded and the pipeline succeeded, but we don't wait for it in order to continue.
F: But I think Henry has a good point. These QA suites are executed on a schedule, so honestly I think that for the purpose of auto-deployment we only need the smoke ones. I'm not sure we actually need to run the full and the orchestrated ones, because they are already executed every two hours or every 12 hours. And another fun fact is that the QA jobs actually ignore the deploy version that we send them.
F: They only use the deploy version to build the QA issue, but they run QA by fetching the last SHA from the API, and they run the tests based on that SHA instead of the deploy version that we pass. So they might actually be running QA against another version if there is another deployment running at the same time.
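A small sketch of the mismatch being described: comparing the SHA we asked to deploy with the latest SHA the QA suite would fetch for itself. The project path, branch and placeholder SHA are examples.

    # Sketch: the deploy passes an explicit version, but the QA suites fetch
    # the latest SHA from the API themselves, so the two can diverge when
    # another deployment is in flight. Path, branch and SHA are examples.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
    project = gl.projects.get("gitlab-org/gitlab")  # example path

    deploy_sha = "abc123"                                     # placeholder for the deployed version
    latest_sha = project.branches.get("master").commit["id"]  # what QA would pick up

    if latest_sha != deploy_sha:
        print("QA may be testing a different version than the one deployed:")
        print(f"  deployed:     {deploy_sha}")
        print(f"  QA would use: {latest_sha}")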
C: Do you mind opening an issue, Myra, and actually getting Quality to weigh in? Because this is one where, as with all of the kind of mixed deployment stuff, we kind of own the trigger, but we don't really own the risk around which tests go with the deployment. So we'll need to make sure Quality is on board with that.
A: Do we know what this coordinated QA is?
C: The coordinated pipeline of deployments? A new one, yeah; so Zeff's building out that stuff. Yeah, that's the new mixture.
C: I think it's fine. I would probably rather not... well, they will always be changing, right, because the code is changing and the tests are changing. I think what we probably want to find a way of doing is getting our health checks and automated stuff in place, so that that becomes a better measure than tests, and then we can hopefully start skimming down the number of tests we have to run at every stage.
D: Well, tests are always helpful, though, at finding potential issues before we go to production and need to roll back, right? So before we're able to look at metrics, QA is still a place where we could find problems. So I wonder if there are any tests inside that full suite that maybe we should be considering as a blocker for us moving forward.
A: That is a curated list of tests: smoke tests are curated and they are supposed to verify core features that must be working no matter what. Oftentimes tests then get quarantined because the test is broken, or things like that, but at least it is a subset and it is easier to reason about them. So I think they are kind of reviewing stuff and gradually moving stable tests into the smoke suite, but I'm not sure how these dynamics work in the QA team.
G: Yeah, I think they don't add tests directly to the smoke test list; a test gets added to the full suite first, then they let it run, check that it keeps passing stably, and then they add it to smoke.
C: Awesome, thanks for going through that, Myra. If anyone has more stuff, let's go async on that, just in the interest of time. Reuben, you have the next discussion item.
G: I just wanted to get some opinions about some proposals. So, for example, what do people think about making the deployments list page more useful for us, for example allowing more sort options, or adding a diff link against the previous deployment, stuff like that?
F: Yeah, I think it would be super useful to also allow us to stop a running deployment through the UI, because there is no way for us to stop a deployment, and this is actually a bug in the rollback. Well, not a bug in the rollback command, but the rollback command checks this page to see if there is a running deployment.
F: If by any chance we have two or three running deployments, because some of them are stale, it is going to say: oh, you cannot roll back because there is a running deployment, and that running deployment is from three weeks ago, when we no longer care about it. So, yeah, an option would be to actually mark one of these deployments as failed.
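As a sketch of that workaround, stale "running" deployments could be listed and updated through the deployments API. The environment name and project path are examples, and whether the status update is accepted may depend on how the deployment record was created.

    # Sketch: find deployments stuck in "running" for a long time and mark
    # them as failed, so the rollback check no longer trips over them.
    # Environment and project names are examples only.
    from datetime import datetime, timedelta, timezone
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")
    project = gl.projects.get("gitlab-org/gitlab")  # example path
    cutoff = datetime.now(timezone.utc) - timedelta(days=1)

    for dep in project.deployments.list(environment="gprd", status="running", iterator=True):
        created = datetime.fromisoformat(dep.created_at.replace("Z", "+00:00"))
        if created < cutoff:       # stale: started more than a day ago
            dep.status = "failed"  # PUT /projects/:id/deployments/:deployment_id
            dep.save()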
A: None of this is doable via the UI right now. I think we are the only user of the API interface of that thing, and the API interface was built by us on top of it, because that thing was designed around deploying with the CI configuration, where you say this job is deploying to an environment. So yeah, I mean, but yeah, it would be super useful.
G: Support for multi-project deployments, so like a deployment object that records a ref for, like, Gitaly, GitLab, Elasticsearch, multiple projects.
C: Yeah, I think that request is, you know, something like when you configure mirrors or something like that, where you have the option of actually saying this project is related to these other ones, I have dependencies, or something like that. It is optional to set up, but if you set it up it means that you get the deployment tracking info. That would be super helpful.
A: I mean, this is what we are asking for with the release metadata thing, where we are just writing stuff in there so that we have the commits, and in those files we have the information we need. But on top of that we can also leverage the deployment tracking API of that project to find something that can then be linked to every single project. So I totally agree, but we first need a good story around how to represent a multi-project deployment.
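Until there is a first-class multi-project deployment object, one hedged sketch of what "leveraging the deployment tracking API" could look like is recording a deployment per component project from the release metadata. The project paths, environment name, SHAs and metadata shape below are all illustrative.

    # Sketch: record one deployment entry per component project (GitLab,
    # Gitaly, ...) via the deployment tracking API, based on whatever the
    # release metadata says was shipped. All names and values are illustrative.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token="...")

    # Hypothetical slice of release metadata: component project -> deployed ref/SHA
    components = {
        "gitlab-org/gitlab": {"sha": "abc123", "ref": "master"},
        "gitlab-org/gitaly": {"sha": "def456", "ref": "master"},
    }

    for path, meta in components.items():
        project = gl.projects.get(path)
        project.deployments.create({
            "environment": "gprd",  # example environment name
            "sha": meta["sha"],
            "ref": meta["ref"],
            "tag": False,
            "status": "success",    # POST /projects/:id/deployments
        })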
A: I just challenge you to find gitlab.com by just writing "gitlab" in the search bar on that thing, because it basically gives you every single fork instead of the original one. But that's fuzzy search, and maybe it got fixed; when I tried it, it was not really helpful. And the other thing is that it shows only three deployments, and they select them, I think, based on the one that has the lowest... something like production, and everything else is sorted by when it gets created.
C: And a little context, kind of related to what Robert has just shared there. You know, one of our big sort of mission items is to start dogfooding more stuff, and not too many features arrive fully delivery-specced, unfortunately. So what we're trying to do here is actually figure out what proposals we should be going back to product managers with, and where we should be asking for extensions or proposing new features, which, you know, we could help implement as well.
C: Awesome, so I'm going to stop the recording.