From YouTube: 2022-03-13 Delivery team weekly EMEA/AMER
B
Awesome, so let us begin. This is read-only, but just to re-share it: some of you have had your clocks changed. Some of us are due in a couple of weeks' time, and then Myra is in three weeks' time. So we're just in this in-between period. Ruben, do you change clocks as well? You didn't check. No, you don't.
B
Graham also doesn't change. So over the next three weeks some clocks are changing, and once we get beyond this we can reassess, but please speak up if any meetings jump to times that either don't work for you anymore or give you any clashes. I think we will have to move things around a little bit for the next few weeks until those who switch get back into summer time.
B
Awesome. So the first discussion item: Vlad has joined us for a learning internship.
B
He
is
one
of
the
senior
support
engineers
kind
of
focused
on
kubernetes,
so
he's
going
to
be
pairing
up
with
henry
and
be
around
with
us
until
early
june,
so
reach
out
and
say:
hi
you'll
probably
see
him
around
in
slack.
I
think
he's
going
to
probably
reach
out
and
set
up
coffee
chats
with
you
all
over
the
next
couple
of
months
as
well.
But
if
he
has
any
questions,
please
do
what
you
can
to
help
him
out.
B
Awesome. The agenda is a little bit outdated, but just in case we have questions: John just posted in Slack that we have made a small adjustment to incidents relating to deployments, covering not only deployment-related incidents but also other types. We're grouping deployment-related incidents along with other backstage things, which will include failing backups, monitoring going missing, or any of those risks where we're basically fine right now but have increased risk in the future.
B
So John just posted that he's updated the Woodhouse UI for when you declare a new incident: he removed the deployment-related-incident check and replaced it with one that will add the backstage label.
B
A few things, I guess, for a little extra context around this one. This is only for things that are fully internal; if you have something that's production-related, go ahead and raise it as a fully fledged incident so that it can be assessed for whether it is actually customer-impacting or not. And then there's another thing that's perhaps not super visible right now.
B
One of the reasons behind shifting this is that Reliability are trying to figure out how to help the EOC, because it's a bit of an overloaded role right now: there's a lot going on, and there are often lots of incidents. What this backstage label will be used for is for the EOC to start making some prioritization calls. What I'm expecting to happen is that a secondary on-call rotation gets created, and the secondary on-call engineer would be the person who helps out on the backstage things.
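As a rough illustration of the routing described here, a minimal Python sketch; the label name comes from the discussion, but the rotation names and the function itself are assumptions for illustration, not anything from the actual tooling:

```python
# Illustrative sketch only: incidents carrying the "backstage" label are
# picked up by a secondary on-call rotation; everything else stays with
# the EOC, where customer impact gets assessed.

def route_incident(labels):
    """Return which rotation should pick up an incident with these labels."""
    if "backstage" in labels:
        return "secondary-oncall"  # fully internal: failing backups, missing monitoring
    return "eoc"                   # potentially customer-impacting


print(route_incident({"backstage"}))    # secondary-oncall
print(route_incident({"severity::2"}))  # eoc
```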
D
I suppose this doesn't change feature flags being blocked and everything else? Okay.
B
We will need to go to Compliance and check what that looks like, because our current process of overriding a deployment comes with very explicit approval from the engineer on call, which then gets documented, and Compliance are using that to make sure that decisions about changes going into production are made by someone who has all the information. So shifting this to labels should, I think, be possible, but it's a separate process to get that approved by Compliance.
B
Awesome, great. And Myra, you have the next point.
D
Yep, thanks. I just wanted to share the results of the initial testing for the new deploy order. Basically, everything is working as expected; the CI and the new jobs are working now.
We are still not going to merge the changes to master; that will be done later. As a reminder, there are multiple changes, but the main one is the deploy order: we are going to deploy to the canaries first. That means staging canary, then QA, then production canary, then QA, then a waiting time, and after that staging and production are going to be deployed roughly at the same time, with a separation of 30 minutes.
D
Then
it
is
going
to
be
yeah
the
opposite
migrations
on
staging.
Then
it
is
going
to
be
q
a
on
staging
and
then
the
post
deployment
migrations
on
production
and
other
changes
like
making
time
only
taking
30
minutes
instead
of
one
hour
is
something
that
we
should
also
notice
and
one
interesting
one
that
I
wasn't
quite
aware
is
that
the
coordination
pipeline
duration
is
going
to
be
shorter
around
five
hours
instead
of
the
regular
eight
hours,
which
is
great
and
well.
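The ordering above could be written down as data. This is a hypothetical Python sketch for illustration: only the stage sequence and the 30-minute figure come from the discussion, while the list structure and helper are assumptions:

```python
# Illustrative sketch of the new deploy order: canaries first, then
# staging and production roughly in parallel, then post-deployment
# migrations, with a 30-minute bake time instead of one hour.

NEW_DEPLOY_ORDER = [
    ("staging-canary",    "deploy, then QA"),
    ("production-canary", "deploy, then QA"),
    ("bake-time",         "30 minutes instead of 1 hour"),
    ("staging",           "deployed roughly alongside production, ~30 min apart"),
    ("production",        "deployed roughly alongside staging"),
    ("staging",           "post-deployment migrations, then QA"),
    ("production",        "post-deployment migrations"),
]

def summarize(order):
    """Render the order as a one-line arrow chain of stage names."""
    return " -> ".join(stage for stage, _ in order)

print(summarize(NEW_DEPLOY_ORDER))
```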
D
If someone outside the team has any questions, we have an issue dedicated to this, with an announcement and the questions so far; please point that person or persons to the issue. And of course, if you have any questions, I'm going to be around and Graeme is also going to be around, or you can ask them right now if you have any.
D
Yeah, mainly that one, because right now staging normally lasts one hour, or one hour and a half with QA, and production is two hours. So yeah, we are doing those simultaneously, and we are also executing a shorter bake time of only 30 minutes.
D
Of course, the time really depends on the deployment; it might be longer. If we have something like long post-deployment migrations, then the timing is going to increase. But once we also start separating the post-deployment migrations from the coordinated pipeline, the timing should be more stable and shouldn't decrease or increase as often.
D
Yes, thank you. I still need some sort of confirmation from Graeme that everything is ready to be switched over. It might be today, I still need to confirm with Graeme, but probably tomorrow or later this week; it should be this week.
B
And I think that's it. Yeah, probably just one thing, perhaps for added awareness: we have a few things that don't have a canary. KAS is one of those, Sidekiq is one of those, and there possibly are others. So whilst we make this shift, we should perhaps just be aware of that. Right, going back to your MR.
B
I just tagged Graeme in on there as well, just so that we make a plan about when and how we're going to get these things back, so that we don't lose test coverage for them.
B
Awesome, great stuff. Thanks for the update, Myra, and exciting stuff; thank you for helping get us to this stage, and everyone else who's been involved as well. Actually, this is going to be huge. I think we've kind of adapted to having staging canary in there as an additional environment, so it'll be super exciting to get back to not having an additional duplicated environment again. Nice work.
B
Awesome. So, release managers: deployment blockers looked pretty low last week, so hopefully that means it was a good week. But what's your assessment? Is there anything we need to adapt based on last week?
B
Did you hit the refresh?
E
I did. Our MTTP in-hours stops in February; it says March 2022, but I can't tell. And then for the MTTP daily, it doesn't look like there's any data after February 28th.
E
Well, you know, unless there's something new that we learn, I don't think we need to adjust anything. Despite a few incidents last week, I was still able to see at least two deployments out during my time, which was pretty good for the most part. That might change today, because I'm on in an hour or four, and I don't know what math, calculators, and date stamps do to me, but we'll see.
B
Awesome. And once we have done the switchover, you may need to adjust the schedule, because things will be a little faster.
F
Should we go ahead and make the release metadata repository available to all engineers, now that we have a go-ahead from Compliance?
B
Awesome, yeah. I think as long as that's all that's in there, and there's no access to other things, then yeah, I think that makes a lot of sense.
B
Once we've done this, to help people understand what to use it for, it could be worth also having an update in the engineering review with a little bit more detail. You could put another comment on the issue as well: here's what people get access to, and here are the sorts of questions you'll now be able to answer with this data, so that people can actually make use of it.
B
Yeah, exactly. It also reminds me of the issue, which I don't know if you raised, Robert, but you mentioned that the way we structure that data is a little tricky now that we're doing so many auto-deploys; the actual folder structure is a little bit tricky.
B
There is a dedicated issue for that. The proposal, I think, was to group them by milestone rather than by package; it was to put in a little bit more structure. I'll dig out the issue and share it.
B
Because that's going to become an increasingly bigger problem, right? As we increase the number of deploys we're doing, it gets a little tricky to find what you need. So let's see, I'll see if I can find that. It'd be awesome to get it on our schedule so that we do it in, say, 14.10, so that we just have an inconsistent layout for a little bit before we hit 15.0, and then all the 15s would be correctly organized.
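As a rough sketch of that milestone grouping, a hypothetical Python snippet; the file-naming scheme here is assumed for illustration and is not taken from the actual repository:

```python
# Illustrative sketch: group metadata files by milestone (MAJOR.MINOR)
# instead of keeping one flat folder that grows with every auto-deploy.

from collections import defaultdict

def group_by_milestone(filenames):
    """Map names assumed to start with 'MAJOR.MINOR.' to their milestone."""
    groups = defaultdict(list)
    for name in filenames:
        major, minor = name.split(".")[:2]
        groups[f"{major}.{minor}"].append(name)
    return dict(groups)

print(group_by_milestone(["14.10.1.json", "14.10.2.json", "15.0.1.json"]))
# {'14.10': ['14.10.1.json', '14.10.2.json'], '15.0': ['15.0.1.json']}
```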
D
I do have a question about this item. When you say available to all, do you mean available to all GitLab team members, or to all the public in general?
B
Yeah, okay, awesome. I mean, I think if it's on ops that's fine, but anyway, let's take a review of the issue and see if there are any final concerns before we go ahead.
F
Okay, I'll put a comment over there; maybe I'll ping the infrastructure security group again.