From YouTube: 2023-06-19 Delivery team weekly EMEA/AMER
B
Great, we're at time, so welcome. This is June 19th, it's the EMEA/AMER-time delivery group. Skarbox, super interested to see you as it's a U.S. public holiday, but hi, good to see you. So, as always, please have a quick scan through the upcoming office notifications, so you get a sense of who is around. We have Family and Friends Day on Friday, so a bunch of people are out around that date.
B
Just as a reminder — hopefully this is actually relevant for you, Scubex — there is a Family and Friends Day on Friday, but I will be around, so you can leave deployments running. And if anyone else wants to take an alternative day, you can do that. Whatever day you take, if you want to take a Family and Friends day, please just stick it into the Slack app and, as last year, have that discussion first.
C
Thank you. So that's the thing: we now have weekend auto-deploy too — triggers are packaged immediately, so there's no longer a need to set anything like the auto-deploy "tag latest" feature flag or things like that. Basically, everything properly labeled gets tagged in no more than 15 minutes, and the documentation is pending. But there is — this is another discussion thing.
C
This is just a random idea, because we have been discussing this in the past. There is an issue about running tests in parallel, and we were unsure about the real rate of — not really flakiness, because flakiness can be fixed: you can rerun the pipeline, and if it's green, then the deployment can continue. This is more about master stability, so it's almost a real master-stability experiment: how much would master instability affect our auto-deploy schedule if we chose to play this game and just blindly tag things?
A
I'm curious: if we want to go that route, could we tag two different things at the same time? One would be the package, like we do today, and the other would be the latest master SHA at that point in time. The reason I say that is because if we do have a red master, we probably don't want to publish that or try to continue it down the line. But if we still tag the original one that does have the green pipeline, we can rely on our existing procedures for that.
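To make the proposal concrete, here is a minimal sketch of that dual-tagging idea, assuming a hypothetical repository client — `create_tag`, `latest_green_package_sha`, and `latest_master_sha` are invented helper names, not the real release-tools API:

```python
from datetime import datetime, timezone

def tag_auto_deploy(repo):
    """Tag two refs at once: the validated package and the raw master head."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M")

    # 1. The package SHA with a green pipeline -- safe to deploy, as today.
    repo.create_tag(f"{stamp}-package", repo.latest_green_package_sha())

    # 2. The current master head, which may be red -- experimental only.
    # If master turns out to be red, the deploy can fall back to the
    # package tag and the existing procedures still apply.
    repo.create_tag(f"{stamp}-master-head", repo.latest_master_sha())
```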
C
I was thinking about removing that feature flag when I was doing this work, and then said: let's leave it for a while, because it may still have a value, because maybe there will be reasons. I was trying to figure out why we wanted to tag directly from the top of master, and I think that this behavior has always been a side effect of the implementation. The feature flag was always designed to tag on top of the auto-deploy branch, but because of the shared code between how we generate auto-deploy branches...
C
...and how we pick something for tagging, this feature flag always had that side effect. That was the origin of why we decided to change this: if you leave it on, the next time it generates an auto-deploy branch it will start using the wrong thing. And now it could be interesting to see if the wrong thing is really wrong, because there was an issue — repurposed into running tests in parallel for picking into auto-deploys — that was originally designed...
C
As
can
we
run
tests
in
parallels
with
for
every
package
and
back
then
Robert
was
trying
to
collect
some
numbers
about
the
real
level
of
flakiness.
But
I
mean
it's
probably
two
years
old
data
and
it's
completely
different
from
what
is
happening
today.
But
yeah.
C
So, MTTP.
B
Okay, it certainly sounds interesting, I think. If it's a small test, we could try; otherwise I'd suggest let's try and get it into an issue, I guess. My concern is the downside sounds like possibly more release-manager work, because if we end up with packages we can't use, that feels kind of against our current OKRs.
B
Awesome, sounds good — and you passed over that quite quickly, let's see. But I do want to point out to everyone else that the change for the pick label is also going to be huge. Graham, I hope you're watching this recording, because Graham in particular will get lots of benefits from this: the schedule timing of the branches through APAC meant that the previous problem of the gap between the pick and the tag was much more common in APAC.
B
It
happens
in
every
time
zone,
but
it
was
particularly
common
in
the
APAC
time
zone,
so
yeah
super
that
we've
got
that
Improvement.
C
So, because we used to have three timers — one for auto-deploy branch creation, one for picking into auto-deploy, and one for tagging, right — we no longer have those; we have only one timer that runs every 15 minutes and does all three things, one after the other. So every time it kicks in, it checks: is it time to create a new branch? Yes or no, and then eventually it creates it. After creating the branch: is there anything to pick on top of that?
C
Yes or no, and then it picks, and then finally it's tagging. And one of the things that we have done here is that when it's tagging time, GitLab waits: unless the pipeline failed, we can tag, because we are delaying the check. And that's the key part, right, because in order to generate an auto-deploy branch, we need a green pipeline.
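As a rough illustration of the consolidated flow described above — one timer doing all three steps in sequence — here is a sketch. All helper names (`branch_creation_due`, `pickable_merge_requests`, `pipeline_ok`, and so on) are assumptions for illustration, not the actual release-tools code:

```python
def auto_deploy_tick(now, repo):
    """One run of the single 15-minute timer: create, pick, then tag."""
    # Step 1: create the auto-deploy branch if this run falls in its window.
    if repo.branch_creation_due(now) and not repo.auto_deploy_branch_exists(now):
        repo.create_auto_deploy_branch()  # requires a green master pipeline

    # Step 2: pick any merge requests labeled for this auto-deploy branch.
    for mr in repo.pickable_merge_requests():
        repo.cherry_pick(mr)

    # Step 3: tag last, and only if the pipeline has not failed -- the check
    # is delayed to this step instead of running on its own timer.
    if repo.pipeline_ok():
        repo.tag_auto_deploy_branch()
```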
C
So we have this kind of baseline assumption that we are building on top of: at the very basic level, if the auto-deploy branch generation was green, then we had a full test run; and if it's a pick into the auto-deploy, we're just putting something on top of that which was run in its own pipeline — and that, in theory, was run on the master branch as well. So we are willing to parallelize and take the risk here, because basically that's what is happening, right?
C
So the worst case is 15 minutes for every action, including a pick into a deploy, and there's no longer the problem of: I'm picking, but then five minutes later I'm creating a new branch, and then tagging 20 minutes later — because this is what we had before. There was always a gap of 20 minutes between auto-deploy branch creation and the first tag, as well as the pick timer slot, where you never knew when a pick landed. So basically, right now we are removing 20 minutes.
C
Always, every time — and then 45 minutes on every pick into auto-deploy. So there's a lot of improvement in all the steps, and it's more linear. As soon as all of us learn how it works, it's much easier to think about what is happening when the auto-deploy is running, because now it's just one thing that is doing all these steps.
C
Something that was close to the numbers that we had. And basically the other thing — the configurable thing — is on which UTC hour you want to have a package, and this is always at the beginning of a shift. You have this thing where you can tweak the timings: it's no longer a timer; now it's a timing you can tweak.
C
You
can
tweak
the
timing
according
to
your
shift
so
that
it
works
for
you,
but
now
so
before
it
used
to
be
the
first,
it
runs
on
5
PM
and
then
it
creates
the
package.
Now
is
more
about
every
time
it
runs
within
the
5
PM,
so
from
five
zero
zero
to
559.
If
that
Branch
does
not
exist,
create
it.
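A small sketch of that window behavior, under the same caveat that the names are invented for illustration: instead of firing exactly at 17:00 UTC, every 15-minute run inside the 17:00–17:59 window creates the branch only if it does not already exist, which makes the operation idempotent and lets a failed run simply be retried 15 minutes later.

```python
from datetime import datetime, timezone

PACKAGE_HOUR_UTC = 17  # configurable per shift; 17:00 UTC is "5 PM" here

def maybe_create_branch(repo, now=None):
    now = now or datetime.now(timezone.utc)
    branch = f"auto-deploy-{now:%Y%m%d}-{PACKAGE_HOUR_UTC:02d}00"

    # Any run from 17:00 to 17:59 qualifies; the existence check keeps
    # repeated runs within the hour from creating duplicates.
    if now.hour == PACKAGE_HOUR_UTC and not repo.branch_exists(branch):
        repo.create_branch(branch)
        return True
    return False
```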
B
No, awesome. Well, once this gets merged, whoever merges it, please share the documentation in the Slack channel, because I think Alessio makes a really good point: once everyone has kind of wrapped their head around it, once we all understand this timeline and what's going on inside there, lots of other things will be more straightforward.
B
So please do plan to spend a little bit of time trying to understand this, and ask questions if anything's not clear, because this is the beginning of all deployment and all release tasks that follow.
C
Here it is. Okay, so this has been an interesting week — interesting because it started, carried over from the previous one, with the registry DB upgrade. So we were starting with a long "everything is blocked since Friday", and we have been experiencing a lot of database issues. We will see the numbers later.
C
The
first
thing
is
that
you
can
see
what
I
call
the
Myra
effect,
which
is
when
Myra
was
helping
the
the
basically
didn't
the
number
of
ways
to
the
package:
just
dropped
down
immensely
I
mean
it
was
it's
just.
It
happened,
so
I
I
I
like
to
think
about
this,
but
yeah.
C
In
any
case,
we
have
been
doing
other
two
patch
releases
this
week
as
well,
and
there
is
something
that
was
interesting,
I
think
here
also,
so
we
see
the
effect
of
not
having
grain
helping
out
during
Apec
time,
because
basically,
this
days,
it's
an
interesting
one.
So
I
would
just
cover
this,
which
is
the
14th
to
the
15th.
C
So
was
the
first
day
that
we
had
no
production
database
problems
and
things
like
that,
so
we
decided
to
schedule
a
rate,
seven
test
and
if
I,
if
I
remember
correctly,
there
was
also
a
patch
release
happening
at
the
same
time,
not
entirely
sure.
C
So
to
what
happened
is
that
when
we
do
a
patch
release
now
we
clearly
see
the
effect
on
how
to
deploy
so
the
build
Runner
cannot
hold
the
amount
of
package
that
are
required
that
are
requested
for
doing
a
regular
Apache
release
so
something
they
used
to
take.
80
minutes
now
takes
for
four
to
five
hours,
and
basically
there
is
everything
that
is
related
to
targeting
things.
Building
Omnibus
packages
start
failing.
Nothing
really
fails,
it's
just
timeouts
right.
C
So
when
we
do
regular
autoplay
during
this
time,
we
see
a
lot
of
failures
like
packagers
are
still
running,
and
so
you
just
have
to
figure
out
re-run
wage
and
all
those
things
accumulate.
C
Then here we have this problem during Jenny's shift, where there was a QA failure due to a feature flag or something like this, but it took five hours, and then basically we landed in APAC time, so nothing happens, right, and things start going down again in the EMEA morning.
C
The interesting thing about this, though — the good thing about this problem — was that we were testing a Rails 7 package running through production, and so what was supposed to be a two-hour test ended up being a 22-hour test, because it was the only package that we managed to deploy. So yeah, that was the highlight of this, and that's it. Again, we see here, I think, the same thing: when things start going up, we were doing another patch release — I think it was on the 16th — so things start failing.
C
Your attention is put elsewhere, because you have to keep the patch release process running, and the net effect is that we are not deploying as much as we want. Same thing — I mean, it's visible here, but this is not the thing that I wanted to show; it's this one. So that's the number of deployments, and this ties back to my idea of experimenting with tagging from head. If we look at the past month, this is basically everything we've been doing in this 16.1.
C
So this is why I was thinking, right — I hope this is just an unfortunate month. We have been doing the PG 14 upgrades, we got incidents, there's a lot of things that happened, but yeah, it's still a fact, right: we only deployed five packages once, and all the other times no more than four. So that's the thing. And then, the deployment blockers: I've just done a first pass on this today, on the things that I could figure out.
C
So these are still estimated numbers, not the final ones, but basically with six production blockers we have counted 30 hours, which is quite a lot. And then this is interesting as well: because we are doing many patch releases, there are also those things that are not directly blocking the environments, but we've still spent time trying to figure out why those stable branches were not running, or why something else was just not working right. So that's all, I think. Yep — question?
B
Just a comment, particularly for you, Scuba, coming in — and Ahmad, hopefully he's going to watch this recording: we are working on a sort of an alternative, or an updated, release plan to combine security and patch releases together. One of the big motivators for that is to cut back on patch releases. So there's a twofold motivation for that: one is that, from our side, they're a lot of work, and actually, yeah...
B
That's
not
a
great
use
of
our
time
to
be
putting
out
lots
and
lots
of
patch
releases
with
small
numbers
of
changes,
but
also
we're
hearing
from
customers.
They
don't
like
having
lots
and
lots
of
packages
being
put
out
that
they
have
to
keep
on
upgrading,
so
we're
going
to
try
and
pull
these
together.
B
But
it's
I
want
people
to
kind
of
know
that
that
is
the
direction
we're
heading
in,
because
if
you
see
opportunities
when
your
release
manager
to
push
a
patch
release
out
a
bit
in
the
hope
of
combining
it
with
something
else-
or
you
know
doing
something
that
reduces
the
overall
number
of
packages,
then
that's
that's,
probably
a
good
way
to
go.
B
Yeah, yeah, exactly — it's huge numbers. The general guideline from York is that we should be putting out three patch releases — like, the monthly release plus three a month. So yeah, we're well over, and I know we're getting lots of requests and we're trying to figure out how to handle those. But it's a huge amount of work, for sure.
B
So only for the security stuff — but yeah, you're right: for a security release, you're thinking of it as one, but I know we count that as three. But yeah, exactly.
B
No, awesome, thanks for sharing that, Alessio. I also just remembered I forgot to share — to just quickly mention on point four earlier: I merged a change to add a new step onto the change request template, and this, hopefully, is helpful for everyone. It was driven by the fact that we don't have Graham on release management at the moment, so reliability engineers were getting a bit stuck, because there's a step on the change request template that says: before you start your change, coordinate with the release managers.
B
There
wasn't
always
a
release
manager
if
you're
in
APAC,
but
it
also
made
me
realize,
as
I
was
thinking
about
this
and
chatting
with
it,
reliability
that
the
way
the
current
template
was
set
up
was
actually
meant.
It
was
always
quite
reactionary
for
us,
because
the
first
point
that
release
managers
were
brought
into
these
things
was
they,
when
you're
ready
to
start
paying
the
release
managers,
which
is
like
not
always
great
from
our
point
of
view.
B
So what I did to cover these things is I've basically flipped it up a bit, so that in the approval stage there is now a step which says: coordinate with the release managers and basically figure out a plan. What I'm expecting that will look like, for the intended use case this change came about from, is: if somebody reaches out and says "I want to make this change during APAC hours", it probably doesn't really matter from our point of view, but you may want to give guidance on...
B
...please don't start before this time, or please make sure you're finished before this time, for the AMER and EMEA release-manager times. But we should also, at that point, tell them there is no release manager in APAC — you're good to start whenever you're ready within this window — so they're not waiting on us. The other thing we can use it for is: if you have release plans and you suddenly get a ping which is like, "hey, on Tuesday I'm planning to take staging out for four hours", etc., or whatever the plan is.