From YouTube: 2023-07-05 Delivery team weekly EMEA/AMER
Description
No description was provided for this meeting.
A
So a quick reminder: if you have any upcoming out-of-office time, please add it to the list. The main things for this: one is so that, if you're needing something from someone specific, we can be a bit aware of possible time off coming up and what we may need to do. Just a heads up, no change right now.
A
What we may need to do in the future is a bit more alignment, and just make sure we have got enough people to cover things. So visibility is the goal for now; we'll see how things go. Next, we have an announcement.
A
This is not our call, but please be aware of the change to the review apps, which hopefully is a good thing for us. So we'll see about that. Do check out the Engineering Week-in-Review if you need more details. And Vladimir, you have the first discussion item.
B
Yeah, I'm not really sure whether I should bring it here or maybe to the systems demo or systems sync, but we discussed the persistency layer of the pre environment. Since we started, we built the second cluster and we instantiated a GitLab instance there, playing around with all of this: you know, deployments, rollouts and progressive rollouts are going to bring inconsistency between the two clusters. Therefore Skarbek suggested spinning up yet another persistent layer for the pre environment, or yet another pre environment GitLab instance.
B
Yeah, the question is: should we do that, and how can we handle it? If we build a new cluster, how can we switch our release tooling over to the new cluster? Or maybe, since we're touching the pre environment only once a month, we can just leave it like this and then bring it back to the actual state, like, two days before the release.
A
Would a third option be for the infra testing to take place on a different environment, so pre goes back to being for releases?
D
But technically speaking, if you think about the name of the environment, pre has nothing to share with production, because we never deploy to it ahead of production. So as a pre-prod environment it makes more sense for testing configuration changes. But this is just a naming thing, okay.
A
It was about how much tooling change we may need to do to propagate this. It was just a question; I don't have a strong opinion.
D
I think there's no tooling on our side, because there are a number of other integrations. I think this is something I was discussing with abalu this morning, who was actually questioning why we have deployments starting from release tools and deployments starting from Omnibus, and asking: is this code actually in use, can I remove it?
F
And the deployer, I think it's the deployer that reaches out to QA and executes a bunch of QA jobs. So there's a lot of setup that's already been done on the existing pre-prod instance to enable QA to work as effectively as it does. So I linked to one of the two issues that I've created related to this. And Amy, I saw that you had created an issue where we just need to update our steps to make sure pre-prod is a running step ahead of a release candidate being deployed.
F
So that's probably worth mentioning here as well. It's my opinion that we should probably try to return pre-prod back to a state where delivery could use it as-is. That way it's just not interfering with release procedures, and we could test unfettered without worrying about accidentally destroying pre-prod.
F
This is just my opinion, but if others would like to voice their concerns, there are some issues that we could continue this conversation on. I do think at some point we need to make a decision, kind of soon, and go ahead and begin executing on it. That way we keep Vladimir unblocked on the work that we're working on there, and we make sure that we are unblocked as we come closer to the 22nd.
A
I agree, I agree. So I have a few things; some of them might be questions. One is: I believe Quality uses pre for nightly testing.
B
The only concern I see, for me personally, is that building the whole new persistent layer might take an indefinite amount of time. I don't know how long it's going to take to create a new persistent layer.
F
It's really not terribly difficult, depending on how we want to go through it. I kind of listed off one potential solution for that, but that's just one potential solution. You know, we can handle this in a multitude of ways.
B
In terms of a global endpoint, that's going to be easy. We can call this environment whatever, integration or something like that, and we can create a Cloudflare load balancer, like int.gitlab.com, and then connect both newly created clusters to this environment and keep pre as it is. But again, my biggest concern is persistency.
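A minimal sketch of that global endpoint idea, assuming Cloudflare's standard v4 load balancer API: the int.gitlab.com hostname comes from the discussion above, while the pool names, cluster addresses, and the account and zone IDs are placeholders, not decisions made in this meeting.

```python
# Sketch: one hostname (int.gitlab.com) balancing across two cluster origins,
# leaving the existing pre environment untouched. All IDs/addresses are placeholders.
import os
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}
ACCOUNT_ID = "<account-id>"  # placeholder
ZONE_ID = "<zone-id>"        # placeholder for the gitlab.com zone


def create_pool(name: str, address: str) -> str:
    """Create a load balancer pool with a single origin and return its ID."""
    resp = requests.post(
        f"{API}/accounts/{ACCOUNT_ID}/load_balancers/pools",
        headers=HEADERS,
        json={"name": name, "origins": [{"name": name, "address": address, "enabled": True}]},
    )
    resp.raise_for_status()
    return resp.json()["result"]["id"]


# One pool per newly created cluster (addresses are hypothetical).
pool_a = create_pool("int-cluster-a", "cluster-a.int.example.internal")
pool_b = create_pool("int-cluster-b", "cluster-b.int.example.internal")

# The load balancer that fronts both clusters under a single hostname.
resp = requests.post(
    f"{API}/zones/{ZONE_ID}/load_balancers",
    headers=HEADERS,
    json={
        "name": "int.gitlab.com",
        "default_pools": [pool_a, pool_b],
        "fallback_pool": pool_a,
        "proxied": True,
    },
)
resp.raise_for_status()
print("load balancer id:", resp.json()["result"]["id"])
```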
A
I suggest what would be worth doing is... I do agree with Skarbek; I think at some point we probably need to do this. I don't know when we do it; that probably depends on how much longer we want to have the pre environment for testing, like intensively for testing. I think it's been fine where we want to do something for a few weeks and then we come off it. But what if we want to have the pre environment available for testing extensive changes for the next quarter?
A
Then we should probably do something about that. So that might be one way of helping make the decision of whether we should do something or not. And then I think it will probably be a case of evaluating the options and doing some estimating on which one is going to be the cheapest. I think some of the work Graham has done on the release environment could help get release prep off pre, so that might be a simpler way to go. We'd need to evaluate those.
A
Do you want to evaluate that? Do you want to ask Quality first about what they expect from pre? Do they use pre, and if they are using it, what are they running and what are they expecting to have? That might answer that bit, and then we can evaluate the options. But I think if we need to be in this state for, you know, another quarter, then doing the work to separate this out would be a good thing.
D
So yeah, I was thinking, I was partially inspired by when we used to have the fire drills back in the days here in our team. I was thinking that maybe we should do something like that, with some sort of focus on runbooks and checklists, instead of coming up with an idea for scenarios like we did last time.
D
And everyone can try to say how they would do it. I'm thinking about the things that are not trivial, right? Things that don't always happen and that may require deeper knowledge of how release tools behave, or things like that. So this is basically the idea. My intention with this is that maybe we can build a refresher on how release tools behave today, because we have changed many things in the past.
D
So something like an explanation of the process that goes a bit deeper than the overall general documentation that we have. And then, based on that, we say: okay, we know this is how it works, this is our checklist, we have all the tasks, everyone can pick what they want to discuss, and we run it as an experiment through all the teams, so that we find bugs and improve our runbooks and things like that. I think, Vladimir, you have a question.
B
Yeah, well, I do have a suggestion coming from my long-term on-call rotations in the past. As part of incident management in multiple places, we played a Wheel of Misfortune game where we replayed incidents.
B
And since we are talking about sharing knowledge and understanding how things work, we could probably, once in a while, do a Wheel of Misfortune game where we re-execute and re-evaluate our runbooks, and we can just learn from that and improve on the runbooks as well.
D
Yeah, I think it's a good idea. What I was unsure about is that many of our procedures and runbooks leave, let's say, permanent marks in the git history of the product itself, and some of those will actually take hours and hours to run. So how do we approach this? I mean, we can start with an idea: say, I have this problem.
D
How can we solve that? And we see as a team if we agree on a solution, or if we have competing solutions, multiple solutions, and then maybe based on that we decide: let's schedule our test run, and then we think a bit more thoroughly about how we can test the scenario that seems controversial, or that doesn't work as we expected, without affecting gitlab.com. Or, yeah, I mean, instead of just doing this on the spot.
D
We plan a bit more how we want to simulate that problem. Or another option, which is still good, I think, is kind of reviewing what really happened in the past. So, as a release manager, I've run through this and it's been really stressful, it's been really difficult, whatever. Maybe I can just try to write a summary and we try to replay this together as a team.
D
So there are many opportunities, like how to debug a failure in the pipeline: things that are really hard to write down in a runbook, right, because every problem may be different. But seeing how others would approach something like this together may help build confidence and help with debugging those failures.
B
One thing about this Wheel of Misfortune game is that, if we are trying to, you know, replay old incidents or runbooks or something, it shouldn't just be: okay, that was the incident, that's how I ran it and that's how I fixed it. Apparently there should be one person who is leading the game, saying: okay, this actually happened; and then a separate person who is actually running it, who is the player.
B
Yes, so the leader is suggesting, looking at the runbook, what the steps are, and then, you know, the data can be absolutely synthetic. We can pretend that we pushed something, or something like that. But it's important that it's not the person who actually had this issue, or who wrote this runbook, who plays it; you test it on someone else.
D
Yeah, I think it makes a lot of sense. This is why I said: if someone wants to test something that they found in a checklist or in runbooks, I mean, I'm kind of expecting that they didn't write it, because if you have written something down in the runbooks, you probably experienced it, and so maybe you have more context on what you wrote down. But again, yeah, we could say instead that we are replaying something that happened.
A
Cool, so it sounds like we have enough agreement to give it a shot. Would somebody mind taking an action to identify something that we could use to talk through our first one? So something in a process, a template or a runbook that we could use to kick this off, or an error message, or just something that made us go "what is going on?", that would be worth digging into further.
A
Great, thank you, superstar, and I think it would be a really good one. I don't think this should be a one-off at all; I hope it's something where we can figure out a way to make this a really nice, repeatable, regular process, so that we can just constantly iterate on it. So for those of you who didn't volunteer, your turn will come, start looking. We should build a backlog, because I think it'd be a good one to rotate through. But yeah, thank you. Let's figure out logistics.
A
Absolutely, okay, nice! So when you've found one that you want, it doesn't have to be urgent, but when you have something you want to pick, give us a shout and we can figure out logistics on organizing a session to go through it. Awesome. Thanks for suggesting this, Alessia. I think this will be super helpful, because there is a lot here, and I would rather we didn't all have to experience the pain points before we knew how to deal with them.
A
So hopefully this will give us a bit of a knowledge share, so we can actually all learn from the misfortunes.
A
Cool, great. So, release managers, over to you.
G
Can you hear me? Yes, yeah, okay. So my internet is not stable, so I'm sorry about this; please bear with me while I go through this. Let me share the screen.
G
You can see the screen, right? So last week we had only one incident, or maybe let's put it like this: a merge train issue. We couldn't sync between Canonical and Security for the security release on the 26th of June, so we kept deploying, but actually the packages had the same commit.
G
So basically it seemed normal, but we had some hiccups there. We fixed that the next day, after a couple of tries, and then we had a normal week. This week we also started a bit slow, but I think we are normal now. Unfortunately, we have an incident today as well, with a staging migration, so this is blocking us until it's fixed.
G
So let's see how this goes tomorrow.
G
Deployment frequency also seems normal, I think. I don't really know how to read this, but I think it's normal. Let me switch to the last 90 days.
G
This is also around the 26th to 27th of June; this is the hiccup I'm talking about. I'm not sure if that's the reason it's visible here or not.
G
And then we became normal again for most of the week. On to deployment blockers.
G
We had about 18 hours of blockers; I rounded it to 18 hours. It was only one blocker, basically the migration thing, so I just rounded it up. I think this is normal procedure.
D
I wanted to say that, even though we had this problem, and it was terrible, it's actually interesting to see that those metrics reflect what we experienced, right? It kind of gives a sense of what we're doing every week. So it's working.
A
Awesome, thanks for going through that, and let's see how we go this week. So yeah, the time will be a little longer, I guess, with the Family and Friends day, but that's okay, that's okay! Awesome! Does anyone have anything else they want to bring up on this recording?
A
I just want to mention, actually I'm going to pre-book this: as I mentioned, I dropped it in on Slack, we have the engagement survey results for the department. Unfortunately, we don't have any delivery breakdowns of the reports, because both teams were below the minimum numbers to maintain confidentiality. That's okay; it's a shame, but it's okay. We will work from the Platform results, but what I think would be good to do, maybe on Monday in the delivery weekly, is have a bit of a chat about how we're doing on that.
A
Whether there's something interesting, whether anyone has any ideas about things they'd like to see change as a result. The Platform results won't exactly represent delivery, so I think that's where a discussion becomes more useful, because we can talk about: does it feel like it reflects what we are seeing in delivery? Are there other things we want to focus on? And hopefully we can contribute back some suggestions for improvements.