From YouTube: 2022-05-02 Delivery team weekly EMEA/AMER
A: I guess we are complete. Emmy is not running this show today, so she asked me to do so. Let's see, I'm also happy that this is recorded, so I can see myself failing at this multiple times; it's going to be fun. Hi Harry, I think we never met, but we have a coffee chat tomorrow, so I guess we're going to meet.
A: Q2: there is the issue that I talked about last week, hopefully in the right group. Scarborough, I saw you posted some comments on that; I didn't get to them so far. Thank you also for adding elements for the retro.
A: Also, the team split issue is something that is already known, and it's something I would like to address today, to see if there are extra comments or maybe extra hints on what to do, especially in regard to naming: one of the most difficult tasks is finding a name for a team, or finding a name for a variable. I think it's going to be a challenge in any case. And Mira, I see you also added something there that I still haven't seen.
A: One of the other key results proposed is to have a successful team split, and the first one was to measure and track release delays. If I remember correctly, for historical reasons there was an effort to try to measure some things, but not enough data was extracted from that, and this is probably kind of a step forward and also a step back.
E: What is the difference between this issue and the merge request that Amy opened some weeks ago, which also discussed the OKRs? Was it different? I can search for that, no rush, yeah.
C: I think that one was about strategy, so it's like the difference between strategy and tactics. The strategy is more of a long-term goal cascading down from department level, and it addresses not only what we want to do this quarter, but also next quarter, by the end of the year, and next year as well. She was adding lines to that one, and those lines are in line with the OKRs. What are the OKRs proposed here?
C: This one is for finalizing, I think, the decision for not the upcoming but the current quarter, because it already started. And I suppose, because we are now going to move to Ally.io, this will be reflected in Ally.io, and that one is more of a historical thing. So it's not only strategy but also looking forward, because we want to do this and other things, and that's the difference.
A: If there are no extra comments on this issue, I think we can do the sync on the team split one. Mira and John, I think you saw that and you also put your comments on the issue. I don't know if you have anything to add there, any other concern that maybe comes up later on, or any suggestion, also on the naming of the two teams; that would be really helpful. And Erie, yeah, I think you just came back today from holiday.
A: So I'm not sure if you are up to speed on that, but there is a proposal to split the team in two parts: one part, per the proposal, more focused on the release automation, and one part more focused on the Kubernetes side.
A: Sure, so yeah, if there is anything else we want to add to the issue itself, or there are extra concerns, anything on the naming. There were some names proposed for both, and I think the one added in the issue was Orchestration for the release automation team; at the beginning Foundation was proposed for the Kubernetes side, but then it seems we have another product at GitLab that is called Foundation, so that would be misleading.
A: I think it's also something that is not engraved in stone; it's something we can change later if we don't like it, something like that. I don't think it's going to be a big deal, unless we want to change it in one year; by then it's probably going to be a bit different, and it's going to be harder.
D: The only suggestion I have for team names is to go with something completely different. I don't like Foundations or Orchestration; I'd rather have something like Team Vanilla or Team Chocolate, because I eat ice cream. But that's probably not going to be found acceptable either, so I didn't want to suggest something like that.
A: Mira, okay, I'm going to open your issue. I don't know what it's about.
E: Yeah, it is a merge request that I noticed is included in this security release, and it is basically going to limit pipeline schedules to the person that owns them. In our case it is only going to affect us once the major release is published and our ops instance is updated with it, and it is basically going to affect the pipeline schedules because ours are owned by the release-tools bot.
C: Yeah, okay, I misread it, so my first comment is not entirely accurate: I understood that only owners of the project can alter pipeline schedules, and this is not correct; I'm rereading the issue that you linked, so your reading is correct, Mira. So, because we are maintainers and not owners on ops, this would kind of break all of our ChatOps for pausing things and things like that, because they do this through the API.
C: But this is not true, so okay. But I think we may upgrade earlier, because today we release the security release; you mentioned this is part of the security release, isn't it? So I suppose we install security updates on ops. So do we? I don't know. Yeah, we run daily; we update the packages daily.
D: So when the security release gets completed and you go in and re-enable the... oh, I'm on the wrong instance, never mind. No, but...
E: Oh, that is going to be annoying then, okay. Anyway, the workaround is either to change the ownership of the pipeline schedule or to sign in as the release-tools bot if we want to edit the pipeline schedule or to play it. If we want to play it, I don't know if we can without being the actual owner.
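(For reference: GitLab's REST API has a take_ownership endpoint for pipeline schedules, which is one way to script the workaround E describes. A minimal sketch in Python, assuming a personal access token for the account that should own the schedule; the instance URL, project path, and schedule ID below are placeholders.)

```python
import os
import requests

GITLAB = "https://ops.gitlab.net/api/v4"  # assumption: the ops instance
PROJECT = "gitlab-org%2Frelease-tools"    # URL-encoded project path (placeholder)
SCHEDULE_ID = 42                          # placeholder pipeline schedule ID

# Authenticate as the account that should own the schedule
# (e.g. the release-tools bot) via a personal access token.
headers = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# POST /projects/:id/pipeline_schedules/:schedule_id/take_ownership
# reassigns the schedule to the authenticated user.
resp = requests.post(
    f"{GITLAB}/projects/{PROJECT}/pipeline_schedules/{SCHEDULE_ID}/take_ownership",
    headers=headers,
)
resp.raise_for_status()
print("new owner:", resp.json()["owner"]["username"])
```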
C: I am afraid that, for the tagging job, if we own it, this will then trigger it as us, because we trigger child pipelines and they are triggered by the owner of the pipeline itself; so whoever owns the scheduled job will be, by default, the owner of all the cascading jobs.
C: Then it looks like every single tag pipeline is owned by the bot.
C: So yeah, because the tag itself is generated via the API from the release-tools bot token, the pipeline is owned by the bot regardless of the pipeline schedule, because that's the actual coordinated pipeline. But yeah, we can test this tomorrow, Mira; let's make a note for it. I have a busy morning tomorrow, but I will try to test this, just pausing and unpausing the auto-deploy altogether, to see if it's actually changing the pipelines or disabling them or not.
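(For reference, the tagging path C describes: the tag is created through the REST API with the bot's token, so the tag pipeline is attributed to the bot user rather than to the schedule owner. A minimal sketch, assuming the release-tools bot token is in GITLAB_TOKEN; the instance URL, project path, tag name, and ref are placeholders.)

```python
import os
import requests

GITLAB = "https://ops.gitlab.net/api/v4"  # assumption: the ops instance
PROJECT = "gitlab-org%2Fgitlab"           # URL-encoded project path (placeholder)

# The request is authenticated with the release-tools bot token, so the
# tag, and the pipeline GitLab starts for it, belong to the bot user.
headers = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# POST /projects/:id/repository/tags creates the tag and triggers
# the tag pipeline as the user who created it.
resp = requests.post(
    f"{GITLAB}/projects/{PROJECT}/repository/tags",
    headers=headers,
    params={"tag_name": "v15.0.0", "ref": "master"},  # placeholder values
)
resp.raise_for_status()
print("created tag:", resp.json()["name"])
```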
A: Thank you. I hope I caught enough details; I got lost a couple of times, to be honest, but if there is something that doesn't make sense in the notes, please feel free to completely change it. Are you usually also looking at the team metrics every Monday during this meeting, or is that something raised by the managers? Yeah.
E: If I'm going to vent, I don't want it to be recorded. Okay, we can do it at the end, yeah. I also don't have anything to rant about... actually, I guess I do have something, now that I remember. Okay, okay.
C: I do have something which I think is important for the team in general, and maybe I'm also looking for your opinion on this, Mira, because it involved both of us: it's the Elasticsearch client update. I'm just going to give a brief overview.
C: So last week we were asked to deploy a merge request in isolation. This merge request was bumping the Elasticsearch gem, because Elasticsearch was dropping some kind of support for something, and so they wanted to be extra sure that it was not breaking stuff, and they wanted to do manual testing on staging, staging ref, production canary, a lot of things.
C: What I want to point out here is that this was during the security release window, which kind of adds stress to the whole process itself. But moreover, this thing took, I think, almost 24 hours of continuous work, because I had to start very early in my morning to pause everything, so the auto-deployer was in a paused state.
C: Then I had to go through synchronization of all the environments during my morning, and this was kind of delayed by an outage that happened in the morning before I woke up, which had delayed a change request from the CI decomposition, so we were cascading failures on top of each other. So when we reached the point where we were able to test these things, the usual pipelines were not ready, and then we had to merge, and then we had to pick, and master was broken, and blah blah blah.
C: So I ended my day while we were still testing those things. The manual test was supposed to take around two hours. Well, my point is that the plan was not clear. They were just asking us: can we deploy this in isolation? To me it was more about: yes, we can pause, deploy the thing, do the QA, and then it's done. In the developers' heads, this was more about: we want to stop everything, no promotion of any sort.
C: We had to cancel the production canary deployment as well, because they wanted to actually go into the machine and run some testing. Yeah, I don't know; if we keep getting asked for this sort of thing, we will just have trouble shipping changes, so we need to figure out a better way of handling this type of situation.
E: Yeah, actually, I think the manager or the author of the merge request pinged me to kind of assess if the merge request could be deployed in isolation, and I have little understanding of how Elasticsearch works, so I started to ask questions: what is your plan for the deployment? What metrics are you going to look for? What should we do if something goes wrong? And the answers that I received did not satisfy me.
E: So I was a bit worried about the deployment of that merge request, and then, along with the team, we decided: okay, we are going to deploy this in isolation. But the idea that they had was very far away from the deployment process we actually use.
E: They told me: we are going to test this in staging and then, if everything is okay, we are going to deploy to production. And I was like: yeah, that is not going to work, because we deploy to both almost at the same time; it doesn't work like that. On my side, it always shocks me how far away the developers are from our release and deployment process, even though we announced this change, even though we published it in different Slack channels and created an announcement.
E: We put it in the Engineering Week-in-Review. I guess people don't like to read, and I'm not sure how to fix that, or if it's possible to fix it. And yes, it took a lot of time after it was handed over to me; I think it took four hours after that to redeploy everything and restart everything. I guess on the good side it was an "easy upgrade", and I'm going to put "easy upgrade" in quotes because it took a lot of time, but we didn't face any production incident caused by it.
D: I wonder if we should start treating this type of thing similar to backport requests, because this is so difficult for us. We had changes in our pipeline that some people were unaware of; there's still some education that needs to be passed around to some people. And then we've got some coordination that needs to occur on our side, similar to the amount of coordination that needs to happen with a backport request, to make sure everything is good to go and, you know, we're all on the same page.
D: I wonder if we need to consider building a process or procedure for that, because for this particular request, maybe it would have been beneficial if we had a different method of deploying it, so that it could be tested in isolation in staging without ever having to go into production.
D: Until they know things are okay. Which also means we would have to take into consideration using the old CI pipeline method for that matter, which is kind of out of the question, because we don't support going backwards with that method. But that would have made it easier for them to have a better idea as to how and when they could test.
E: Well, we have staging ref. The problem is that they couldn't test it on staging canary, because they were relying on Sidekiq and Sidekiq is not part of canary, so they needed to build the whole environment for it. So they tried to test it on staging ref, but then again it needed to be tested in isolation, without any other change. So I think the backport request idea, a process like that, is something that we could consider.
D: But at least when the request comes in, we'll have the ability to schedule it. You know, they could fill in all the details as necessary, and we could be like: okay, now that we've got our details, we'll schedule this at some point in time with us, or something.
C: Skyrim, I think that the right tool here was a change request. To me, going through this, the missing part was that they didn't tell us about all the moving parts and the fact that this thing was holding the whole system hostage for a long period of time. And I have to say that this was a specific change that was impossible to ship with feature flags; I mean hard, close to impossible, to ship with feature flags, because it's a dependency upgrade, so you can't really feature-flag it.
C: I mean, probably the engineering effort to make it load both gems and switch at runtime based on some environment variable is more than just writing a change request and explaining what you want to do. Because what happened was that I was told: we are going to be here around 12 UTC so that we can start, so if you can prepare the environment ahead of time, we will run our tests. And when they showed up online, we said: okay, what do you need to test?
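(For context on why that is heavier than writing a change request: the runtime switch C sketches would look roughly like the snippet below, shown in Python rather than Ruby for illustration; the vendored module layout, the SEARCH_CLIENT_VERSION variable, and the Client class are all hypothetical. Both versions of the dependency have to ship side by side and be selected once at boot.)

```python
import importlib
import os

# Hypothetical layout: both client versions vendored side by side, e.g.
#   vendor/search_client_v7/  and  vendor/search_client_v8/
# The active version is chosen once at boot from an environment variable,
# so switching back is a configuration change instead of a redeploy.
_VERSION = os.environ.get("SEARCH_CLIENT_VERSION", "7")
_mod = importlib.import_module(f"vendor.search_client_v{_VERSION}")

# Hypothetical client class exposed with the same interface by both versions.
Client = _mod.Client

def build_client(url: str):
    """Return a search client backed by whichever version is enabled."""
    return Client(url)
```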
C: This was supposed to happen before, because I was trying to match their expectations with how deployments work, and this should not happen when everything has already been paused for several hours. I also have to say that this was specifically because this is a major release and it was their only chance to ship a breaking change, and so many things were contributing to this situation. But probably the change request is the safest tool.
B: I want to ask them: what's the reason to request to deploy this in isolation? Because we were not sure of what would happen, or because we knew that we need to do it, because something would break?
B: Why do we need to do this in isolation, then? Why can't we just ship it in other MRs at the same time, because this is how our normal deployment works, and they should still be able to test if things are working or not, right? So I think this is often abused just to be on the cautious side, like with the gRPC gem update.
B: I think we had that one in isolation last time because we didn't feel really safe about what would happen, but that could have been caught with proper testing in a staging environment beforehand, for instance, and it absolutely wouldn't need to be deployed in isolation just to test if everything still works. It's just, for developers, I think, nice to be on the cautious side, but doing this isolation deployment is often totally unnecessary. I think we should somehow document that this isn't meant to,
B: you know, get away from testing things and be extra safe, because this is just too much effort and doesn't really bring a lot of wins, just to, you know, hold back other MRs, maybe also adding some problems that are not related to this. So I think, for the gRPC update and also for this one, maybe it wouldn't have been worth doing it in isolation. We should just have done it the usual way, maybe stopping after staging to just test, and then going on if you feel it works, right?
B: All of the MRs that we are doing could break something, and so everybody needs to take responsibility then and try to find out what it is and whether they broke something. And if you feel beforehand that you are very unsure about something, that's no reason to treat it differently, I think; just test it, instead of excluding all other changes, right?
C: Yeah, I think what was missing here was the "what could go wrong" section, where we ask: is the type of failure from something broken by this merge request big enough to demand a deployment in isolation? Because probably that's the thing, also because, yeah, I mean, it sounds like you are not really sure if this is completely working, or you want to play extra safe, and you don't really understand the amount of work that you're putting on top of others and the delay that you are putting on the release process.