From YouTube: 2021-08-25 Delivery team weekly APAC/EMEA
A
I've been trying to get some photos so I can show you stuff, but I've... I turned off the "save random pictures from WhatsApp to my photos" setting. I turned it off because I'd go through my photos and be so confused about all the random stuff that other people had sent me, and now it turns out it's the complete opposite: my sister has sent me some photos on WhatsApp, and I cannot, for the life of me, work out how to save them.
B
Yeah, I missed this last time, so I thought I'd put it in this time. I just wanted to note that I think we can really see the effect of green doing deployments in APAC now. Yeah.
B
It shows that we have better numbers: we have more deployments and also better MTTP now. The only thing that slowed us down a little bit, I think, is that yesterday I didn't deploy as much to gprod, because we were watching web and how it reacted to aptx. But that was expected, so overall I think we are pretty good now. Great.
A
Stats, yeah: I need to catch up on doing the deployment blockers numbers. What we've been seeing over the last few months as we've done these is that there's a fairly standard baseline of deployment blockers that we experience; it's probably 15 hours a week. My guess is that's just what we normally have, and then there are weeks, or months I guess, where we have significantly more than that.
A
We see it in MTTP, but in terms of being able to reduce MTTP further, there are two real ways at the moment that we can clearly do that. One is faster deployments, which we get through Kubernetes, but unfortunately the new staging stuff kind of goes against faster deployments. The really big one, though, I think, is going to be fewer incidents, or fewer blocking incidents, and the new staging stuff hopefully improves that.
A
I think we'll most likely see way more failures on staging, and so a little bit more disruption there. But what we should see is that beyond staging, things just move through more quickly and more easily, because it's faster to resolve an incident on staging. So I'm hoping that once we've got everything in place and worked out, we should see MTTP come down a bit further.
C
We both had ideas on the blockers, I guess. This is not to do with... this is just: oh, we have a deployment blocker, like when the pipelines fail, what are the reasons they're failing? And so I've called out some reasons we can make that quicker. He had some good ideas as well to make things quicker.
C
From that part we should see MTTP go down. Obviously moving web into Kubernetes will lower the time, but so will the number of blockers: we do often have web failures, and anything that runs in VMs does give us, I think, at least the occasional breakdown of pipelines, right?
A
What we don't have at the moment... we should work out how disruptive it's going to be for you. I mean, I guess we could put that logic the other way around: we break out the release management schedule. At the moment we display EMEA, and then we have EMEA/APAC.
A
We could split that out into the three time zones and list you on that basis. It means you'll get every release manager ping.
A
Anything else? Yeah, sounds good, sounds good. Awesome. So: lots of PTO coming up in the next few days, so mostly just be aware of that. On the discussion items, I wanted to chat a little bit about staging, because staging is in progress; the project is well underway. It's not a delivery project, so none of us are actually actively involved in the work.
A
But we are going to see some of these things, and we'll certainly want to give some guidance, I think, on the wiring in of both the staging canary and also the staging... it's not 10k anymore, maybe staging-ref or something... environment. The rough idea for this project is... it's kind of a rapid action following several incidents that came from the mixed-deployment problem as we deployed through canary and production.
A
The simplest version is to stand up a staging canary, and hopefully that catches the bulk of the stuff. But alongside this there are also a lot of limitations to existing staging, so Quality have a kind of parallel project, I guess, where they're going to stand up a reference-architecture staging alongside existing staging and try to get good at standing up and tearing down GET environments, and sorting out data and accounts and stuff like that. For now that one won't massively affect us.
A
But I expect that with these two projects together, neither is probably going to be the perfect solution, so I would expect in the future there's kind of another iteration where a better version of staging comes into existence. On staging canary, though, at the moment... I don't know, Graham, how much time you've spent with Pierre on this, but at the moment it seems to only be focusing on the Kubernetes stuff, yeah.
C
Do we need Gitaly and Praefect, like a staging Gitaly and Praefect? No, because they're going to have to be net-new nodes, I think; we just don't have a concept of them at the moment. I just saw the MR come through to actually do the Kubernetes part, like hit the button and deploy it all on Kubernetes, but I'm... I'm still...
C
I want to have a close look at that, because I'm curious how this is going to affect auto-deploy straight away. The last thing I want this to do is: Pierre merges that and, you know, just gets something wrong, because it's difficult to get right the first time, and then that blocks all the deployment pipelines. Because now... like, I don't want it...
C
I don't want it... not in the main pipeline... but I guess I kind of don't want it in the main pipeline either. I guess I would like to see a stronger plan on deploying it. One thing is: when are we wiring it in so it's not going to block auto-deploys? Because I'm not confident it's going to come up and be ready, so I don't want it to block straight away. Okay, yeah, that...
B
Yeah, just on the question about Gitaly: I mean, we have canary Gitaly in production, right? So I think it makes sense to also have the same in staging. I think we want to have it as close to production as we can, just to find everything and also to be able to test our configurations. But I think this could be complex, because then we also need to involve the special routing rules.
B
We'd maybe need to move repositories over to the new Gitaly node, because we have our GitLab project on the canary node in production, and I think we'd want to have the same in staging. So I think this is some work to be done; maybe it's for the next iteration, then, so we don't need to have this from the beginning, I guess that would be...
A
...The same with the staging canary and staging: have a deployment to staging and run tests against staging at the same time. But Zep's actually putting in some additional tests; I think it's going to be an additional canary test suite that would run using cookies to kind of simulate this. But yeah, I think it would be great if we got to a stage where the staging canary deployment pipeline looked as close to the gprod canary pipeline as we could get.
A
But we can do it in iterations as well. So how about... how about I open an issue to add in Gitaly?
A
Cool, and then we can do that. Cool. Then, on the staging 10k piece... it's not called that, I don't know what the name is, I'm sorry... the goal of this, longer term, is that they're going to practice tearing down and recreating the environment. Now, I'm a little unclear as to how disruptive this would be; this version of it should end up being relatively stable, kind of a long-lived GET environment that lives alongside staging.
A
But in the kickoff video that Mech recorded, I did hear him talking about how one of the benefits of having this environment is that we'll get to learn a lot about tearing down and creating. So there is a chance that when we come to deploy to this environment, it's not actually there.
B
Maybe. And so I think what we want to do with this special environment, maybe, is really run QA tests against a specific version of GitLab, and maybe performance tests. But this all sounds like performance and QA, right? Not really that much related to what we are doing with rolling upgrades and auto-deployments, right? So...
A
Well, no. The only thing about that... no... is that it's going to have its own additional test suite, and the idea is it will run in parallel. So there is a point in the... there will end up being a point in the deployment pipeline... I'd say, on the current plan, there'll be a point in the deployment pipeline, probably at the point we deploy into canary, where there's also a splinter and it deploys to the GET environment.
A
Then there are tests running, and then the idea is that at the end of the staging deployment, all of the results come back together, and tests must have passed on GET as well as on staging. So there are going to be different tests running on GET, and that's because they're putting different test data in and they can test admin. It gives them far greater testing capabilities, but that's going...
C
But, correct me if I'm wrong, Henry, I think what you're saying is that the distinction here is: do we want to update a running GET environment, or do we actually want to, at auto-deploy time, cut a version, actually spin up a whole GET environment off that version, run tests, and tear it down, so that there's never a GET environment that is, quote-unquote, updated or continuously deployed to?
B
Blocking or not, right? Yeah.
B
...or see other degradations, maybe, but I think just triggering it to run in parallel, without blocking us, would maybe be the best first step.
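For the shape being discussed, here is a minimal GitLab CI sketch: the canary deploy fans out to a downstream GET deployment that runs in parallel and, with `allow_failure: true`, reports its result without blocking the auto-deploy. The job and project names are hypothetical, not taken from the actual pipelines.

```yaml
stages: [canary]

deploy-canary:
  stage: canary
  script:
    - ./bin/deploy-canary.sh          # placeholder for the real deploy step

# The parallel "splinter" to the GET environment. Non-blocking at first;
# flipping allow_failure to false later would make its tests gating.
deploy-get-environment:
  stage: canary
  allow_failure: true                 # a GET failure won't hold up the train
  trigger:
    project: quality/get-environment  # hypothetical downstream project path
    strategy: depend                  # wait for and mirror the downstream result
```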
C
But even if they do it, the question is: is that what they want us to test? You know? Because I feel like part of the reason for this is they say: oh, the existing staging environment has got all this bad data. Long-lived environments are something they deliberately sound like they're not wanting with this, in which case I...
A
Probably, rather than whether it's long-lived or not, I think there's probably a goal at some point that the stage groups will do testing against GET environments, because they'll give admin testing capabilities, which we don't have right now.
A
So, yes, that's a good point: let's try and get that nailed down. There's going to be a lot of stuff in this one; that one will follow a little further behind. So canary will be the first piece; I'm not sure how far away we are from the GET stuff. There's some database provisioning work and things like that going on that needs to happen as well.
B
Yeah, I just want to note one thing that is making me a little bit nervous about the new environments. This is nothing new, but whenever we spin up new nodes and they end up not being fully configured because of some errors or problems, they are still in Chef, and so in our Ansible deployment scripts.
B
If they are not fully configured, in a case like that we need to just remove them from Chef with a simple knife command, and then we can go on. But this could maybe happen more often now with all this. And I'm also wondering what happens if the database migrations job now sees two deploy nodes instead of just one: would that have worked with the new deploy node, or would it then just run the migrations two times, or something? I'm not sure what the end result would have been there.
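The "simple knife command" here would typically be the standard Chef cleanup pair; the node name below is a placeholder, not a real host.

```sh
# Remove a half-configured node from Chef so it drops back out of the
# inventory that the deployment tooling builds from Chef data.
knife node delete "$NODE_NAME" -y
# Delete its API client too, otherwise the instance can re-register
# itself on the next chef-client run.
knife client delete "$NODE_NAME" -y
```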
B
So we need to see how Ansible reacts to this if a new canary node shows up with a role that puts it into the inventory; I'm really not sure what can happen there. So we need to be prepared that something shows up, maybe, and in that case we should just remove it from Chef, node and client, so that it doesn't get in again by itself, and then ping people to fix that, while looking into how we can separate this out so that this doesn't happen in the inventory.
A
Yes. Would you mind opening an issue, Henry, just generally about the fact that when new nodes come up and they're not fully configured, we have this problem? Then we can actually see if people have any ideas, because maybe it's just something we put into the MR template, or maybe we do need to change something in Ansible or Chef, or something like that, to make this less blocking.
A
Yeah, that'd be great, thanks a lot. And then just as a heads-up (I'll add a comment on the issue as well): yesterday in the registry sync we also realized that the GET version of these environments will have its own database; it won't be using the staging databases. Registry is bringing in a new database, and it's not going to be in the reference architecture, so I'll add a comment. I think Jarv's leading most of the work around the GET database provisioning, but this is just a kind of heads-up.
A
If we do see problems there, we may need to... I think it's probably going to be a kind of Quality job, slash Alejandro, as the people involved in the GET project and standing up the registry database. But just as a heads-up that this could also be another thing.
A
Cool. And then, just in the last few minutes, I want to give a brief hiring update. As you all know, we in fact filled our backend position with a wonderful backend developer called Reuben, so welcome; that was a nice, easy hire. And we also have an open SRE position, which is officially open now. We have a candidate who was interviewed for a different SRE position.
A
They did pretty well and got some really good scores in the interviews, but were considered not to quite have as much experience as was needed for the role they were interviewing for. Things are looking pretty good for them being a good fit for delivery, though: a candidate with relevant experience, a good fit for what we do, with coding experience as well as some SRE experience. So I'm just going through and putting things in place now, but I'm hoping we can get this all approved today or tomorrow and make an offer, to potentially fill our SRE position as well. So that, hopefully, is an exciting one, and hopefully we'll fill these...
A
...both these positions without needing to do any extra interviews, which would be great. Now, there may be other positions coming up in the future. One of the downsides is that we didn't open up either of these two positions internally, so if you are aware of anyone, when you're coffee chatting, who perhaps feels like: oh, you know, I would have loved to have had that opportunity...