From YouTube: 2021-10-06 Delivery team weekly APAC/EMEA
A
Awesome, so we're at time, so let's get started. I have popped MTTP in there, so, Ruben, is there anything in particular that you wanted to call out, or anything we need to change or improve to help with MTTP?
B
Can you hear me now? Yes, okay. So I think on days when we don't have incidents, we easily manage to do five deployments a day, four or five usually, so yeah, I don't think we need to do anything to further improve that.
A
Great. It might be worth having a chat with Myra about the schedule. We'll see more as we get the changes coming through on the new staging-canary, when the tests start running soon, but if we are consistently getting four or five deployments a day, possibly the timing, the spacing of those throughout the day, will impact MTTP, because theoretically, if we're doing five a day, then MTTP should be just over five hours.
A
So at some point we're like, we've not got those quite evenly staggered throughout the 24 hours, so it could just be worth spending time with Myra to review that.
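For intuition on that spacing point, here is a rough back-of-the-envelope sketch (not from the meeting; the deploy times and the two-hour pipeline duration are made-up numbers) showing how the same five deployments a day can give a very different mean merge-to-production time depending on how evenly they are spread across the 24 hours:

    # Rough sketch only: assumes merges arrive uniformly over 24h and a change
    # ships with the next deployment that starts after it; the 2h pipeline
    # duration is an invented placeholder.
    def mean_time_to_production(deploy_hours, pipeline_hours=2.0, samples=10_000):
        """Mean hours from merge to reaching production."""
        deploys = sorted(deploy_hours)
        total = 0.0
        for i in range(samples):
            merge = 24 * i / samples                       # merge time within the day
            nxt = min((d for d in deploys if d >= merge),  # next deploy today...
                      default=deploys[0] + 24)             # ...or first deploy tomorrow
            total += (nxt - merge) + pipeline_hours
        return total / samples

    print(mean_time_to_production([0, 4.8, 9.6, 14.4, 19.2]))  # evenly spaced: ~4.4h
    print(mean_time_to_production([9, 10, 11, 12, 13]))        # clustered: ~10.4h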
A
Awesome, okay, great stuff. And then I've added in the announcements, read-only, but just a heads up that we are going to be quite light on people, particularly next week, with Henry and Skelbeck both out, and then Robert is out as well. I also saw, I don't think I've seen it pop up in Roots yet, but I saw in one of the Slack channels that the 11th is also a US public holiday.
A
So I don't know if that means that Robert may also be out additionally on the 11th, so I'll check on that, but just as a heads up on visibility. I'm expecting that we won't be making too much progress in the next few weeks, and that's fine, but also, if you're needing help from other people, factor that into your timing.
A
Great. So on the discussion point: I was chatting with Graham earlier on today and we were talking about K8s workloads, and I know this has come up in lots of different contexts and conversations, around K8s workloads and release tools and how we deploy. So the sort of three pieces that are on all of our plans: how do K8s workloads, deployer and release tools fit together to deploy code and config?
A
So I feel like a lot of the conversations recently have been sort of circling around the bulk of this, and perhaps what I certainly feel I'm missing is a really good understanding of how each of these pieces works and what the pros and cons of each are. I suspect we haven't got anyone in the team, maybe Alessio you could be our exception here, who's really got a good understanding of all those pieces.
A
So
it's
going
to
suggest,
but
I'm
very
open
to
hearing
what
people
think
sounds
might
be
a
better
sort
of
way
of
doing
this.
But
what
I
was
thinking
of
doing
was
starting
to
put
together
some
issues
that,
like
individual
issues,
because
such
a
huge
topic
but
start
to
build
up
a
picture
of
like
this,
is
specifically
how
like
deployer
is
doing
things.
This
is
why
it
does
it
in
that
way.
This
is
why
we're
still
using
it.
A
This
is
what
kate's
workload's
doing,
and
this
is
how
it's
built
up,
and
this
is
why
release
tools
and
try
and
get
to
a
stage
where
everyone
in
the
team
has
like
a
good
enough
understanding
of
the
pieces
that
we
can
then
have
a
good
discussion
as
a
team
like
total
async,
but
just
in
terms
of
like
we're
working
on
the
single
pipeline
at
the
moment.
But
how
do
we
actually
make
that
single
pipeline
fulfill?
All
of
these?
These
needs
like
what
would
that
actually
end
up?
A
...looking like? So, just from gathering thoughts today: does that sound like a good approach to try to start capturing this, to actually just have some issues to get the foundation pieces in place?
C
I have to say I like this idea, because I think a lot of what we have right now has grown historically, and although there are a lot of discussions and also documentation on why we did things the way we are doing them, I think it's hard to get the big picture of that, especially if you weren't dealing with this from the beginning.
D
What sparked this conversation with Amy is: the more I look at the K8s workloads repo, we're at a point where I'm like, oh my god, all of these things we have and so much work we've put into it. I actually think we've gotten to a point now where, honestly, I kept coming up with ideas to fix things and then I'm like: actually, we could probably replace this whole thing now with two lines of bash or something like that.
D
What value does this get us now? Because we just see less and less value in all these pieces. But at the same time, you know, I know all the K8s workloads history, because I've been working on that repo since the start, but I don't know deployer or release tools and the value they have. And for Kubernetes-specific deployments there are a lot of off-the-shelf tools that do a lot of really good things.
A
Yeah, for sure. One thing I was a little worried about with this approach, and perhaps we can address it as we go through the issues, was: does documentation already exist about how some of this stuff works? Are we going to end up duplicating? For example, on K8s workloads, is there already some documentation on how that pipeline works?
D
The pipeline, no, and in fact the pipeline is actually the weakest part of that repo. It needs an... well, that's what we're talking about, right? Yeah, the pipeline is weak, and so is the documentation on it. It's just because it grew organically, like everything did, and it's probably worthwhile, I could pull together some pretty ancient issues where a lot of the decisions were made, and I probably should as part of this exercise. But yeah, it was literally...
D
It started as literally one step, one stage with just, like, 20... well, not 20, like three jobs, I think: staging, production and something like that. It literally started with that, and we were just adding piece after piece after piece, and it's actually not even that complicated. The complicated part is the auto-deploy: the pipeline looks like this most of the time, but then the auto-deploy pipelines, you know, get invoked multiple times in that repo and look different, and once again that makes sense.
D
It's a perfectly valid design, but there's also the consideration that we could do one pipeline for everything but with manual tasks, and maybe release tools or deployer clicks the manual tasks instead. Maybe, you know, I don't know, these are all other things, right? We can go back and really have a look at how we want this...
D
...to kind of look. And, you know, K8s workloads does trigger the QA jobs and stuff, or it does for the default pipelines, but not for auto-deploy pipelines, because we do them differently. So this is back to the question of: well, if deployer is handling everything regardless of source, like if you consider a config change just a deploy through deployer, then it can coordinate QA jobs anyway.
E
And if we look at what we are doing, I don't think we're doing a good job in that sense, right? Because we basically are using a randomly old version of the charts, which is not auto-deployed, and we're just bumping the image version, and that's already something that I personally don't like. But this is okay.
E
This can be okay. But on top of that, charts are made of multiple components with several images, and we are not respecting the ones that are included in the current version of the auto-deploy charts, which means that we may be running something on production and then tag a monthly release with stuff that never reached our production environment. I just gave a very, very simple example, KAS, because it's a good example of this: KAS is bumped in the gitlab-org/gitlab project.
E
So we may release something that we never tested. Another good example, a different example of this problem, is Registry: Registry doesn't even have a version file in the GitLab Rails application, but it has one in Omnibus, I think, obviously because we package it in Omnibus. So it's kind of weird, because the development team bumps its own version in K8s workloads because they want to test the thing, but this has no relation to, and we have no control over...
E
...what we ship every month, because then there are another two pieces of version information: one is the version in the CNG images and the chart, basically the version that we put in the chart, and the other is the Omnibus version. And to my understanding, maybe they are not interested in this at the moment, because the things they are working on are specific to GitLab.com only and not supposed to be released to customers yet.
C
There has been some discussion around this and they are interested in it, but it's not high priority, and I think they deal with it manually, fine so far, bumping versions in CNG and also in K8s workloads. So the versions are there, kind of, but it's not really automated in any way, so they can't trust it.
E
Yeah, but long term this is really a problem, right? Because I would like to be able to reduce the number of competing merge requests in K8s workloads.
E
So I think that, if we go down this route, we could reach a point where we only have two things that change: one is auto-deploy and the other one is configuration changes. For instance, Mailroom is another good example: we have a hard-coded version of Mailroom in K8s workloads, but we should use the auto-deploy version, and it would just always be the same image.
D
Yeah, no, I agree with what you're saying, you're right. I think the mental model I consider is that release tools, or something, has to coordinate all of these things coming in, right? So whether it's the version of GitLab, the CNG images, the Helm chart... we kind of tag Helm charts, but we never use those tags; which Helm chart we use is decided off to the side.
D
So all of these things should come in. The good part with the K8s workloads stuff, for Kubernetes at least, is that if we peel away all of the cruft, and honestly, I mentioned this to Amy, I'm happy to even throw away that repo and start again, but if we peel away all of the cruft, all we need release tools or something to do is just pass the YAML: pass some values into a YAML file that it needs, whatever we can determine automatically, and it can just be, this is the chart version.
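As a purely hypothetical illustration of that handoff (the file name, keys and version strings below are invented, not anything agreed in the meeting), the values release tools could determine automatically might be as small as a tiny generated file that the K8s workloads pipeline layers on top of its own defaults:

    # Hypothetical sketch: release-tools writes a minimal, machine-generated
    # values file; the downstream pipeline treats it as just another input.
    import yaml  # PyYAML

    auto_deploy_values = {
        "global": {"gitlabVersion": "14.4.202110060620-abc123def45"},  # invented version
        "gitlab": {
            "chartVersion": "5.4.1",  # chart version passed along as a first-class input
            "registry": {"image": {"tag": "v3.14.0-gitlab"}},  # invented tag
        },
    }

    with open("auto-deploy-values.yaml", "w") as f:
        yaml.safe_dump(auto_deploy_values, f, default_flow_style=False)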
D
We see Andrew and Jarv wanting to move the configuration, like, you know, which environment variables are set in Ruby or something for K8s workloads; they're interested in coordinating that into the same place. Or, you know, what the settings are for load balancers or whatever they need to configure in the Terraform part: they want to put that there.
D
So if you consider the repo, if we take apart all of the K8s workloads stuff, a lot of it is configuration values, and most of it we have to pull out from Google Cloud, where it has been created; there's so much cruft we have to deal with. If we just push it somewhere else but consider that an upstream source, once again, at some kind of deploy time, then, you know, the chart is a source, the images...
D
...are a source, configuration management is a source. It's all of these sources that just need to coordinate and come together at that pipeline, that single pipeline, whether that's release tools or whatever it is, that gives us the output. Then we aren't trying to coordinate the two, because it just becomes another source of data that needs to go into the pipeline, and then everyone is able to watch that one pipeline, whether it's a configuration change or whatever. How we coordinate that, and the timing...
D
You know, we can figure that out, but it's the same pipeline, it's the same QA, it's the same everything. And then we've got the confidence that we're using the rollback procedure we've already developed and tested, and we know it works, and we can be comfortable doing that, instead of asking, what's the rollback procedure for a config change? We don't really know; it's up to the SRE to figure it out.
E
Yeah, I do agree, and I was also thinking that... I know maybe we're going too much into detail, it seems like this is a K8s workloads meeting instead of a delivery team one, but I just want to point this out, because we are still in this transition to the single pipeline.
E
K8s workloads has this very unfortunate situation where there are two levels of indirection from release tools, because we go release tools to deployer, and deployer to K8s workloads. As soon as we move K8s workloads to the same level as deployer, which means breaking down deployer so that we can extract, for instance, migrations, then we do migrations, then we do the fleet, and then, in parallel with the fleet, we also do the K8s workloads as part of the fleet.
E
When we have that level of control, it will be easier to provide artifacts downstream. So let's say you want to have your YAML file with versions: release tools will generate the artifact, store it as an artifact, trigger the build and say, hey, get the artifact that I built; these are extra values you have to put on top of whatever you think is the right value to apply in your Helm charts, because I'm telling you these are the versions.
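A minimal sketch of what "put these on top of whatever you think is the right value" could look like on the consuming side (hypothetical file names and merge rule, not the team's actual implementation): the downstream job reads its own values, then overlays the versions artifact produced by release tools, so release tools always wins on version keys.

    # Minimal sketch under the assumptions above; not an existing job.
    import yaml

    def deep_merge(base, override):
        """Recursively overlay `override` onto `base`; override wins on conflicts."""
        merged = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)
            else:
                merged[key] = value
        return merged

    with open("values.yaml") as f:                 # what the repo thinks is right
        local_values = yaml.safe_load(f)
    with open("auto-deploy-values.yaml") as f:     # artifact from release tools
        release_tools_values = yaml.safe_load(f)

    with open("merged-values.yaml", "w") as f:
        yaml.safe_dump(deep_merge(local_values, release_tools_values), f)
    # helm upgrade would then be pointed at merged-values.yaml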
E
This can work, because then we can have a sensible artifact storage policy. So we say: this type of artifact we keep for one month; after one month we don't care, because life went on and those version numbers are no longer relevant, or things like that. So yeah, this is a...
D
Just to note, in that scenario the version of the Helm chart itself could become something that we pass along as well, right? Because we want to get to the point where it's not just "oh, I have to bump the Helm chart now"; it's constantly being treated as a first-class citizen, and we will break pipelines if something in there goes wrong.
E
Yeah, absolutely. And in this scenario I also wanted to mention that we will not have QA as part of the K8s workloads stuff, because there has to be a general QA which is part of the single pipeline. Then, if inside K8s workloads we need some special, faster, specific QA, fine, it will be part of its own pipeline. But the smoke tests and that kind of stuff have to be at the release tools level.
A
So I'm going to open up the issues so we can start doing the foundational stuff, because I think at the moment, Graham and Alessio, you two are in a unique position to be able to propose solutions because you understand the foundational stuff, but let's try to get everyone onto that foundational level so we can propose the right solutions. I reckon that whatever we decide is going to be a possibly big project; I'm not expecting that we answer this this week and set it up for Q4.
A
So I will get things started, but I'm much more interested that we feel happy, that we understand the problem and work towards the solution, rather than in the time it takes us to do that. So I'll open up the issues and ping people on those, and then we can start to use them to facilitate more conversations in the team. Sort of related to that, I'm also going to try, this week, to open up the kick-off discussions for Q4.
A
Now, there's some stuff that is maybe going to be an obvious contender. It may become clearer as we have some of this direction stuff in place, so it feels a little bit like there will be some discussion around that, but I want to get that started sooner rather than later, just because various people are out on PTO, so I'll start getting that discussion moving this week. But again, it's fine if we don't have a totally clear answer right away on what Q4 should look like.
A
Awesome. Is there anything else anyone wants to bring up in the discussion?