From YouTube: 2021-11-18 Delivery team weekly APAC/EMEA
A: I should just wait a few seconds whilst I get the agenda. I think Graham is intending to join us, and maybe Ahmad as well.
A: Okay, let's just get started. So I've dropped in the MTTP graph; just a little note on this one. We've asked Davis, who is... oh, hey Graham. Russ Davis, who is in the... I'm not sure what his job title is now, but it's basically the team who helps us with analytics and metrics. Davis was the one who converted our existing weekly MTTP chart and turned it into the monthly standardized view for us.
A: So I've just asked him for some input on how exactly this gets calculated, because there is some interesting stuff in there. The things I do know about MTTP: we only count MRs on days where there is a deployment, so on the monthly view at least, weekends should just be automatically excluded. Family and Friends days also get automatically excluded. But what skews the monthly view a little bit is around the beginning of the month.
A: As it looks, I think what it's doing is calculating how many days there are in the month and averaging across that. So if you get long incidents or long blockers very early in a month, it can skew things quite a bit. That's what sometimes makes it jump around quite a lot in the first week or so of the month. But I'll come back to you as we get more info from Davis, and I'll put something in the release docs, so we actually have this documented.
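To make that skew concrete, here is a minimal sketch of this reading of the calculation. All the numbers are made up, and the assumption that MTTP is a plain mean of per-MR hours-to-production across deployment days is exactly what we are waiting on Davis to confirm:

```python
from datetime import date
from statistics import mean

# Hypothetical per-MR hours-to-production, keyed by deployment day.
# Days without a deployment (weekends, Family and Friends days) simply
# have no entry, so they fall out of the average automatically.
samples = {
    date(2021, 11, 1): [30.0, 28.5],  # a long blocker early in the month
    date(2021, 11, 2): [26.0],
    date(2021, 11, 3): [9.5, 10.0],
    date(2021, 11, 4): [8.0],
}

def monthly_mttp(samples, year, month):
    """Mean hours-to-production over all MRs deployed in one month."""
    hours = [h for day, per_day in samples.items()
             if (day.year, day.month) == (year, month)
             for h in per_day]
    return mean(hours) if hours else None

# Early in the month only a handful of samples exist, so one long
# incident dominates the mean; it settles as deployment days accumulate.
print(f"November MTTP so far: {monthly_mttp(samples, 2021, 11):.1f}h")
```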
A: Cool. I've added a couple of read-only announcements, so you can look through those. And I just wanted to check in on people's thoughts on this one: there is a DB sharding project taking place. I have to admit I'm not super close to it, I need to catch up on it, but one thing I did see as I was scanning through things is that there's a new Patroni cluster being created for staging and production.
B: I'm not really in the details of the sharding plans here, but we never had Patroni clusters in pre, or in release or ops, for instance, where we are using Cloud SQL, because we just need a database to run GitLab there, right? And it's not that we want to have the same environment in staging, production, pre, and release; it's more about being able to test that we can install packages, and maybe also other parts...
B: ...of GitLab itself, but not really to test our infrastructure architecture in this way. So I would be surprised if we needed to set up another Patroni cluster for pre, for instance. I mean, what is this for? Is it so we can do some kind of switchover from one cluster to another cluster, or is it for testing? But I guess, if it's just about having sharding working at some point in time, then it would mainly be maybe a new Cloud SQL instance, but not... yeah.
A: That's what I mean. So would we... in terms of being able to test... well, no, in terms of being able to be confident that the application is deploying and running correctly, it sounds like on pre, at least, we would want the concept of a sharded database, right? Regardless of which type of database.
B: Yeah, but if we want to go to a sharded database, that means at some point we need to migrate from our current database to a sharded database, right? And I don't know what the plans technically are for that, whether that really in all cases means we need to switch from one cluster to another one, because that would be really hard for our customers, right? And it won't...
A: ...be one to another. Well, it will be data... I mean, yes, it is happening, and yes, it's hard, and that's why we've been having these incredibly long post-deployment migrations over the last few months. That's where those are coming from. So... we can certainly find out, but we should consider pre, right? That's what I'm taking from this.
C: The ci_builds table is massive. So I think, once again just looking at this, the theory would be... I'm not sure we necessarily need to migrate. What happens is you spin up this new cluster as well, you add it, and then the app will look for the data in the old cluster, your current DB, but put the new data in the new one, and then combine them. So it's like...
C: What I'm thinking is that it's an additional database, like what we're doing for the registry: new stuff can go in there. Which, circling back, means for pre we probably do want it, and hopefully it doesn't mean we migrate; we just turn it on, and then, when the feature's ready... That's what I'm thinking we should do: a Cloud SQL instance.
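As a rough illustration of the pattern being described (spin up the new cluster, write new rows there, read old rows from the current DB, combine at the application layer), here is a minimal sketch. The in-memory stand-ins and the cutover-by-id rule are hypothetical, not the actual sharding design:

```python
class MemoryDB:
    """Hypothetical stand-in for a database connection."""
    def __init__(self):
        self.tables = {}

    def insert(self, table, row):
        self.tables.setdefault(table, {})[row["id"]] = row

    def find(self, table, row_id):
        return self.tables.get(table, {}).get(row_id)

class ShardedBuilds:
    """Route ci_builds by id: new writes land in the new cluster, reads
    fall back to the old one, so existing data never has to migrate."""
    def __init__(self, old_db, new_db, cutover_id):
        self.old_db = old_db          # existing cluster (current ci_builds)
        self.new_db = new_db          # newly added cluster
        self.cutover_id = cutover_id  # ids at or above this are "new"

    def insert(self, row):
        self.new_db.insert("ci_builds", row)   # all new data: new cluster

    def find(self, build_id):
        db = self.new_db if build_id >= self.cutover_id else self.old_db
        return db.find("ci_builds", build_id)  # combined view at app level

old, new = MemoryDB(), MemoryDB()
old.insert("ci_builds", {"id": 1, "status": "success"})  # pre-existing row
builds = ShardedBuilds(old, new, cutover_id=2)
builds.insert({"id": 2, "status": "running"})            # goes to new cluster
print(builds.find(1), builds.find(2))
```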
C: They're talking about it, so that's hopefully possible. They're also...
A: Does anyone have a... no, like, does anyone know whether we do anything similar for release? My impression is that we don't do stuff there, right? We just deploy packages on pre... oh sorry, on release.
B: We never cared about databases there, so we just used Cloud SQL, because it's easy to have and we just need a database, and I think we should stay with that approach. If you need another database, I think you should just spin up another Cloud SQL instance there, and that should be fine. We shouldn't start experimenting with Patroni deployments in pre, right? There's no...
C: No, no, no, I agree. I'm just thinking: surely there must be some support pattern for customers with small installs, where we're saying, hey, you've got a small install, you don't need this sharding. The question is whether we want to class pre under that umbrella. I was originally like, no, but now I'm kind of thinking maybe yes; maybe we just want to say pre is too simple, it's not going to have any sharding at all, but...
A: Right, yeah, cool, okay, great. And I'll check in with Pierre as well, because staging ref is also a fun new addition to these sorts of things, in that I don't exactly know... For now it's low risk, because it's not really used for anything, but at some point staging ref does become quite a valuable environment, particularly for quality, and what we haven't yet got is a process for how we get things onto it.
A: Staging ref is interesting in that it's built from the reference architectures, which means in a way it's our final environment. So changes, say, for example, particularly infra changes that we make, would go through pre, then through staging, canary, and production, and then, after that, they become part of the reference architecture, which means they'd then get applied onto staging ref.
A: But for things like, for example, the registry database, the sharded database, and those sorts of changes, I guess we also need a process to get those onto staging ref somehow, not necessarily ourselves, but as GitLab.
A: So I shall follow up with Pierre. Yeah, great, thanks for that. And also I'll have a closer look: it's quite likely that we will want someone from delivery to... well, certainly as delivery we will need to be aware of the progress on the sharding, because at some point it happens, and it almost certainly will affect our deployments. But I'll check on whether we actually need to do more than that.
A: Cool, great. Are there any other discussion items that people want to bring up?
D: I wanted to ask about the process for patching. So previously, for any patches of the latest release, we simply deployed to... dailies, I think, or pre... or release, I think. So we simply deployed to release, and that's considered our testing. But is that really equivalent to running an entire QA pipeline? It seems like running a QA pipeline is the more intensive testing.
A: I think you're probably right, yeah; I think it probably is. We always have a bit of a gap around packaging things for older versions. So, for example, on the security releases we create three packages, but we only test the current one, the current version. So yes, I think a full staging... sorry, a full QA pipeline is the full way to do it, but even anything beyond what we have, like, you know, even a deployment and checking it stands up, is better than nothing.
D: Yeah, so that issue that Mek created, about allowing a QA pipeline to be run against any version, will be useful for all patches. I guess we could modify our process to run a full QA pipeline.
A: I think that's a good one, yeah, absolutely. And what I'm actually hoping is that in the future staging ref may open up for us the ability to stand up a GET (GitLab Environment Toolkit) environment. The way staging ref works... well, a little bit like the way GET should work... is that you stand up an environment, you can put a specific package on it, and you point tests at it.
A: So in the future that sounds like a pretty good option for any of our patches or security releases, as you'd have somewhere to put them for testing. Okay, I think that's the tricky thing at the moment, isn't it? Like, for a security release, in a way, we wouldn't want to have to do those in parallel... it'd be great if you could just throw up three environments, run the three test pipelines in parallel, and then tear the environments back down again.
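A minimal sketch of that workflow, where every helper and version number is a hypothetical stand-in (the helpers could wrap GET/Terraform provisioning and the QA pipeline trigger), not an existing tool:

```python
from concurrent.futures import ThreadPoolExecutor

VERSIONS = ["14.5.1", "14.4.3", "14.3.5"]  # illustrative security-release set

def stand_up(version):
    print(f"provisioning ephemeral environment for {version}")
    return f"env-{version}"

def run_qa(env):
    print(f"running full QA pipeline against {env}")
    return True

def tear_down(env):
    print(f"tearing down {env}")

def test_version(version):
    env = stand_up(version)
    try:
        return run_qa(env)
    finally:
        tear_down(env)  # environments never outlive their test run

# One ephemeral environment per package, all three QA runs in parallel.
with ThreadPoolExecutor(max_workers=len(VERSIONS)) as pool:
    results = list(pool.map(test_version, VERSIONS))
print("all green" if all(results) else "failures")
```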
A: GET... like, quality have been using it for some performance testing, but it's not in any way fully ready to go, and staging ref is trying to work through the gaps we have there at the moment. The big thing that we don't have an easy way of doing is actually getting test data in, and all the tests rely on data already existing. So there are some pieces that quality are working on to actually mean you could stand one up.
C: Actually, I'd say it's kind of... yeah, well, kind of the opposite; I'd almost say it's easier to do. I can duplicate environments in Kubernetes quite easily, including Gitaly; the charts are there. They may have rough edges, but it's better than nothing. You know, the problems would be Gitaly pods taking half an hour... it's not... I'm not sure if that's our problem, that's probably more QA's space, but I'm just thinking out loud that that's a whole other facet of delivery, like delivering QA of what we ship, right? I think we have more of a gap there in being able to say that we've tested this; we really don't. The testing for the cloud-native install is not as strong, which is fine.
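For reference, the mechanical part C describes (duplicating an environment from the charts) can be as small as a single Helm release. A hedged sketch, assuming the public gitlab/gitlab chart and an already-configured cluster; the release name, chart version, and domain are all hypothetical:

```python
import subprocess

def duplicate_environment(name, chart_version, domain="example.com"):
    # Stand up a copy of GitLab from the Helm chart in its own namespace.
    # Rough edges (e.g. Gitaly pod spin-up time) still apply; the chart
    # just makes the copy mechanically easy.
    subprocess.run([
        "helm", "install", name, "gitlab/gitlab",
        "--namespace", name, "--create-namespace",
        "--version", chart_version,
        "--set", f"global.hosts.domain={domain}",
    ], check=True)

duplicate_environment("qa-copy-1", "5.5.0")
```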
B: There are big opportunities here for several use cases, because what we do with staging right now is running QA, which is nice, but also mostly trying to test that our production infrastructure architecture is working like it should, right?
B: So this is the reason why we keep staging as equal as we can to production: just to be able to confirm that our Chef recipes and everything else are working the way we want them to in production. But this stands in the way of being more flexible, of doing certain kinds of tests for the things we want to check. For instance: do our packages install correctly? This is why we test on release, for instance, or on pre, which could also be done with staging ref.
B: Then we have a big gap in performance testing. For instance, for the registry: all the changes we're doing for the registry right now should normally be tested for performance, because we're making a lot of database changes right now and we're not certain enough that the database will hold up in production, right? And we can't test this easily in staging.
B: So just spinning up a staging ref or another environment with the newest registry version and database, and then running some performance tests to see if something changes... all this kind of stuff requires something more flexible than our staging environment right now. So staging ref, or other GET environments that we can spin up and tear down, would be perfect to close these kinds of gaps. Just saying. And that's also the reason behind Ruben's question of why we have release for testing.
A: The motivation that came from quality about staging ref is exactly that: so we can spin them up, use them, and then tear them down. Yeah, there are lots and lots of use cases. We use staging for lots of different use cases, like you've mentioned, and actually being able to separate those out, so we have dedicated environments, would avoid a lot of the collisions we see.
A: So hopefully we'll see real progress on that. Staging ref should be the first case, and we've definitely hit some interesting stuff there. It's not a trivial environment to recreate, and it's kind of failing at that point, really because of test data and particularly accounts: when you spin it up, no one has any permissions to do anything, so that sort of stuff needs to be scripted. But, you know, certainly we're making some progress.
A: We're also running different GitLab versions, right? So you see some differences as well between different GitLab instances, depending on whether the environment is running Community Edition; you get kind of a different view of things that way as well.
A: Awesome, thank you for asking that. I will go ahead and stop the recording.