From YouTube: 2021-06-08 Delivery team weekly APAC/EMEA
Description
No description was provided for this meeting.
C
Cools off a little bit; today's the last really hot day, and then it goes back to normal temperatures, but yeah. It's been amazing, wow. I'm super happy because it's this time of year. So last week when I went swimming in the reservoir, somebody asked me, like, how are you gonna keep going through winter, and I was like, definitely not. But this short heat wave has just raised it right up again, so I reckon there's probably a couple of weeks of swimming.
C
Graham is on his way, but we'll get started since we're at time. So, just a couple of announcements in the read-only stuff; hopefully you've seen them all, but just for easy reference. Oh, one thing I didn't mention on the RSUs: there are some AMAs that have been scheduled for next week, so if you have questions about that, then that's the place. Awesome.
C
So I just wanted to pick up Job's suggestion about speeding up rollbacks. I suppose my initial thought on this one is that we probably need to be a little bit careful that we don't just get lots of people now diving in, saying "awesome, rollbacks" and pushing them out. So yeah, I wanted to just think about whether we increase the risk. So thanks for adding that, Henry; do you want to verbalize your excellent input?
B
Yeah, so it's still that, since we moved the API over to Kubernetes, we saw short Apdex drops during deployments, and it's still not fully clear where they are coming from. We spent a lot of time looking into that, trying to pin it down to something, but until now we couldn't find the real truth about what is causing these Apdex drops.
B
We had several theories, and I think the latest development, in the issue linked there from yesterday from Matt Smiley, was that it might be related to database connections in PgBouncer, which could be explained by deployments starting up new pods while the others are still not torn down, so we create more connections to the database.
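To make the connection arithmetic being described here concrete: while old and new pods overlap during a rollout, the database briefly sees close to the sum of both sets of connections. A minimal sketch, where the pod counts and per-pod connection numbers are illustrative assumptions rather than figures from the meeting:

```python
# Illustrative estimate of the PgBouncer/database connection surge during a
# deployment, when new pods come up before the old ones are torn down.
# All numbers below are assumptions for illustration only.

old_pods = 50        # web/api pods currently serving traffic
surge_pods = 15      # extra pods started by the rolling update
conns_per_pod = 10   # client connections each pod opens towards PgBouncer

steady_state = old_pods * conns_per_pod
peak_during_deploy = (old_pods + surge_pods) * conns_per_pod

print(f"steady-state connections: {steady_state}")
print(f"peak during deploy:       {peak_during_deploy}")
print(f"transient extra load:     {peak_during_deploy - steady_state}")
```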
C
Yeah, it's good to know. To be clear, I'm in no rush for us to suddenly speed up rollbacks; I think they're incredibly quick as they are, you know, more than twice as fast as a hot patch. So we already saved time with the one yesterday. Speeding up the incident investigation is going to be the biggest opportunity, and we will get some rollback improvements anyway as we speed up the pipelines.
C
But,
let's
not
rush
to
to
add
in
like
for
now
we
shouldn't
add
any
risk
to
rollbacks,
because
we're
still
in
the
convinced
several
rollbacks
are
safe
and
should
be
widely
used
and
freely
used.
So,
if
there's
anything
that
makes
a
rollback
more
risky
than
it
currently
is,
let's
not
do
that
for
at
least
this
quarter
as
we
build
our
confidence
on
it.
E
Yeah, another point on this is that if we want to roll all of them together, all the clusters together, we need to take care about the capacity that we are removing. If we are doing so, at the very least we need to tune the number of pods that we are allowed to kill during a rollback; it has to be lower than what we do during a roll forward, because the main difference is that in that case we are doing it cluster by cluster.
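One place the "pods we are allowed to kill" budget shows up is the rolling-update strategy on the Deployment. A minimal sketch with the Kubernetes Python client, where the deployment name, namespace, and percentages are assumptions for illustration, not values agreed in the meeting:

```python
# Sketch: tighten the rolling-update budget before a rollback so fewer pods
# are taken down at once than during a normal roll-forward.
# The deployment name, namespace, and percentages are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
apps = client.AppsV1Api()

patch = {
    "spec": {
        "strategy": {
            "rollingUpdate": {
                "maxUnavailable": "5%",   # fewer pods killed at a time
                "maxSurge": "10%",        # modest surge of new pods on top
            }
        }
    }
}

apps.patch_namespaced_deployment(
    name="webservice",   # hypothetical deployment name
    namespace="gitlab",  # hypothetical namespace
    body=patch,
)
```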
C
Cool, okay, great input, so let's take care with this one. If you see this come in as an MR, which it may do, please let's not merge that in now; let's get this into an issue. It is going to be something we'll want to optimize in the future, but I think for now it won't mitigate incidents that much faster and it adds some risk.
F
I'm kind of a little bit torn. Rollbacks, making them one stage, is fine; I'm pretty against doing it for deploys, simply because when something goes wrong it's still really, not slightly, annoying for me to reconverge the clusters and basically unwedge Helm and stuff. So I'd rather only do it on one or two clusters than on four clusters. If something goes wrong, our CI jobs just spin and we have to basically kill the process and kill Helm; it's awful. And, talking about the tech debt stuff, that is in the long tail of technical debt.
F
At
one
point,
I'm
actually
going
to
chop
that
out
chop
him
out
for
doing
the
applies
and
there's
the
issue
there
for
it,
and
I
would
feel
more
comfortable
changing
it.
Then,
when
I'm
confident
that
killing
lci
basically
detaching
the
ci
jobs
from
getting
things
in
a
bad
state,
then
I
would
be
more
open
to
the
idea
personally.
F
Yeah, good point from Elysia there: our surging would probably cause database pressure and other stuff that we're already seeing as well. So yeah, we really have to be careful, and we do need to get on top of that Apdex dip; I think it's actually all web service pods.
F
We
just
never
really
noticed
it
because
it
just
doesn't
alert
and
that
smiley's
done
some
really
good
investigation
at
the
moment,
and
it
actually
looks
like
it
might
not
be
what
I
originally
thought
it
was,
which
was
readiness,
probes
or
something
so
yeah.
We
need
to
get
a
hold
of
that
and
cool
okay.
F
Finish,
all
I
was
gonna
say
is
yeah
it's
one
of
those
things
actually
tricky.
I
don't
know
how
it
I.
I
would
like,
as
a
general
thing
to
be
monitoring
like
like,
unless
something
kills
aptx
enough
during
deploys.
We
never
notice
that
when
deploys
happen,
the
system
suffers
in
some
way,
shape
or
form,
and
I
don't
know
how
we
could
get
more
visibility
into
every
time.
We
do
a
deploy,
even
if
it's
not
bad
enough
to
cause
an
alert
just
these
trends
over
time
of
as
we're
changing
deploys.
E
Can we check the deployment metrics instead of the regular SLO? They are stricter, and so they may give us a trend to see what is happening if we held ourselves to a higher standard.
F
Probable
yeah,
that's
possibly
what
we
should
we
should
do
is
like
see
if
we
can
figure
out
some
way
of
holding
our
at
least
the
deploys
holding
them
to
a
standard
that
we
know
we're
not
causing
or
even
close
enough,
so
that
just
not
a
problem
on
top
of
the
already
bad
problem
we
have
we've
caused
by
deploys
then
puts
us
into
a
you
know.
Another.
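One way to get that per-deploy visibility without waiting for a page is to pull the relevant health metric over each deploy window and compare it to a fixed bar. A rough sketch against the Prometheus HTTP API, where the Prometheus URL, metric name, and threshold are assumptions for illustration:

```python
# Sketch: check a service-health metric across a deploy window and flag any
# degradation, even if it never got bad enough to trigger an alert.
# The endpoint, metric name, and threshold are illustrative assumptions.
import datetime as dt
import requests

PROMETHEUS = "http://prometheus.example.com"           # hypothetical endpoint
QUERY = 'avg(gitlab_service_apdex:ratio{type="web"})'  # hypothetical metric
THRESHOLD = 0.995                                      # hypothetical bar

def apdex_during(start: dt.datetime, end: dt.datetime) -> list[float]:
    """Return the metric sampled every 30s across the deploy window."""
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query_range",
        params={
            "query": QUERY,
            "start": start.timestamp(),
            "end": end.timestamp(),
            "step": "30s",
        },
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return [float(v) for _, v in result[0]["values"]] if result else []

deploy_start = dt.datetime(2021, 6, 8, 10, 0, tzinfo=dt.timezone.utc)
deploy_end = dt.datetime(2021, 6, 8, 11, 0, tzinfo=dt.timezone.utc)

samples = apdex_during(deploy_start, deploy_end)
if samples and min(samples) < THRESHOLD:
    print(f"deploy window dipped to {min(samples):.4f} (below {THRESHOLD})")
```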
B
In a similar way, we also have some alerts going to the feed general alerts channel, which we don't normally notice, and Andrew made the suggestion that we might want to consider putting those Kubernetes-related alerts into a special Slack channel, so that they get more visibility from the Delivery team at least. I think that might make sense, because right now we are still tuning a lot on Kubernetes and trying to get node pool sizes right, and in this particular instance one of the node pools was saturated and they needed to increase the number of nodes in that one.
B
But
he
didn't
notice,
because
this
was
going
to
this
feed
alerts
channel
and
nobody
is
paged
for
that
one
and
nobody's
looking
to
the
many
alerts
that
are
coming
up
there
and
I
opened
an
issue
for
that
one.
So
we
can
think
about
this-
maybe
to
get
humanities
related
alerts
for
saturation,
for
instance,
into
a
special
channel,
maybe
at
least
as
long
as
we
are
still
tuning
things
later.
We
should
get
this
away
again
because
this
is
adding
to
alert
noise,
but
right
now
it
might
make
sense.
Maybe.
F
Yeah, I'm realizing I might be going slightly off track here. I'm kind of of two minds in some ways. Especially as someone who is an engineer on call and getting paged, part of me almost wants to make them paging alerts; that concept of pushing them into a Slack channel, I don't know about everyone else, but Slack channel alerts just don't grip me enough, no matter what channel they're in. I don't know. But at the same time, paging them, so they actually cause an incident with a "how are we going to resolve this" and an investigation, is good but maybe overkill.
F
I
I
don't
really
know
what
the
real
answer
is
and
the
other
thing
that
was
interesting
about
that
capacity
alert
is
so
we
didn't
notice
this.
You
know
going
off
the
sre
handbook
and
everything
you
alert
on
if
your
customers
are
experiencing
issues.
So
the
fact
that
we
were
hitting
the
max
number
of
nodes
and
serving
traffic
fine
is
almost
like.
Should
we
even
be
raising
that,
because
all
that's
going
to
do
is
just
consume
extra
resources
unnecessarily?
B
I guess that's what I meant: the Kubernetes alerts could temporarily be put into a special channel, so we have better visibility for tuning. When something is saturated, we know we need to tune something; we maybe don't have an issue, but probably something to be doing. That was the idea, but I think we don't need to get into this right now here, because it's a different topic.
C
And
we
can
always
talk
about
it
in
the
case
demo
as
well.
If,
if
we
need
more
sync
discussion
on
that,
but
hopefully
the
issue
like,
I
think,
if
it's
something
that
like
looks
like
a
reasonably
good
idea
or
at
least
not
bad
idea,
we
can
make
a
decision
quite
quickly
and
try
it
out
so
super
and
then
see.
Is
this
mostly
just
for
visibility
like.
C
I think we need to make sure we have a plan for this one. This probably won't be the only time, but we certainly have, for this one, a high-risk MR coming in from the database team. I've asked that they ping release managers before they do the merge, and then what we'll need to do is coordinate this so that we can get it deployed in isolation, with support from the database team. So we should probably, should we, get an issue open to make the plan? Because I'm kind of aware that potentially any one of the four of you, maybe more than one of you, will be involved in actually executing this plan, depending on when the MR comes in.
B
A big migration is running, a post-deployment migration I think, but a bigger one, which we got pinged about by the database team. So this is running in canary right now, and we need to see if we have issues with that, but I don't think so. But this one is really a bigger one, I think. Yes.
C
To separate these two out: this one on the agenda, yeah, this one is a bit more risky, so that's why it should go on its own, so that if we have any problems at all with that deployment, we'll know straight away it's from that MR. So let's get the issue open, so we can work out exactly the steps. And then this one that we have today: the reason they're pinging us, really, is?
B
Yeah, typically what we see in these occurrences is that the database migration job fails with some message that it couldn't get a database lock or something, and then typically the database team pings us ahead of time and says it's fine in this case to re-run the job, because it's very important, or tells us whether there are some special measures and we can't do this. But normally, in the last occurrences, nothing happened and it just worked, so it's fine.
B
I
think
the
idea
here
was
that
we
want
to
just
get
this
one
change
in,
so
that
we
don't
risk
that
we
also
interfere
with
other
merge
requests.
Maybe
and
then
it's
easier
to
isolate
the
problems
right.
It's
just
a
special.
C
Measure
ruben,
I
was
asking
like
so
there's
two
approaches
for
isolating
is
that
is
that
what
you're
asking
so
we've
got
one?
Is
we
just
stop
pulling
new
branches
and
add
to
the
one
we've
got
the
other
one
we
can
do
is
lock
to
an
existing
branch.
Is
that
what
you're
asking
ruben
or
are
you
asking
like?
Why
are
we
isolating.
E
But there's also another thing: we may have another problem which is unrelated to this, because we just got a new branch and we had a new release, so maybe there's something else that is broken and we cannot roll back, for instance, because there's a post-deployment migration, which is also a long one. So we kind of extend the window where we are kind of clueless and don't know what to do. So here we just make this a bit safer, at the cost of slowing down MTTP for everything else.
D
Okay,
so
so
what
we
will
be
doing
is
after
a
successful
deploy,
we
pause
the
auto,
deploy,
prepare
job
and
then
pick
this,
mr
into
that
successful
diploma.
So
it's
like
just
this
one
new,
mrs
okay,.
D
Ah, what was the other method of isolation we were talking about?
C
So
the
other
one
which
we've
done
in
the
past
is
locking
onto
an
or
a
specific,
auto
deploy
branch.
I
guess
we
do
we.
I
guess
we
do
that,
like.
If
we've
already
created
a
branch
right
like
we
can
switch
back
to
running
autodeploy's,
offer
off
an
older
branch
and
pick
pick
this
one,
mr
onto
onto
that
branch,.
E
Yeah, because the point is that if it's still manual, what it used to be, then you still have to pause, because otherwise the next time you do an auto-deploy you create a new branch and it overrides it, because it's just an environment variable in the project. So you can set it back to the previous one and do whatever you want on the old branch, but then it will be overwritten by the next schedule. Okay.
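Since the branch lock described here is "just an environment variable in the project", a minimal sketch of what updating it could look like through the GitLab project-level CI/CD variables API follows. The project ID, variable name, and branch name are assumptions for illustration, and, as noted above, the scheduled prepare pipeline must be paused or it will overwrite the value on its next run:

```python
# Sketch: pin auto-deploy to an existing branch by updating a project-level
# CI/CD variable via the GitLab API. The project ID, variable name, and branch
# name are illustrative assumptions; pause the scheduled "prepare" pipeline
# first or it will overwrite this value (as discussed above).
import os
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"                           # hypothetical project ID
VARIABLE_KEY = "AUTO_DEPLOY_BRANCH"            # hypothetical variable name
PINNED_BRANCH = "14-0-auto-deploy-2021060800"  # hypothetical branch name

resp = requests.put(
    f"{GITLAB_API}/projects/{PROJECT_ID}/variables/{VARIABLE_KEY}",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    data={"value": PINNED_BRANCH},
    timeout=30,
)
resp.raise_for_status()
print(f"{VARIABLE_KEY} now points at {resp.json()['value']}")
```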
C
So we can manage this; we will get pinged on that one, and they're expecting to finish the work this week. One thing to add on this: we will still prioritize our 14.3 release over this. So if the dates don't work out, if they have a delay and it comes in and they're saying it's next Tuesday or Wednesday or whatever and we're not comfortable, we won't merge.
C
Super. Is there anything else for this recording?