From YouTube: 2021-05-19 Delivery team weekly APAC/EMEA
B
Yeah, okay, great. Yeah, I enjoy this; I don't know, I like it. If the world looks like it's going down but you're in a safe place at home, you are not affected; that's great, as long as you're not outside. Yeah, exactly.
A
Yeah, I was at home and I was very, very happy to be inside. So yes, they've just demolished a load of stuff in front of my apartment and, as they've been doing it, the view from our window has been gradually increasing and increasing, and now we've got a pretty, pretty good view. Sadly we're not facing the most exciting part of London, but still, it's nice to have a view, right?
A
Yeah, dramatic weather is always good to see, right? So, awesome, we are one minute over, so I will get started. Hopefully Graham will join us in a bit; I think he is around.
A
Okay, so MTTP is looking pretty good. It's still not full data; I'm expecting that. Actually, all the recent incidents aside, I'm expecting that once the data catches up, this will look pretty healthy.
A
Yeah, yesterday will upset things for sure, absolutely. But the data is certainly starting to come through; it won't have yesterday's data yet. I did get told the actual date it's up to, like the end of last week or something at the moment. Cool. So I haven't reduced the MTTP target yet. I can only reduce it once per month, so I may just wait till the beginning of June and drop it back down.
A
Assuming things look better going into June, cool. And then a couple of announcements; we can read those, just some time off coming up. I've also added a new section here, "interesting reads", just in case it's useful to put some stuff in. But I just wanted to share that the error budget stuff is rolling out, and it's worth having a check-in on that. Hopefully, as this gets adopted by the teams, we'll start to see some real benefits coming in, in terms of fewer incidents.
C
But do we have some kind of... I mean, I'm thinking about error budgets in terms of how they are defined in the SRE handbook, right? So once you've used up your budget, you cannot deploy.
A
These are enforced differently; these ones are around how we prioritize work. So one of the challenges we've always had is: how do we prioritize new features over tech debt, for example? Once you start to use up your error budget, you have to shift that perspective and start paying down tech debt, changing what you focus on working on.
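A rough sketch of that prioritization rule, in Python, assuming a hypothetical 99.9% monthly SLO; the actual targets and measurement windows weren't stated in the meeting:

    # Error budget: the unreliability the SLO still permits this month.
    SLO = 0.999                                     # hypothetical availability target
    MINUTES_IN_MONTH = 30 * 24 * 60
    budget_minutes = (1 - SLO) * MINUTES_IN_MONTH   # ~43.2 minutes

    def next_priority(downtime_minutes: float) -> str:
        # Features while budget remains; reliability work once it is spent.
        if downtime_minutes < budget_minutes:
            return "new features"
        return "tech debt / reliability work"

    print(round(budget_minutes, 1))    # 43.2
    print(next_priority(20.0))         # new features
    print(next_priority(50.0))         # tech debt / reliability work

Unlike the strict deploy freeze from the SRE handbook that C mentions, the threshold here only changes what the team prioritizes next.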
B
I think I really still need to read the details of that, because I didn't really figure out what we are measuring there. It looks a little bit like the error rates we are measuring in the dashboards, so I don't know if we really have good numbers or Apdex thresholds at which we start saying: this is not okay anymore. For me it's hard to figure this out from the dashboards, at least, but I guess this is still work in progress. Yeah.
A
I mean, feel free to have a look and ask questions. Yeah, like I said, it is rolling out, so there are corresponding issues on how the product teams are using this stuff as well, to measure it. But certainly from our side, I'm expecting infrastructure issues to be easier to prioritize, and then hopefully we get the knock-on effect that, overall, the platform is in a slightly better place.
A
Cool. And then what I want to use a bit of time for today is to talk about issues. So, we have issues, obviously. I...
A
You can catch up with the recording as well; that's fine, see you later. I want to chat a bit about issues and how we use these. So I know we've got... we use issues all the time. What's interesting about issues is, kind of...
A
They have different stages of life, right? So we have the issues before we start working on things, issues whilst we're working on them, and issues after we finish working on them. And particularly, I always kind of think: in six months' time, how does this issue become useful? Generally, the way I use issues is I kind of drop in on them, try to understand what the problem is, try to work out...
A
...what the priority is, and then often I'm also looking for quick updates: how's this piece of work doing, do we need to pull more people in, or is there anything we need to watch for? So I want to just chat a bit about how we use these for that kind of thing. So yeah, I grabbed some examples; apologies, it's a very narrow scope.
A
I just went through my open tabs, so these are not extreme cases, but more just what's on my mind. So yeah, on 1749, thanks for your comment there, Henry. I was wondering, what's our view of this one? Do you all think this is a description that makes it easy to keep track of what's happening on this issue? Are there other things we want to be seeing in here?
A
I guess, specifically: is there stuff that you find you're having to read comments for in order to understand the status?
B
It's getting complicated if you need to get the status of some task which you're not directly involved in, and it's a very complex topic which is technically hard to understand, maybe involving a lot of other epics and issues. Then, of course, you need to read a lot of other things that need to be linked there. I think that's the second thing that is important. Is that okay, or...
B
Yeah, I mean, in cases where it's ongoing work and things still need to be researched, because we are not sure if we fixed everything, then I think that doesn't help; you need to really read the history of the issues. Like, for instance, with this complex topic of how we get environment variables into Kubernetes without leaking secrets, and the different approaches, and possibly fixing this in the charts in different ways: this is something which even now I have problems following, because it's really not easy, and it keeps changing.
B
So I don't know a good answer to this at the moment, especially with this example, for instance, because figuring out what is happening on the different fronts here, like in our charts and in Kubernetes, and in addition to that the Kubernetes cluster upgrade that happened; all these things come together here, and you need to try to follow all of this, and I spent... I...
D
Okay, so... sorry, I just wanted to add some thoughts as well that I have from a different perspective. It's nice when we can update the description with what the current status is and stuff, but there are a couple of downsides to that which I find as well. So I'm a person that updates the comments, not the description, but I'm not saying that's a right or wrong way, and the reason I do things like that is two reasons. One is, when you update the description...
D
You have the discussion that arrives at the good solution, but that's all erased now: all you see is the good solution and some comments about a bad solution, and you no longer have a history. I don't know if that's just because GitLab doesn't give you a good history of issue descriptions and whatnot. So I don't know; I kind of like it, it's nice to have the description updated, with clear outlines saying what the status is, especially if an issue goes on for multiple weeks, right?
D
Another good example would be that service discovery issue; it went on for multiple weeks and we did lots of different things related to it. You could just keep adding stuff to the description but, as I said, people wouldn't be getting notifications about it, and the description becomes one giant thing. So it's kind of like... I don't know.
D
Maybe it's almost like you want to have the description, maybe with a link to the latest comment of where the status is at, and maybe have a structure for what you should be commenting as a current status. But that's kind of... I don't know. I don't have a good solution, but I just wanted to point out some of the downsides when you rewrite the issue description. Yeah, for sure, yeah, absolutely.
B
There's a point about not changing the description if we change course, because if you lose history, that's really bad. Maybe update it, saying: okay, this was the first approach and this is the next one. But if you erase what you had planned before, I think that's really bad; I've experienced this in several cases.
D
Do we need something like... when you open the issue, you have a comment thread for status updates or something, and you can just reply to that comment thread? So maybe you can talk about all this other stuff elsewhere, but at least you have one comment thread that's clear. The other thing I like about comment threads as well is that when people do comment, the status update has the time on it.
C
So back then we had a very good approach with, yeah, people that are no longer in the company. Basically, we used to state only the problem in the description of the issue at the beginning, and then start a new thread for each solution or something like that. Then every conversation thread kind of converged to something, and then whoever arrived at the final thing would send a message like: "I'm going to update the issue description with the following content", together with the new body of what was going to be added or changed in the section.
C
It
was
going
to
change
posting
that
so
that
everyone
gets
a
notification,
and
you
have
someone
clear
thread
of
train
of
thoughts
right,
so
we
started
discussing
something
everyone
converged
to
something.
This
is
the
final
final
shape
at
the
moment,
and
then
this
is
kind
of
closing
the
thread
right.
C
So you update the description. Because there are two types of consumers for issues: those that just want to know what's going on and want a quick glimpse of what's happening, and in that case having a good description is vital; and those who want to participate in a conversation, understanding if something was already discussed, already discarded, or whatever the details were. And erasing this description every time...
C
...removing things that are no longer relevant doesn't help here, because you lose track of information. So that one was good. I also liked what we tried to do with that tool when we tested it for the rollback. It takes time to get used to it, but I really like the fact that, if you are structured in how you organize your issue, epics, or even merge requests in that case, it can provide you the summary and a table of contents with links to where things were discussed. But yeah...
C
It only works if everyone is involved in it, and it requires you to be very... you need to organize the conversation very well. It is absolutely impossible to backfill something that was not structured properly into a working table of contents.
A
It's hard to scan as well; the issue description becomes a lot less scannable than other stuff. But I like the idea that you have about the comments, because, yeah, I think it's a really great point that there are different types of consumers, and I think comments are great because they give a timeline, it's easy to see what changed, and you get all the data around it.
A
But once the issue gets to a certain length, it's a real investment to have to build up your knowledge of all those things. Shall we try adopting what you mentioned there? Also, I love the idea of actually leaving a comment that says: hey, I'm updating the description with, you know, x and y, and making this change.
A
Should we give that a try, just for a few weeks, and see? I'll obviously mention it to everyone else in the team and see if that helps. Awesome, great. I've mentioned a couple of others, but let us move on, because we could use all the time in this meeting just discussing this; I'll share that thing, but let's give that one a try for now. Henry, I'll pass over to you for b.
B
Oh yeah, let me check. Yeah, so I need to... I can clarify what we thought was the case, but isn't the case, from our last rollback experiment. So when we last did the production rollback, as you remember, we first disabled and drained canary, and then we started the rollback, which means starting the production deployment with the older package, and we did this during high-traffic times.
B
And at this point the EOC was getting paged by alerts saying "single node saturation", and he was coming into the call, and we decided very fast: okay, it seems we saturated our fleet, we need to bring back canary again. And then I opened this linked issue here, after looking at the metrics, saying: okay, we need to scale up the fleet, because, you know, we can't disable canary reliably anymore; our fleet doesn't have enough capacity.
B
But if you look at the graph directly, you see that most of the other nodes stayed below 50% saturation, so the fleet wasn't saturated at all. We just got tricked into thinking that because it all happened at the same time; we jumped to conclusions, and I didn't check the graphs thoroughly enough, which I posted there. But the conclusion is: the alert is a little bit misleading, because it just says "single node saturation" but doesn't tell you which node, and so you often think, okay, this is on a lot of nodes.
B
There's saturation, maybe, but in fact in most cases it's just one. So this alert isn't ideal; you need to know this, and then checking the graphs is important, just to see what you're talking about. I think it all came together because it was a stressful situation and it already looked like we were saturated, but yeah. So the good thing is, we don't need to do anything: the fleet is way below saturation, we may even be over-committed, and so we can safely drain canary.
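As an illustration of that failure mode, a small Python sketch of a per-node alert firing while the fleet median stays healthy; the node names and numbers are hypothetical, not taken from the incident:

    from statistics import median

    # Hypothetical per-node saturation samples (fraction of capacity in use).
    node_saturation = {
        "web-01": 0.42, "web-02": 0.38, "web-03": 0.45,
        "web-04": 0.41, "web-05": 0.97,   # one stuck node
    }

    worst_node, worst = max(node_saturation.items(), key=lambda kv: kv[1])
    fleet_median = median(node_saturation.values())

    # Naming the node in the alert avoids the "whole fleet is saturated"
    # misreading described above.
    if worst > 0.9:
        print(f"single node saturation on {worst_node} ({worst:.0%}); "
              f"fleet median is only {fleet_median:.0%}")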
B
The only thing we maybe need to look into is why, during deployments, we saw a single node being saturated; maybe there's some issue with stopping workers or something like that, but it resolved by itself during the deployment, so it seems to be fine. So maybe a small percentage of users, the ones being served by this single node, saw an Apdex drop because it was saturated, but for the rest of the fleet everything was fine.
B
If you look into this issue, I think below I posted the error rate. The Apdex drop wasn't very high; it was a small Apdex drop, but it brought us close to the line of being below SLO, and the timing is very much related to the error rate going up on the single node at the same time, which I found in Kibana. And I don't know how many nodes we have...
B
But
if
let's
say
we
have
40
notes-
and
one
note
is
failing
during
that
time-
then
we
already
have-
I
don't
know,
have
two
percentage
of
errors,
a
rate.
If
all
of
them
would
fail,
they
didn't
fail
all
and
we
just
had
a
rise
in
error
rates
of
zero
point
something.
But
this
is
already
getting
into
the
slo.
I
think
hitting
the
sro
limit
so
yeah
if
you
had
a
small
app
desktop,
but
because
of
a
single
node,
not
because
of
fleet
situation,
which
should
really
make
sense.
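The back-of-the-envelope arithmetic behind that, assuming traffic is spread evenly across the fleet; the partial-failure fraction is made up to show the "zero point something" case:

    nodes = 40
    one_node_share = 1 / nodes                 # one node serves 2.5% of traffic

    # Worst case: every request on the bad node fails.
    worst_case_error_rate = one_node_share
    # Hypothetical partial failure: 20% of that node's requests fail.
    observed_error_rate = one_node_share * 0.2

    print(f"{worst_case_error_rate:.1%}")      # 2.5%
    print(f"{observed_error_rate:.2%}")        # 0.50% -- small, but enough to
                                               # eat into a 99.9%-style SLO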
B
15 was the highest, I mean. Sometimes during deployments maybe some Puma workers are not stopped, or something.
C
Maybe... just a thought, then we can move on; sorry for going into the details. But when we send the HUP signal, we stop accepting new requests. In theory we should start dropping threads that are not serving anything, but if you have just one or two long queued requests, the process may be alive long enough to hit the next refresh of the metrics, and then all of a sudden you have two threads only, and both of them are serving. Because by design, right, you're releasing resources that you're no longer using, because you're preparing to roll out the new version, and in that case, if this is how we check saturation, your saturation is high; by design that node should just stop serving anything. But it just happened on the single node, so something was just wrong with this one node.

Because that one is stuck and you reach the grace period, say; I suppose we have a graceful timeout, so you send the HUP and then, up to a certain amount of time, you can still serve what's in there, right? We had the same problem with Gitaly when we were implementing graceful restarts: we saw that most of the operations are very short, but some of them are long. If you are cloning something, how long should we wait to restart?
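A toy Python illustration of that effect, assuming saturation is computed as busy threads over currently available threads; the thread counts are invented:

    def saturation(busy_threads: int, available_threads: int) -> float:
        return busy_threads / available_threads

    # Normal operation: 2 of 16 threads busy.
    print(f"{saturation(2, 16):.0%}")   # 12%

    # During a graceful drain the pool releases idle threads while one or two
    # long-running requests are still being served. If the metrics refresh
    # lands in that window, the same two busy threads read as fully saturated:
    print(f"{saturation(2, 2):.0%}")    # 100%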
B
We have something similar, something like 10 seconds; I don't know the exact number for the grace period. It goes into this not-ready state, so it's not getting new connections but is still waiting to be shut down, and then it happens. And if you look at the last graph in the issue, you see that for all the nodes the error rate was going up, which I'm not sure should be like this; I mean, it looks like we...
B
Yeah, I mean, that's a totally different problem, not saturation, but there is something which could be improved during deployments, for sure. Or maybe not, because if you have long-running queries, you really need to stop them at some point, right? But yeah.
A
Nice. Would you mind opening some follow-up issues on that one, Henry? Just maybe on the alert, and the fact that we get a saturation alert when one node is out, and also as a placeholder for investigating how we can improve things during deployments.
B
I guess this has been discussed a lot already and it is a known issue, but I need to figure out if we have something following this already, or if we decided this is acceptable. I first need to figure out how much we've already discussed this as a problem, but I will follow up on this. So...