From YouTube: 2022-02-21 Delivery team weekly EMEA/AMER
D
Awesome, great stuff. So welcome, everyone, happy Monday.
D
Let's begin. I've got a few announcements, mostly around upcoming time off, with Myra being off for a lot of this week as well. So that's good news. There's also the retro, which I've just mentioned: what I intend to do is use the APAC/EMEA-timed retro delivery weekly on Wednesday, so we'll use that time to put together a proposal of actions that we can take from this.
D
Specifically,
I've
chosen
that
time,
just
because
at
the
moment,
skybeck
and
myra
have
had
the
most
input
there.
So
we've
already
got
some
kind
of
good
thoughts
from
at
least
a
couple
of
people
in
the
americas,
but
please
go
ahead
and
add
some
thoughts
onto
the
issue
before
the
end
of
tomorrow,
and
then
we
can
propose
some
actions
or
an
action
on
wednesday
and
share
that
back
out
to
everyone
awesome.
So
important.
Most
important
topic,
discussion
topic
is
how
we're
getting
on
for
the
release
like.
Is
there
anything?
C
I think as long as the pipeline is green, then we are all set. Fine, yeah.
D
Perfect, great stuff, well done. And for the other incident, the Kubernetes scaling problem you had this morning, how's that going?
B
We also resolved this, so yeah, all good. We can...
D
Nice work, good stuff on that one in particular. I saw you ended up involved for the majority of the duration of the incident. What am I trying to ask here? I guess: what would have been needed for somebody that wasn't a delivery person who's been involved specifically in the migration?
D
What
would
have
been
needed
for
someone
like
that
to
be
able
to
help
us,
like
I'm,
trying
to
think
about
like
what
are
the
sort
of
follow-up
actions
or
training
that
we
maybe
should
take,
so
that
we,
like
any
engineering
reliability,
would
be
able
to
debug
something
like
that.
B
Unfortunately, it was tricky, because it was due to the shielded VMs that were applied a couple of days ago, and it required some trial and error until we figured it out. So I'm not really sure if there's something we can do to get it right on the first try.
B
We had help from the on-call SREs, Steve and Ahmed, and also, with delivery being there, we could retry the pipelines and see what was happening.
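For context, and only as a hedged sketch rather than anything shown in the meeting: if the cluster involved runs on GKE, a quick way to see which node pools had Shielded VM settings applied, the change B identifies as the cause, might look like the following. The use of the google-cloud-container client and the project, location, and cluster names are all assumptions/placeholders.

```python
# Hedged sketch (assumption: the cluster is on GKE). Lists each node pool's
# Shielded VM settings, the kind of recently-applied change B describes.
# Project, location, and cluster names below are placeholders.
from google.cloud import container_v1


def show_shielded_vm_settings(project: str, location: str, cluster: str) -> None:
    client = container_v1.ClusterManagerClient()
    parent = f"projects/{project}/locations/{location}/clusters/{cluster}"
    for pool in client.list_node_pools(parent=parent).node_pools:
        shielded = pool.config.shielded_instance_config
        print(
            f"{pool.name}: secure_boot={shielded.enable_secure_boot}, "
            f"integrity_monitoring={shielded.enable_integrity_monitoring}"
        )


show_shielded_vm_settings("example-project", "us-east1", "example-cluster")
```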
D
All right, fair enough. Keep that sort of stuff in mind, because one thing I'm thinking about for how we handle this year is that, as delivery, we're not staffed for incidents.
D
So
I
know
you
will
end
up
involved
in
quite
a
lot
of
them
and
as
release
managers,
we
absolutely
like
should
be
playing
a
part,
but
also
we
we
don't
like,
have
great
coverage
over
time
zones
and
across
like
vacations
and
things
for
us
to
be
kind
of
like
the
experts
in
all
of
the
kubernetes
stuff.
There
will
be
some
stuff
we
just
are
because
we've
been
the
closest
to
it.
But
in
those
cases
that's
really
interesting
to
hear
like
what
were
the
specific
skill
sets
or
like.
D
If
someone
had
known
x
would
more
people
have
been
able
to
help
us
out,
because
there
are
certainly
times
where
we've
actually
had
a
bit
of
an
unusual
run.
I
think
for
the
last,
like
four
or
five
months,
where
a
lot
of
release
managers
either
at
least
50
of
our
release.
D
Managers
have
been
sres,
but
there's
not
a
guarantee
like
we
don't
split
the
rotation
that
way
it's
just
down
to
availability,
so
there's
certainly
going
to
be
times
in
the
future
where
it
will
be
like
two
back
end
engineers,
and
at
that
time,
like
you
know,
we,
we
can't
necessarily
guarantee
that,
like
it's
delivery,
who
can
resolve
these
things,
we
may
not
have
someone
available
so
just
trying
to
think
like
how
do
we
start
spreading
these
knowledge
things
across
a
bigger
pool
of
people.
A
On this change of shielded VMs, if I followed it correctly, I think this was good. My question is just: if you hadn't found a cause or something to dig into, who would have been the driver for this one? It's the same discussion that we had before: we need to find an owner of an incident, right? So that we...
D
Yeah, it's a great question, and the answer absolutely is yes. In actual fact, this one's interesting, and I think it probably affects SREs a little more disproportionately, because you get sort of sucked in. But I did notice on this one, and I don't know if you still are, but earlier, Ahmed, you were named as the DRI for this incident. At that point, no, you can't leave, because you're the DRI. But what you can do is hand over, and then yes is the answer, you can leave.
D
So
I
would
say
and
actually
everybody
I
think,
we're
going
to
need
to
start
being
a
bit
more
active
on
this.
So
we
will
often
be
the
dri
when
we
call
the
like.
We
create
the
incident,
but
at
the
point
where
you
know
it's
not
like
you
know,
you're,
not
digging
into
release
tools
or
you're,
not
digging
into
the
diff
that
you
just
deployed,
or
you
know
it's
something
beyond
delivery.
D
So
at
that
point
I
would
be
quite
proactive
and
actually
say
like
given
we
now
know
it's
this
steve.
Can
I
assign
the
dri
array
to
you
and
then
you
can
update
the
issue,
and
then
that
removes
you
from
being
the
person
to
do
the
coordinating
and
having
to
keep
on
driving
it
forwards,
but
I
would
say:
watch
for
that
stuff.
We
always
want
to
make
sure
there
is
a
dri,
and
if
it
is
you
make
sure
you
hand
over
to
the
person
it
should
be
to
drive
this
stuff
forwards.
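As a hedged illustration of the handover D describes, reassigning the incident issue is just an assignee update on the GitLab issue. A minimal sketch with python-gitlab, where the instance URL, token, project path, issue iid, and username are all placeholders, not details from the meeting:

```python
# Hedged sketch of the DRI handover D describes: reassign the incident issue
# to the new DRI. Instance URL, token, project path, issue iid, and username
# are placeholders.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="glpat-...")
project = gl.projects.get("example-group/production-incidents")
issue = project.issues.get(1234)              # the incident issue (placeholder iid)
new_dri = gl.users.list(username="steve")[0]  # placeholder username
issue.assignee_ids = [new_dri.id]             # hand the DRI role over
issue.save()
```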
D
No, that's not right. The DRI used to default to the engineer on call; that changed recently. So it's not the default, and you'll often see there are incident issues with nobody assigned, they're unassigned. That's basically the change that has come about. Officially, according to the handbook, the EOC is the DRI from the point they engage, but I still think there's probably great value...
D
It has actually clarified that. When you're working on an incident issue that hasn't got anyone assigned, it's worth just asking, because the way we keep things moving forwards and avoid them just stalling is if somebody actually feels like they are the DRI; then it's a little easier. But the IMOC is there to help us, so if you do need the IMOC, you can page them.
D
We would almost always at that point ask the EOC to take it over, take on the active DRI role, and lead an investigation into what's happening. We would probably assist them, and the IMOC can be there and they can escalate. But for us, I think we would almost always expect the EOC to be the next person.
D
It's up to you; it's a little bit of a judgment call. Think of the IMOC as the person who is there to help out if needed. If you need to coordinate things, the IMOC will help with that. If you need to communicate stuff, the IMOC will help with that. I would say that most times, when you first open an incident, you probably won't need the IMOC, because at that stage it's probably Quality or the EOC who may be there to help you.
D
But
if
you
want
to
patient
by
all
means,
do
there's
not
a
you
should
have
paged
them
type
of
thing.
It's
just.
It
may
not
always
be
necessary,
and
how
do
I
page
them?
Do
we
have
a
handle
for
them?
It
should
be
the
same.
You
can
just
do
pay
like.
I
think
it's
like
slash
pay.
You
can
do
it
when
you
raise
the
incident
as
an
option
page.
Is
it
slash
page
I
mark
as
well.
Do
you
know
henry
of
the
top
of
your
head?
It's
something
like
that.
D
Something like /page. You probably don't need it, because you can lean on the EOC, and I think if the EOC isn't driving it, you can perhaps suggest to them: do we need to call the IMOC in, could you page the IMOC? The IMOC will almost certainly be more expecting the page to come from the engineer on call. Not to say you can't page them, but that's the more normal flow. Thanks, awesome, great work. Is there any other stuff that we need to do for 14.7?
D
No? Amazing, great work. So item B is probably a little bit more of an FYI, but I just want to check in on thoughts. I shared a link here, the YouTube link, which was basically last week's APAC demo, where two really great demos were given.
D
They
were
both
kind
of
around
future
direction,
things
around
release,
tools
and
kate's
workloads,
they're
very
much
interesting
in
terms
of
deployment,
thinking
and
vision,
and
what
we
can
do
so
certainly
relevant
to
all
of
us,
so
definitely
encourage
you
to
have
a
watch
of
those
and
not
like
we're
gonna.
Do
it
straight
away
like
they
are?
D
Probably
some
of
them
are
maybe
not
even
this
year,
like
maybe
a
little
further
out,
they're,
quite
visionary
type
things,
but
what
I
wanted
to
actually
do
was
also
encourage
everyone
like,
if
you
do,
give
a
demo
or
see
a
demo
or
have
a
meeting,
and
you
think,
oh,
that
was
really
relevant
like
it
was
beyond
just
the
like:
kubernetes
migration,
demo,
scope
or
it
was
beyond
something
feel
free
to
share
those
out
and
suggest
them
for
the
team,
because
I
think
we
are
at
this
year.
D
We'll
see
a
lot
more
overlap
between
our
projects
and
how
they
sort
of
fit
together
and
then
just
opening
up
really,
if
anyone
has
any
better
ways
to
share
these
versus
just
just
randomly
dropping
them.
In
slack
and
saying
this
thing
wasn't
useful
to
watch
like,
should
we
like
do
something
with
our
youtube
playlist
or
like?
How
would
you
be
the
best
way
to
kind
of
pick
up
the?
C
Perhaps we could adopt the same approach and have a newsletter. It doesn't have to be a document; it could be an issue of things that are interesting. But I guess that is also the point of this meeting, so I'm not sure.
E
An issue would be nice and easy to start with, because anyone can easily just go and put a comment: "I found this interesting."
E
The same is true for a Google Doc too, I guess.
D
Yeah,
it
makes
sense,
yeah,
okay,
okay,
let
me
open
an
issue
I'll
stick
to
an
industry,
a
little
comment
and
see
we
see
if,
if
you
find
it
useful,
continue
to
add-
and
you
can
turn
on
notifications
or
whatever
and
let's
see
how
that
works
out.
Great,
that's
great
thanks.
C
I believe two weeks ago, or a week and a half ago, I don't remember, we had this security fire drill in which we discussed what to do with a security release when a security fix introduces some sort of breaking change. I finally had the time to put together a document trying to address all the action points that we concluded on.
E
Awesome. Sorry if this has already been covered and I missed it because I joined late, but what did the RMs have to contribute for the incident?
B
I think after our meeting I will also get back and see whether it's resolved or not, because we said it's resolved, but it seems like it's not. And the other issue was that release.gitlab.net had the last 14.8 tag deployed to it, but we had issues with Let's Encrypt, I think, not being reconfigured via Chef, because we enabled it or disabled it. I don't really remember whether we enabled or disabled it; I think we enabled it. So this was the other issue.
E
Okay,
so
the
other
one,
the
one
for
release,
probably
affected
us
directly,
but
the
first
one.
What
was
like
the
main
role
of
the
item.
B
It affected the deployment because it was on gstg, so you were not able to deploy, because it affected the Kubernetes clusters, basically.
E
Okay, sorry, my question might not be very clear; I think I'm extra tired today. I thought there were two incident calls, so I was wondering, for the first one, who requested the RM? And not so much who, but for what: to know the diff, or for something else?
A
Deployments were failing on gstg for the first one, and looking into it we saw that Kubernetes had scaling issues; a long discussion of looking into it ended up discovering the problem. And for release, we also saw the release deployment failing for the new package that we tagged today, and then we started looking into that and created an incident for that one. So in both cases we created the incident, even though we didn't cause the incidents; we were just the creators.
D
So good, we're all here to help out. For that one, though, once we get to the bottom of it, it would be good to actually think about whether there was, or should have been, an alert that went with that incident. We shouldn't only discover scaling problems when we do deployments; there's some severe limitation to our cluster if it can't scale. So it might be worth us also reviewing the observability of the cluster. Awesome.
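As a hedged sketch of the kind of observability check D is suggesting, something that would surface scaling problems outside of deployments: an alert could fire on pods stuck in Pending. A minimal version with the official Kubernetes Python client, where the 15-minute threshold is an arbitrary assumption:

```python
# Hedged sketch: report pods stuck in Pending, the symptom a cluster-scaling
# alert could catch before a deployment does. The 15-minute threshold is an
# arbitrary placeholder.
from datetime import datetime, timedelta, timezone

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()
cutoff = datetime.now(timezone.utc) - timedelta(minutes=15)

pods = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
for pod in pods.items:
    created = pod.metadata.creation_timestamp
    if created and created < cutoff:
        print(f"{pod.metadata.namespace}/{pod.metadata.name} "
              f"pending since {created}")
```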
D
So I do want to get to the social bits. I encourage everyone to click the links on item three, because MTTP is looking super, so well done, Myra, and also everyone else who's been involved in all that stuff. It's looking really, really great this week, so definitely have a click through on those things. And we've got some actions coming in as well to do some general improvements, particularly on the security releases.