From YouTube: 2022-05-16 Delivery team weekly EMEA/AMER
B
So we are at time... well, late, in fact. I'm late, so I'll get started. We have a bunch of announcements; I'll let you all have a read through those in your own time. Oh, Myra, you're good? I can stay if you want to discuss, if you've got cover. Awesome, great.

B
On to discussion items. So, sadly, we are getting very close to both Robert and Henry leaving us. We all know they have a whole heap of knowledge and things they're working on, or have been working on, over the years.

B
So they're both already thinking about handover, but please, if you can, give them a hand; that would be super helpful. We've got docs set up for each of them. If you have any thoughts at all on things that either of them uniquely owns the knowledge of, or has been working on, that we need to find a way to hand over for them, please just drop a note in there.
B
That might mean we run some sort of knowledge-share session, or add some documentation, or somebody does a walkthrough of something. Henry's got an example in his doc already: he's already putting together all the pieces for what's going on with the container registry, and he'll be handing that over to Graham so that we can continue to run it. So, any thoughts at all on things that we might want to make sure we have handed over?
B
Awesome. Myra, do you want to run through?
C
Yep, thanks. I was hoping for Alessio to be on the call, but he's not here, so I will let him know. Well, this is basically the plan that we have; I just wanted to share it. Since this is the week leading up to the 22nd, the plan that we have so far, barring any incidents, is just to deploy today and tomorrow and continue auto-deploys; on Wednesday it is to prepare, or rather to announce, the candidate commit, the last one.
C
Now, I'm not sure if Alessio is going to be available to do this. I also have a personal commitment, and I'm going to be up very early on Sunday, so I can also do this if Alessio is not available, or perhaps we can both be online. Is this something that I need to discuss with Alessio?
B
Let us know if you need help. I'm also around on Sunday, but please, please ask for whatever you need to cover this.
C
Awesome, okay. And then, if time allows, and I think it will, because we are not going to perform it, I think Graham is going to perform the last batch release for 14.10.

C
Should I open up an issue for that? There are some steps that are going to be required for him to do, because there are some MRs assigned to us that were created directly against the stable branch and will need to be merged by Graham instead of us. So I just want to open up the issue so we have something clear to follow. Okay.
B
One thing to watch out for, for Myra and also Alessio and Graham, and I hope you have seen the recording, is that the DB decomposition stuff is rolling through staging this week. I've already warned them that we will have some deployments that we must get through for the release prep. They also must get some of their changes through in order to continue their projects, so work around that, but also don't feel bad.
A
Yeah, so there's a proposal coming forward where we're trying to make a change to how a part of our front end works, where we're trying to pin front-end requests to canary if you have visited canary, or to the main stage if you're on main stage. I don't agree with the solution.
B
That was what I was wondering about, yeah. So I would suggest in this case, then, because I think what it looks like from the MR is that other people don't necessarily know either: do we know some people who are good at thinking these sorts of things through? Can we get a small group of people to actually, you know, capture what options we have on this?
B
And I mean it doesn't have to just be us, right, because I think we're quite one-sided; like, if it's front-end saying this will be hard... Okay, let me, let's see, so we've got...
B
I would, yeah, so it's worth just discussing. Okay, let me see if we can gather up a group of people who we can actually run some ideas around with on getting this properly solved.
B
Okay, well, thanks for raising that. And a good point that, from my view, even if we do this it doesn't necessarily gain that much, so even if people love this solution, it probably can't be the only solution anyway.
C
I guess that solution is to, sorry, lower the burden on front-end maintainers: instead of merging two MRs at different times, they just want to merge one MR. That's what they are saying.
B
All right, let's see if we can get some options. Thanks for bringing that up.
A
All right, the next item I've got: Myra, this is related to something you and I ran into on Friday, and Alessio brought this up to me as well. We're starting to see issues where Sidekiq is getting stuck during deployment. I troubleshot this a little bit with Myra last week when she was doing a deploy, and I don't know what is going on, but Sidekiq just gets stuck, you know.
A
So right now I've just created a placeholder issue. I think it warrants setting an appropriate priority and trying to get it picked up soon. The fact that this has happened at least twice in the last two business days is quite concerning, so I'd like to try to get in front of it and figure out what's going on.
A
The one thing that I did not comment on in the issue yet, but something I did see when I was troubleshooting this last week, was that it looks like our pods are getting stuck with an error that doesn't make sense. It appears they get spun up and started, but then Kubernetes records a container creation failure, even though the containers were started, and the liveness and readiness probes are failing.
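A minimal sketch of how the stuck pods, container states, and probe-failure events described above could be inspected, assuming the official kubernetes Python client; the namespace and label selector are hypothetical placeholders, not the team's actual tooling.

```python
from kubernetes import client, config

# Hypothetical namespace and label selector for illustration; the real
# Sidekiq deployment on the cluster will use different names.
NAMESPACE = "gitlab"
LABEL_SELECTOR = "app=sidekiq"

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR)
for pod in pods.items:
    print(f"\n{pod.metadata.name}  phase={pod.status.phase}")

    # Container-level state shows whether containers actually started,
    # even when the pod as a whole is reported as failing to create.
    for cs in pod.status.container_statuses or []:
        if cs.state.waiting:
            detail = f"waiting ({cs.state.waiting.reason})"
        elif cs.state.terminated:
            detail = f"terminated ({cs.state.terminated.reason})"
        else:
            detail = "running"
        print(f"  {cs.name}: ready={cs.ready} restarts={cs.restart_count} {detail}")

    # Events carry the container-creation failures and liveness/readiness
    # probe messages that Kubernetes records against the pod.
    events = core.list_namespaced_event(
        NAMESPACE, field_selector=f"involvedObject.name={pod.metadata.name}"
    )
    for ev in events.items:
        print(f"  event: {ev.reason}: {ev.message}")
```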
A
So there's a conflict in my mind with how a deployment is expected to roll through. It warrants a little bit of research, and if this continues to be a problem it's obviously going to be quite irritating. So I would say, for release managers: if the regional cluster is taking longer than, say, 20 minutes, kind of assume there's a problem at the moment, and we need to try to figure out what to do to alleviate that problem.
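A rough sketch of the 20-minute rule of thumb above, again assuming the kubernetes Python client and a hypothetical deployment name: it flags a Deployment whose rollout has not made progress for longer than the threshold.

```python
from datetime import datetime, timezone
from kubernetes import client, config

# Hypothetical deployment name and namespace for illustration only.
NAMESPACE = "gitlab"
DEPLOYMENT = "gitlab-sidekiq"
THRESHOLD_MINUTES = 20

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(DEPLOYMENT, NAMESPACE)
desired = dep.spec.replicas or 0
updated = dep.status.updated_replicas or 0
available = dep.status.available_replicas or 0

# The "Progressing" condition records the last time the rollout moved forward.
progressing = next(
    (c for c in (dep.status.conditions or []) if c.type == "Progressing"), None
)

rolled_out = updated == desired and available == desired
if not rolled_out and progressing is not None:
    stalled_for = datetime.now(timezone.utc) - progressing.last_update_time
    if stalled_for.total_seconds() > THRESHOLD_MINUTES * 60:
        print(f"No rollout progress for {stalled_for}; assume a problem and investigate.")
```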
A
The work that I did on Friday to troubleshoot wasn't significant enough to lead me in a direction towards what a good solution to this problem might be, so more work is required to figure this out. I just wanted to raise this because it's going to be a constant problem until we address it.
B
So, a couple of questions. Would you be able to add some details, like, have we got any logs or jobs or things that someone who didn't see this getting hung would be able to go into and take a look at?
B
That's the answer I was wanting, because what I'm going to suggest is we give Graham a ping on this. I know a lot of people are quite busy on other critical things, but at the same time, as much as I don't want to interrupt what Graham's working on, I also think it'd be better to not have this going on through the whole of this week if we are trying to get a release prepped. Yeah, okay.
B
Cool, okay. So, release management stuff. Deployment blockers: we haven't got any at the moment, but Myra, let's see, I've given you a quick ping on gathering up what the blockers were from last week. At the moment it looks like last week was incident-free, so I'll go with that unless I hear otherwise, though I feel like there may have been others. But based on things we've seen, and the fact that we've got this Sidekiq deployment issue coming in for investigation.
B
Do we need to be changing team priorities to support you? By which I mean, do we need to have the conversations to downgrade sshd and downgrade registry, or stop on our OKR, in order to do more to support release management?
C
No, I don't think that is the case. I don't see any issue that would move us faster right now. As long as we don't have incidents, which is a big "if", release management is easy, or should be easy, this week.