From YouTube: 2023-04-24 Delivery team weekly EMEA/AMER
A
It looks like we are complete, so welcome to the Delivery group weekly, 24th of April 2023, EMEA/AMER time zones. We have some announcements on expenses and other experimental features that have been added, and next month there is a public holiday in a lot of places. We can start with the discussion topics; I think we can move immediately to point B, since Jenny doesn't feel very well. So I guess this is going to... let's.
E
I don't want the stage, so: for those that are playing release manager roles at this moment in time, you know that we're blocked because of the 16.0 package and Omnibus having deprecations that are preventing auto-deploy from doing its job. This should not be an issue, because we've got a procedure in our monthly release task that says: look for deprecations and, you know, let's get ahead of the game. I fired up an issue to figure out what we should do about this, but I think we've just been looking at the wrong thing, so I have a mild proposal in the issue that I created that adjusts what we look for inside of that deprecations file. Whether that's enough, I don't know, so I'm looking for feedback from others. Steve, thank you for taking a gander, because I do think that is a decent approach.
C
Yeah, so in the task that's in the release task issue — let me open one up here — the task says, you know, check for any deprecations to see if we were possibly affected in our K8s or Chef config. So there's the deprecations link, which links to the file that has all the deprecations listed, but then we just link to the K8s and Chef directories.
E
Okay, yeah, I didn't scroll all the way to the right; that's actually a really long line in our file. Okay, so yes, the configuration deprecations tell us what's being deprecated, and those other two links point to where we configure those options: the Chef repo for all of our Omnibus installations, and the K8s workloads, which is where we configure items related to our Helm chart.
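(As an aside: the "check for any deprecations" step discussed here could be partially automated. Below is a minimal sketch, assuming a plain-text export of the deprecations list and local checkouts of the Chef repo and the K8s workloads; the file names, paths and key format are all hypothetical, not the team's actual layout.)

```python
#!/usr/bin/env python3
"""Hypothetical helper: cross-reference a deprecations list with config repos.

Assumptions (not from the meeting): deprecations.txt holds one deprecated
configuration key per line, and the Chef/Helm checkouts live at the paths
below. Adjust to the real file format used in the release task.
"""
from pathlib import Path

DEPRECATIONS_FILE = Path("deprecations.txt")               # hypothetical export
CONFIG_REPOS = [Path("chef-repo"), Path("k8s-workloads")]  # hypothetical checkouts

def deprecated_keys() -> list[str]:
    # One deprecated configuration key per line, blank lines ignored.
    return [ln.strip() for ln in DEPRECATIONS_FILE.read_text().splitlines() if ln.strip()]

def scan(repo: Path, keys: list[str]) -> None:
    # Walk Ruby/YAML config files and report lines mentioning a deprecated key.
    for path in repo.rglob("*"):
        if not path.is_file() or path.suffix not in {".rb", ".yml", ".yaml"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for key in keys:
                if key in line:
                    print(f"{path}:{lineno}: uses deprecated key {key!r}")

if __name__ == "__main__":
    keys = deprecated_keys()
    for repo in CONFIG_REPOS:
        scan(repo, keys)
```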
E
That's
just
where
we
maintain
those
configurations
and
that's
where
those
changes
would
be
prompted
So
based
like
based
on
the
comment
that
you've
created
I
think
we
just
need
to
probably
do
a
minor
update
to
either
maybe
two
places.
One
is
inside
up
here
that
points
to
a
run
book
that
describes
what
we
should
do
and
create
a
run
book
to
better,
describe
and
more
better
describe
what
we
should
be
doing
to
fix
the
situation,
which
is
probably
creating
an
issue
for
the
appropriate
team.
E
We could certainly handle some of this; there's a bunch of SREs on this team, so we could tackle some of it, and realistically we should have tackled what is blocking us today two months ago. So it's a little unfortunate that we ran into this, but if anyone has any other ideas... I might get started working on this exact process today, because I feel like this should be something that's quite easy to work on, but...
F
What I would like to see here is more about... I remember, when I was in development, having no clue of where the configuration for GitLab lives, and this is really bad, right? Because when there were issues — when there was an incident or anything — I had to figure it out.
F
What is the configuration of the GitLab that we're running on GitLab.com? Figuring that out, to understand whether an incident was related or not, has always been a pain. So I'd rather spend time making sure that information is easier to find for everyone in development. So: what's the configuration? And expand on that. I suspect there is a workflow somewhere that explains how developers generate the deprecation message, and we should build on top of that and say: go there, figure out if we are using this.
F
Then we have enough time to figure things out and make things work, as well as to educate everyone in the process, so that we don't get caught by surprise by things changing, and they — I mean "we" and "they"; in this case we and they are all the same company, but just for fun — start to figure out proactively where the configuration lives and understand how we run the system.
E
Effectively. So let's continue with the issue that I created so far and see what we can do in the meantime, and in parallel I'll try to figure out what we could do to spin up an issue to modify procedures to cover that exact aspect. Because I appreciate that, and I think there should be a way to accomplish this by pushing it towards engineering before, rather than after, the fact of building packages. So that all makes sense to me.
D
And that part about preparing for 17.0 is something Sam can help us with; I know he's working on that. So he can combine our knowledge — here's what we can see, or here's the tool we could use — and then he can help us surface that and get other teams to be able to prioritize it.
E
All right, cool, I'll handle that in some way, shape or form.
D
Awesome, thanks. Coming back, one other piece I wanted to mention on this — perhaps the other pressing piece on our side that we can take an action for — is that release templates will always have things that are a little bit ambiguous or perhaps out of date, just because time evolves. We usually write stuff when a situation happens, so we have context, and then six or twelve months later that context doesn't always exist, or the tool has changed, or anything like that.
D
So I would say on this one: if we have steps like that, where it's "check this thing" or "make sure this thing has happened", and it's not really clear what you're meant to be looking for or why it's on there, please ask. Best case, we can eliminate a step; or maybe it's one of those that isn't super clear unless you know specifically what you're looking for, and we can maybe catch something early.
C
And maybe this is for a separate conversation, but — I've never looked at this file before — is this specific to infrastructure? Because we're talking about pushing things to development, and from what I can see here, it's not stuff that development will be working on. They have a separate deprecation process for features, as far as I'm aware, and it doesn't include this type of file. Well...
E
This
is
just
general
configuration
for
gitlab
itself,
so
if
engineering
is
Desiring
to
change
the
way
we
need
to
configure
our
services,
then
someone
needs
to
reconfigure.com
in
some
way
shape
or
form
to
account
for
that
desired.
Configuration
so
like
Alessio
is
suggesting.
If
an
engineer
wants
to
reconfigure
the
configuration
file
for
giddly,
we
need
to
know
about
that.
It
would
be
wise
if
the
development
team
kind
of
assisted
with
us
in
that
realm
and
maybe
reached
out
to
us
ahead
of
time,
especially
if
something
got
merged
in
that
enabled
backward
compatibility
from
the
start.
E
That
way
that
change
could
be
tested
ahead
of
time
assuming
Omnibus
and
our
Helm
chart
was
updated,
obviously,
and
then
that
changes
then
tested
it
works
and.com
and
that's,
like
another
data
point
to
encourage
that
the
change
that
was
proposed
is
acceptable
and
working
as
desired
because,
hypothetically
since
we
use
Auto
deploy
as
a
method
of
testing
everything,
if
we're
not
testing
the
new
configuration,
we're
kind
of
missing.
B
Point B is mine. So I started looking at the deployment metrics and the whole epic about visualizing the things that we need. Hopefully we have more metrics now, but I wanted to point out something about our visualizations.
B
So
I
think
that,
despite
the
fact
that
we
discussed
this-
and
we
discussed
that
we're
gonna
have
like
a
One
dashboard
per
deployment
where
it
shows
everything
about
deployment
and
we're
gonna
have
like
dashboard
for
for
jobs
and
then
like
a
overall
dashboard
that
shows
deployment
slas
and
we
need
to
repurpose
this
deployment.
Sla
I
see
that
we
start
building
like
multiple
dashboards
per
different
piece
of
the
the
this
information,
so
I
I
I.
B
I think we need to combine the different pieces of info in one single dashboard, and build that dashboard as we discussed before, with drill-downs and all the things. I also see that in multiple places we have a combination of aggregated data and data per version: even if we have a version select box on some dashboards, we still throw in aggregations over multiple versions.
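(To make the aggregated-versus-per-version distinction concrete, here is a toy sketch with invented data and column names — not the team's actual metrics pipeline. A version-filtered panel should show the per-version view, not the overall aggregate.)

```python
import pandas as pd

# Toy deployment records (invented data), one row per deployment.
df = pd.DataFrame({
    "version": ["15.11.0", "15.11.0", "15.11.1", "15.11.1"],
    "lead_time_hours": [18.0, 16.5, 20.0, 17.0],
})

# Per-version view: what a dashboard should show when a version is selected.
per_version = df.groupby("version")["lead_time_hours"].mean()

# Aggregated view: one number across all versions; mixing this into a
# version-filtered panel is the confusion raised in the meeting.
overall = df["lead_time_hours"].mean()

print(per_version)
print(f"overall mean lead time: {overall:.1f}h")
```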
B
That's one thing. Another thing is that we need to use more visualizations, because some of the dashboards are just tables with numbers, and also to use more color coding, because color coding is what draws your attention. I also want to point out that dashboards should be accessible.
B
And
having
like
gray
numbers
on
black
dashboard
is
also
not
super
friendly
for
the
colorblind
people,
yeah
I
think
like
we
need
to
think
on
overall
design,
whatever
overall,
overall
design
strategy
for
the
dashboards,
I
think
and
I
gonna
I
I.
B
Coming
back
to
this
to
this
topic
of
visualizations
now
and
I
will
try
to
combine
everything
that
we
have
now
in
one
single
dashboard,
it's
about
deployments,
but
in
the
same
time
I
will
try
to
come
up
with
kind
of
design,
guide
guidance
for
the
dashboards
and
they
need
to
kind
of
each
dashboard
should
answer
the
questions
that
you
are
looking
for
and
yeah
like.
Currently,
information
is
a
bit
sparse
over
multiple
dashboards
and
it's
hard
to
digest
it.
I
would
say.
A
One request of Vladimir about this: maybe let's see what actually fits in the horizon and what we can work on later. So let's make sure we can show the values even if the dashboard is not the best-designed one, as long as it has the right values that we can show for the timings, the blockers part and so on.
A
I
know
there
were
some
some
technical
difficulties
without
the
latest
pipeline
name
to
do
the
drill
down
if
something
was
discussed
last
week
with
the
Ruben,
but
before
doing
like
extra
improvements-
let's,
let's
finish
this
iteration
with
what
we
have
and
to
not
a
large
scope.
Now,
in
the
last
the
last
few
days
of
the
quarter.
D
What I'd recommend on that one — I don't know much about where you are up to on these things, and I'm not sure if you want us to contribute to all that — but if there are issues where there are questions, where you want people to weigh in on the layout or interpretation of things, feel free to ping the issues out so we can actually do that. Otherwise, I'll trust that you have it under control.
G
Yeah, thank you. So I don't have an issue for this one — I plan to open one soon — but I wanted to also chat with you about this. On the 20th, which I think was last Thursday, we started the preparations for 15.11, the release candidate.
G
When we tagged RC42, we encountered a bunch of failures, and it turns out that these failures had been present since at least two weeks ago. We fixed most of them — I think we encountered four different failures and three were fixed — but one remained, and since it was already late in my afternoon, already the evening, and we were supposed to tag the next day, I took the executive decision to just ignore this failure. I did it because it was only on CE.
G
These
failures
were
not
pleasant
on
security,
neuron
canonical,
they
seem
to
be
sporadically
flaky
but
yeah.
So
my
question
is
here:
I
have
a
bunch
of
questions
about
this.
It
is
basically,
how
can
we
detect
these
failures
earlier
during
the
release
cycle
and
not
when
we
talk
the
release
candidate
and
what
should
we
do
when
they
fail
I,
try
to
analyze
the
root
of
the
of
these
failures
and
try
to
ping
some
themes
about
it
to
to
ask
them
hey
this
failure
is
legit.
What
should
we
do
about
it?
G
But
since
it
was
late
in
my
afternoon
and
most
of
the
teams
were
located
in
India,
that
was
in
a
good
resource
and
then
I
was
also
not
clear.
Who
should
be
the
dried
of
these
failures?
If
should
be
engineering
productivity,
if
it
is
on
us
or
basically,
what
should
we
do,
because
we
don't
really
have
our
own
book
for
that.
D
Okay, cool. So I actually have a call a bit later with Vincy that touches on this, and she has mentioned that one thing that's tricky is we tend to ping Quality on these spec failures, and Quality actually don't write the specs, or even review them, so they may know no more than us.
D
She
suggested
that
Devon
call
escalation
process
may
be
the
way
to
go,
but
I
think
to
kind
of
summarize,
where
we're
up
to
right
now
is
we
basically
need
to
figure
out
a
process
so
that
we
can
actually
that's
to
answer
your
your
C3,
which
is
like?
Basically,
what
do
we
do?
D
Maybe
just
rolling
out
of
that
gives
us
a
way
to
detect
earlier,
but
yeah
go
ahead.
Let's
see.
F
I
was
going
to
say
that
this
is
an
environment,
diversity
problem
where
you
have
something
that
works
on
two
out
of
three
environments
and
it
failed
on
the
third
one,
which
is
really
hard
to
figure
out,
because
it
very
likely
will
be
unreproducible
on
developer
machines
and
asking
developers
on
call
to
fix
this.
Where
I
mean
we've
seen
the
result
of
developers
on
call
for
regular
incidents
and
things
that
were
even
reproducible
in
general,
I'm
not
expecting
to
to
find
I
mean.
F
Yeah, but this will not solve Myra's problem, because she was late on a time-bound process. So maybe we should talk about how we surface these earlier, because she mentioned it had been failing for three days, right? And this is probably the environment diversity thing I was mentioning before: we are treating Canonical as the reference environment for master-broken and all this type of all-hands-on-deck reaction activity. So if master is broken on Canonical, it is very well surfaced.
F
Every
everyone
I
mean
there
is
a
clear
path
to
get
out
of
this
right,
but
no
one
take
a
look
at
what
is
happening
on
dev
security.
I
think
we
are
monitoring
now
I'm,
not
sure.
So
that's
the
thing
right.
So
we
rely
on
on
the
last
chain
of
the
mirroring,
but
no
one
else
is
taking
care
of
those
because
they
don't
need
it,
and
so
that's
the
problem
right.
So
either
we
move
all
the
monitoring
to
the
to
the
last
stage.
F
So
we
do.
The
master
broken
is
always
on
dev,
for
example.
So
because
it's
the
last
one
and
then
we
we
need
something
to
make
sure
that
we
are
not
lagging
behind,
that
the
things
are
working
and
that
is-
and
they
are
actually
green
as
well
on
top
because
we
tag
based
on
the
status
on
canonical
or
or
Dev,
depending
on
where
we
are
in
the
state.
But
then
we
rely
on
the
pipeline
on
on
Devo.
We
can
even
say
we
don't
care
about
the
pipe,
the
spec
on
devs,
okay.
F
So the thing is: is CI working? There are a lot of projects there that are just proving that CI is working, but I don't think anyone is checking that the result of the pipeline on Canonical EE is the same as the result of the pipeline on Dev CE, right? No one is checking, so this is not a real test.
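(A minimal sketch of the cross-mirror check being described here: poll the latest master pipeline status on each mirror via the GitLab pipelines API and flag divergence. The hosts, project IDs and token handling below are placeholders and assumptions, not the team's actual setup.)

```python
#!/usr/bin/env python3
"""Hypothetical cross-mirror pipeline comparison.

Assumes GITLAB_TOKEN is exported and that the (host, project_id) pairs below
point at the canonical and dev mirrors; both IDs are placeholders.
"""
import os
import requests

TOKEN = os.environ["GITLAB_TOKEN"]

# Placeholder (host, project_id) pairs for the mirrors being compared.
MIRRORS = {
    "canonical": ("https://gitlab.com", 1),       # placeholder project ID
    "dev": ("https://dev.gitlab.org", 2),         # placeholder project ID
}

def latest_status(host: str, project_id: int, ref: str = "master") -> str:
    """Return the status of the most recent pipeline on `ref`."""
    resp = requests.get(
        f"{host}/api/v4/projects/{project_id}/pipelines",
        params={"ref": ref, "per_page": 1},
        headers={"PRIVATE-TOKEN": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    pipelines = resp.json()
    return pipelines[0]["status"] if pipelines else "none"

statuses = {name: latest_status(host, pid) for name, (host, pid) in MIRRORS.items()}
print(statuses)
if len(set(statuses.values())) > 1:
    print("mirrors disagree: investigate before tagging")
```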
D
Okay, it doesn't have to be a planned process — we can certainly figure that out — but let's try and understand what happened, and then maybe pull in the thoughts I just shared, plus any others, around: what might we need for this to be a process? What are the different critical points, or times, at which we need to test something? And then from there we can build out a new process.
D
We can figure out next steps for sure. Perfect, thanks very much.
B
Yes, sorry, just quickly — yes. So last week, in terms of incidents, was worse than the week before, I'm afraid, but anyway, we managed to promote your packages and...
B
Yes, the first thing: we managed to successfully roll out the 15.11 version, so that's a kind of major milestone, at least for me. The deployment frequency was more or less the same as before. I think somewhere here is our release, I guess, maybe.
B
But
the
in
the
average
we
had
like
five
six
deployments
a
day,
then
what
else
yeah
here
is
here
is
the
release.
I
think
21st,
no.
B
Yeah,
yeah,
okay,
but
anyway,
their
deployment
kind
of
lead
time
is
about
18
hours.
B
On
the
on
the
spike,
this
was
on
Monday,
I,
guess
and
then
afterwards
was
less
than
that
and.
B
Deployment
blockers,
as
I
said,
we
had
pretty
intense
week
in
terms
of
incidents,
so
we
had
putting
incidents
in
total
and
it
blocked
like
a
38
hours
in
total.
What
else
I
need
to
show.
D
Just
start
on
those
blockers,
Vladimir
yeah,
especially
given
we
had
so
many
and
such
a
lot
of
time
blocked.
Would
you
be
able
to
add
the
blocker
type
label
onto
the
the
details
table
under
week,
16
for
each
of
the
incidents.
D
Root
the
root
cause
label
for
each
of
those.
B
And
then
yeah,
that's
it
from
my
site.
D
I
just
want
to
say:
Well
done,
Vladimir
well
done.
First
shift
completed
big
milestone,
so
I
don't
go
through
that
and
Myra
as
well.
Well
done
to
you
because
I
know
it
was
a
tough,
tough
Milestone,
so
super
work
getting
the
release
out
on
time
on
Saturday.
B
I
definitely
happy
to
go
through
this
process
and
I
want
to
say
huge
thanks
to
Myra
and
everyone
who
was
involved.
So
that
was
a
lot
of
new
knowledge
for
me
and
a
lot
of
new
stuff
and
yeah
I'm
looking
forward
for
the
next
shift,
so
I
I
hope
next
shift.
Gonna
go
smoother
than
this
one.
So
since
I
already
know
a
little
bit
about
this
release,
management
thingy
so
yeah
like
need
to
iterate
and
improve.