From YouTube: 2023-03-13 Delivery team weekly EMEA/AMER
A
This is Monday, 13th of March 2023, here in AMER, the delivery group weekly. Thank you for the reminder of the monthly [inaudible].
A
So I guess, please, let's give it a read and come ask questions if needed. Even in advance is even better; some people are off. And then we have the first discussion point; it is from myself.
A
So what I wanted to discuss a bit is what the entire team would expect from this work that we are doing behind, you know, measuring and reducing deployment pipeline inefficiencies. This is what we are working on for this quarter. We already started to track several metrics and traces.
A
I want to see if we are missing something, if there is something that we overlooked, not only in the data we are tracking, but also what would be the best way for the team to access this data, and how we can generate the most value out of this data. Because most of the time we have a lot of data, but if we don't have the right way to approach it and expose it, and it's not easy to access, clearly we cannot extract this value.
A
So I would like to have a group discussion here, to see if there is something missing that we didn't already discuss, or some wishes that are not only coming from one side, because you know this has been kind of a one-team-geared OKR so far, but this is going to benefit everybody, right? Everybody is in release management; everybody is going to look at those metrics. And what about the traces? For instance, this morning Alessio was asking a question about how we can, you know, use these traces better.
A
So far we didn't have this data, and so far we are not even expert in understanding which are the exact use cases we want to have for this data. I think it would be a good starting discussion point to see, since we are six, seven weeks out from the end of the quarter, whether we need to steer the direction a bit, so that this OKR is going to produce more value for ourselves.
C
So basically, that was exactly what we discussed with Ahmad a few minutes ago. Okay, perfect. Well, we looked at the whole epic. First of all, we have now split the inefficiency and KPI epics, right? So we have a dedicated epic for KPIs and a dedicated epic for pipeline inefficiency.
C
For
me,
it
looks
like
a
it's
a
bucket
of
ideas
and
and
metrics
that
take
in
just
for
the
sake
of
metrics,
so
it
looks
very
sporadic,
honestly
and
I
think
that
well
the
the
goal
of
this
should
how
to
say
we
should
have
our
observability
should
our
reservability
system
should
be
able
to
answer
specific
questions
and
the
specific
questions
are
like
where
we
should
go.
Where
do
we
optimize?
What
is
our?
C
What is our main target for the optimization? Because having a metric telling you that the number of failed jobs increased yesterday and decreased today doesn't tell you anything. I think, from the observability point of view, we should think about the whole chain of decisions. So you have data on one side and you have a decision on the other side, because by looking at the data you need to decide something. Okay, there is data.
C
In
the
sake
of
data
doesn't
make
sense,
data
should
should
be,
should
drive
the
decision-making
process
right
and
our
and
and
also
having
from
my
point
of
view
again,
you
might
I
might
be
wrong,
but
from
my
point
of
view,
having
the
aggregated
data
exposed,
already
kind
of
limits
us
on
on
on
more
granular,
thingy
I
would
better
not
to
if
we
are
thinking
about
metrics,
like
with
the
first
thing
that
we
we
need
to
think
is
like
what
the
what
the
queen,
what
the
question
answers
this
metric.
C
It's not the best approach, from my point of view. We need to expose the data itself and then build aggregations and derivatives on top of that, because a derivative is actually an analysis on the lower-level data, right? And you can build a lot of those derivatives on top of the pure data.
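To illustrate the raw-data-first approach described here, a minimal sketch (the event fields and helper names are assumptions for illustration, not the team's actual schema): if the individual deployment events are exposed, any aggregate is just a derivative computed after the fact.

```python
from datetime import date

# Hypothetical raw, per-event records: the lowest-level data we would expose.
deploy_jobs = [
    {"day": date(2023, 3, 6), "job": "gprd-cny", "status": "failed",  "duration_min": 42},
    {"day": date(2023, 3, 6), "job": "gprd",     "status": "success", "duration_min": 55},
    {"day": date(2023, 3, 7), "job": "gprd-cny", "status": "success", "duration_min": 38},
]

def failures_per_day(jobs):
    """One derivative: daily failure counts, computed from the raw events."""
    counts = {}
    for job in jobs:
        if job["status"] == "failed":
            counts[job["day"]] = counts.get(job["day"], 0) + 1
    return counts

def success_rate(jobs):
    """Another derivative over the same raw data; no extra instrumentation needed."""
    successes = sum(1 for job in jobs if job["status"] == "success")
    return successes / len(jobs)

print(failures_per_day(deploy_jobs))  # {datetime.date(2023, 3, 6): 1}
print(success_rate(deploy_jobs))      # 0.666...
```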
C
What we also discussed with Ahmad today is: we basically have a delivery process, right? And the delivery process starts with the packages. We cannot deliver more packages than we actually built, right?
C
So therefore this package build pipeline is kind of our entry point, and it's our bottleneck, and we should optimize for the bottleneck. We should produce the same number of deployments per day as the number of packages we produce, right? And then we should think, and here is the main question that me and Ahmad wanted to discuss with the team: what is our optimization strategy?
C
Are we optimizing for delivery, or for the MTTP, or are we optimizing for stability? Because you can deploy packages twice an hour with a success rate of 50% and you will have an MTTP of one hour, or you can deploy packages once an hour with a success rate of 100% and you will also have an MTTP of one hour. So where is the point? What is the main strategy?
C
We
want
to
push
the
things
faster
or
we
want
to
to
to
to
to
optimize
for
stability
that
every
single
package
delivered
successfully
I.
Think,
from
my
point
of
view,
since
we
have
this
bottleneck
of
like
a
package
pipeline
like
in
common
in
in
common
kind
of
capacity
of
the
packages
amount
of
packages
that
we
have,
we
cannot
optimize
more
for
for
the
for
mttp
for
more
for,
like
we
cannot
like
sorry,
let
me
rephrase
it.
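The trade-off described above works out numerically as follows (a minimal sketch of the arithmetic, reading MTTP here as the expected gap between successful deployments):

```python
def expected_gap_between_successes(deploy_interval_hours, success_rate):
    # With independent attempts, on average 1 / success_rate attempts are needed
    # per success, so successes land deploy_interval / success_rate apart.
    return deploy_interval_hours / success_rate

# Twice an hour at 50% success: 0.5 h / 0.5 = 1 hour between successes.
print(expected_gap_between_successes(0.5, 0.5))  # 1.0
# Once an hour at 100% success: 1.0 h / 1.0 = 1 hour between successes.
print(expected_gap_between_successes(1.0, 1.0))  # 1.0
```

Both strategies give the same one-hour figure, which is why the headline number alone cannot decide between optimizing for speed and optimizing for stability.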
C
We
all
the
optimizations
that
lead
us
to
deliver
faster
will
not
make
sense,
since
we
are
building
x
amount
of
packages
per
per
day,
so
I
would
say
that
the
we
need
to
first
start
with
optimization
for
both
for
this
bottleneck
and
see
that
we
have
like
a
smooth
flow
of
the
packages
to
the
to
the
to
the
to
the
production,
and
we
don't
have
bad
bottlenecks.
We
don't
pile
the
packages,
we
we
don't
like
pause
them
for
certain
reasons
or
etc,
etc.
C
So
we
we
everything
that
we
built
we
deliver
and
then
on
the
next
Milestone.
We
optimizing
this
pipeline
that
delivers
the
packages
right
we
and,
and
then
we
can
just
fast
like
we
build
faster
and
and
and
deploy
faster
and
from
observability
point
of
view.
So
back
into
observability
point
of
view.
We
need
to
have
from
we.
We
need
to
have
a
visibility
of
this
flow
right
like
where
we
are
how
we
are
like
how
many
packages
we
are
producing,
how
many
packages
we
successfully
delivering.
C
What
is
the
time
for
these
packages
because,
like
we,
cannot
deploy
slower
than
the
interval
between
packages
that
produced
and
then
well
again
like
coming
back
to
Dory
metrics?
That
should
be
like
the
overview
and
again,
like
I,
already
discussed
that
and
the
like
Dora
metrics.
Is
it's
a
good
starting
point
and
then
you
should
be
able
to
in
case
of
something
like
in
case
of
like
you
want
to
optimize
the
pipeline
itself,
and
you
see
you
see
from
the
Dora
metrics
that
something
wrong
with
pipeline.
C
And
also
I
would
also
think
that
we
might
switch
our
kind
of
mind
from
getting
the
metrics
out
of
the
Skies
to
have
like
a
proper
slos
for
deliveries,
because
slos
is
where
we
are.
We
are
react.
C
Triggers
the
action
should
trigger
the
action,
and
then
we
have
like
a
for
example.
I
will
finish
soon
sorry
and
we
have
a
SLO
for
for
the
lead
time
right
so
and
it's
kind
of
12
hours
right
if
I'm
not
mistaken
and
then
also
SLO
for
the
failure
is,
and
we
we
we
measure
how
many
like
a
average
failure
rate
we
have
and
we
set
the
Baseline
at
the
very
beginning.
C
We
set
the
Baseline,
so
we
we
we
have
like
a
average
failure
rate
is
x
amount
of
failures
per
day,
and
then
we
we
set
the
SLO
for
that
rate.
For
for
this
Baseline
and
we
alarm.
If
this
slope
breaks,
if
it's
a
low
breaks,
then
we
we
need
to
do
something
with
that
and
once
we
are
stable
enough
with
this
SLO
and
with
this
Baseline
we
will
lower
down
the
Baseline
and
make
it
more
efficient.
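A minimal sketch of that baseline-then-alarm loop (the numbers and names are invented for illustration, not the team's actual SLO values):

```python
FAILURE_BASELINE_PER_DAY = 4.0  # measured average failure rate, set at the start
SLO_MULTIPLIER = 1.5            # how far above the baseline we tolerate

def slo_breached(failures_today: int) -> bool:
    """Alarm when today's failure count breaks the SLO set on the baseline."""
    return failures_today > FAILURE_BASELINE_PER_DAY * SLO_MULTIPLIER

for day, failures in enumerate([3, 4, 9], start=1):
    if slo_breached(failures):
        print(f"day {day}: SLO breached ({failures} failures), investigate")

# Once stable against this SLO, FAILURE_BASELINE_PER_DAY would be lowered
# to tighten the target, as described above.
```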
C
So
that's
that's
like
what
we
discussed
with
not
much
today
and
also
I
discussed
with
different
team
members,
but
yeah.
We
just
wanted
to
bring
it
to
to
the
team.
Basically.
D
Thank you, Vladimir. I was actually spending a bit of time last week trying to build a model of where we spend time. Well, before that: you touched on many things. The metrics out of the sky, like the number of things that failed and things like that, are there because we are mixing two different efforts here. One was about trying to figure out if there is a reference or a metric for release manager struggle, release manager workload, and those types of things were kind of:
D
Let's
try
to
count
and
aggregate
data
because
we
always
have
this.
How
do
you
feel
about
the
release
management?
Yeah
staff
has
spent
a
lot
of
time
doing
things,
and
so
the
idea
was:
are
there
low-level
metrics?
That
kind
of
represent
this
type
of
things,
which
is
things
are
not
moving
by
themselves?
D
That being said, we had spent a lot of time and effort thinking about whether reducing the number of wasted packages produced was a good move, and, I mean, we will look at those numbers later when we go through the dashboard; we are already counting those things. What I'm thinking right now, and what I was trying to build a model to explain, is that this is not the way forward.
D
So
we
should
waste
more
packages
and
the
reason
why
we
should
waste
more
packages
is
that
we
can
build
more
packages
than
the
one
that
we
can
deploy.
But
if
something
goes
wrong,
it
takes
12
hours
to
get
back
on
track,
so
I'd,
rather
just
waste
packages
constantly
and
when
I'm
ready
to
deploy,
get
the
most
recent
one,
because
this
one
will
kind
of
close
the
gap
and
give
me
and
kind
of
reset
mttp,
because
the
point
of
mttp
is
based
on
merge,
request
level.
So
we're
not
our
key
Matrix
is
not
on.
D
Packages
to
production
is
on
from
merge
to
production,
and
so
the
amount
of
changes
inside
the
package
level
out
the
the
mttp
value
itself,
because
if
you
have
thousands
of
changes-
and
they
are
three
days
old-
they
have
they
yeah.
This
is
three
days
of
mttp
for
thousands
of
changes,
but
if
you
can
keep
building
so
by
the
time
you
actually
deploy
something
you
may
have
those
100
packages,
sorry
100
merge
requests.
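A small sketch of that merge-request-level arithmetic (timestamps invented for illustration): each merge request's time to production runs from its merge to the deploy that ships it, so one stale package full of old changes inflates the average, while continuously built packages keep it low.

```python
from datetime import datetime

def mr_level_mttp_hours(merge_times, deploy_time):
    """MTTP measured per merge request (merge -> production), then averaged."""
    waits = [(deploy_time - merged).total_seconds() / 3600 for merged in merge_times]
    return sum(waits) / len(waits)

deploy = datetime(2023, 3, 9, 12, 0)
stale_mrs = [datetime(2023, 3, 6, 12, 0)] * 1000  # thousands of three-day-old changes
fresh_mrs = [datetime(2023, 3, 9, 10, 0)] * 100   # 100 recently merged changes

print(mr_level_mttp_hours(stale_mrs, deploy))  # 72.0 hours
print(mr_level_mttp_hours(fresh_mrs, deploy))  # 2.0 hours
```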
D
Another problem that I wanted to touch on, because you mentioned it: the thing about optimizing the process itself works very well if you are in control of every aspect of the process, but we are not, and that's the problem, right? We can act directly on, let's say react quickly to, something that we control. Say we broke release-tools.
D
Good
then
we
can
react
figure
out
so
I'll
give
an
example
very
simple
one.
So
we
Friday
we
removed
one
hour
from
our
complete
deployment.
D
We
just
carved
30
minutes
from
Canary
and
from
every
Cannery
stage,
because
we
were
noticing
this
asset
priming
that
took
30
minutes,
30
minutes
for
staging
Canary
30
minutes
for
a
production,
Canary
I
was
suggesting
to
try
to
converge
into
a
single
job
and
eager
just
said:
I
mean
this,
takes
there's
no
reason
why
it
should
take
30
minutes,
so
he
figured
out
that
we
were
using
errorsync
instead
of
CP,
and
so
it
was
rebuilding
the
whole
status
of
the
bucket,
which
it
never
deletes
anything
so
that
same
job,
changing
from
everything
to
CP,
went
from
30
minutes
to
three
minutes.
D
So
basically,
it's
a
one
hour
right.
So
this
is
an
efficiency.
They
should
have
been
tracked
by
this
type
of
tool,
right
so
drill
down
on
every,
and
this
is
actually
how
we
met.
We
figured
this
out
because
we
were
doing
the
Ruby
3
roll
out,
so
we
had
a
lot
of
time
to
look
at
the
Jaeger
metrics
and
we
were
looking
at
all
deployments
just
to
have
a
rough
idea
of
where
we
were
into
the
the
overall
process.
So
we
are
doing
this.
D
But that's the point, right? This is something in our control, and okay, now we have cut this number down. But if packaging takes longer, we can alert, but we can't really react; if QA fails more often, we can alert, but we can't react.
D
So,
and
this
is
the
thing
so
either
mttp
is
a
company
goal
and
not
a
kpi
for
our
team,
or
we
need
to
isolate
the
the
things
that
are
outside
of
our
control,
so
that
we
can
say
is
going
up,
but
it's
going
up
where
we
have
no
control.
So
we
are
alerting
those
who
are
responsible
for
the
section
of
mttp
that
we
can
control
exactly.
F
And I think, just to add: we do have, on our delivery strategy page, we did the deployment SLO the year before last, and there are two pieces that go alongside that: one is a packaging SLO and one is a test SLO. Those are the two bits which, if we got them set up so that they were dependable and, you know, visual, or somewhere that other people can go and see the data, then we can absolutely set a target and go and build expectations with other teams around that stuff.
C
I
just
wanted
to
say
that
pingin
quality
team,
when
something
goes
wrong,
is
also
kind
of
action
right.
F
I
wasn't
thinking
of
that
so
much
as
like,
we
have
an
agreement
with
the
Department
around
once
things
get
to
us.
So
if
things
are
consistently
at
a
certain
number
we
go
and
you
know
we
have
some
agreement
so
not
just
like
a
kind
of
passive
ping
into
a
slack
Channel,
but
actually
some
agreement
with
them
that
you
know.
Maybe
it's
like
next
quarter.
It
becomes
an
okr
or
you
know
something
a
little
bit
bigger
that
we
can
actually
go
partner
with
the
mod.
A
It's
not
talking
about
the
amount
of
failure
right,
if
let's
say
every
couple
of
weeks,
they
start
to
add
10
minutes
in
the
QA
end
to
end,
then
we
are
going
to
end
up
to
it
all
the
benefit
that
maybe
we
had
what
they
were
the
deeper.
Unless
you
did
so,
we
need
to
establish
also
trend
line,
alerting
also
on
on
traces
and
so
on,
and
maybe
even
like
directly
paying
them.
Hey
you,
the
last
10
days.
A
So we added auto-retries because, well, one part of this is about tracking and reducing release management workload, right? So retries are something that is important to understand, how many we have. And so we need to start to understand what makes sense to measure and what we can correlate together, to say: okay, this failure, or this longer pipeline, is longer because of that, and go down to a root cause. Because right now, if your pipeline, the MTTP, is like one hour longer, I'm not sure that anybody is going to go and understand where we spent all this time.
F
And I think a lot of what's been difficult in the past has been planning the work we need to do to reduce failures or workload or things like that, because we don't really have the data to back that up. So I think if we can figure out what questions we want to answer, maybe that makes it easier. I'd say you don't need to know everything, but I know there are certainly things. Say, for example, the deployment SLI: right now, when I go to it and try to use it,
F
The
thing
that
always
strikes
me
is
it's
quite
difficult
to
drill
down,
so
you
go.
Oh
great,
there
was
a
big
spike
last
Tuesday,
but
I
don't
really
know
why,
and
so
it's
very
hard
to
plan
follow-up
work
to
improve
things.
This.
D
End
to
end
instead
of
job
by
job
and
then
aggregate.
So
if
we
derive
the
overall
Time
by
aggregating
at
metrics
level,
instead
of
collecting
the
overall
metrics,
then
we
would
have
drilled
down
for
free,
just
have
to
build
the
aggregation.
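A minimal sketch of that idea (job names and durations invented for illustration): if each job reports its own duration, the end-to-end number is just an aggregation over the job-level metrics, and the per-job breakdown, the drill-down, comes for free.

```python
# Hypothetical per-job duration metrics for one deployment pipeline,
# assuming the stages run sequentially.
job_durations_min = {
    "package-build":     48,
    "staging-canary":    29,
    "qa-smoke":          35,
    "production-canary": 31,
    "production":        55,
}

# The end-to-end time is derived by aggregating the job-level metrics...
total = sum(job_durations_min.values())
print(f"end-to-end: {total} min")

# ...and the drill-down is already there: rank jobs by where the time goes.
for job, minutes in sorted(job_durations_min.items(), key=lambda kv: -kv[1]):
    print(f"{job}: {minutes} min ({minutes / total:.0%} of total)")
```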
F
One thing, I don't know if you've thought about it already, but just to mention as well: I think it is also useful to have these presented somewhere where people outside Delivery can see them. So it would also be good for us to think about whether they end up on the delivery metrics handbook page, or whether...
A
We have an issue already saying that it should be added as acceptance criteria of this effort that we recommend some of them in our handbook as our performance indicators.
A
We want to see them there, but you know, I would like to get, in the order of days, to a clear understanding of what we want to document, why we recommend it, and which kind of value we're bringing by documenting that, right? Because if it's a performance indicator, maybe it's not a key one, but we need to be mindful of the target that we are also setting for each other.
E
How much of this can we push to Victor's team as well, for those kinds of pipelines, and the features that are absent or missing?
F
None at this stage, I would say. I think there probably are things in the product that could help us, but these feel like two very different ends of the problem. I think the big challenge we have when we try to map our experience to the product is that our experience is fully end-to-end, which means you're sort of touching on virtually every stage group, not every one, but a lot of stage groups.
F
So
I
think
we
need
to
be
careful
like
we
should
try
and
map
some
of
that
stuff,
but
probably
like
I,
wouldn't
I
wouldn't
suggest
combining
them.
I.
Think
that
will
add
quite
a
lot
of
complexity.
D
Yeah-
and
we
also
have
the
same
old
problem-
that
everything
in
the
product
is
designed
around
the
project
and
we
don't
have
those
projects
on
Ops
and
we
have
coordinators
aggregations
and
things
like
that
and
nothing
in
the
products
maps
to
this
yeah.
D
So
I
wanted
to
do
something
in
the
release
manager
section,
but
it's
really
in
line
with
what
we
are
discussing
right
now.
So
maybe
I
would
just
quickly
share
this
right
now
and
then
we
continue
because
it's
really
on
topic,
so
I
was
taking
a
look
at
this
this
morning,
whereas
the
E
Okay,
so
I
was
trying
to
take
a
look
at
this
top
section
here
or
Jaeger.
So
this
is
our
Jaeger
interface,
that
is
tracking
coordinated
Pipelines,
and
this
is
really
about
I.
D
This
is
why
I
had
all
this
question
this
morning,
because
I
was
trying
to
filter
out
things
out
of
this,
to
try
to
understand
what
was
happening
and
I
think
that
the
current
week
is
also
special
because
of
the
Ruby
tree
rollout.
So
it's
really
interesting
so
just
to
give
an
explaining
what
I'm
doing
here
so
zoom
bar
come
on
here
we
go
so
not
yet
still
in
the
middle
okay.
D
So
basically
we
are
filtering
all
the
deployment
all
the
deployment
for
one
month,
because
there
is
a
limit
of
retention
of
one
month,
and
this
thing
here
tells
us
that
there
are
82
traces.
D
This
is
only
tracking
a
successful
deployment,
okay
so,
but
the
interesting
part
is
that
first
of
all,
this
is
not
three
but
they're
thin.
So
the
first
thing
that
was
trying
to
figure
out
why
we
had
so
quick
deployment
here,
this
outlier,
but
no
they
are
the
slow
one
and
but
yeah
that's
CSS
problem,
but
yeah.
So
we
can
see
those
clusters
that
are
the
weeks
right
and
basically
everything
stays
below
the
8
20
hour
threshold
I
mean
the
big
bulk
of
everything.
D
The
funny
thing,
though,
is
that,
because
this
is
Trucking
times
end
to
end,
the
delay
in
approval
are
reflected
here,
because
there
is
because
I
was
looking
at
this,
because
I
wanted
to
see
the
the
result
of
removing
that
one
hour.
So
I
was
expecting
to
have
quicker
things
here,
because
we
had
just
a
couple
of
deployments
since
Friday
evening,
but
I
couldn't
find
them,
because,
obviously,
if
I
didn't
promote
something
for
30
minutes
or
an
hour,
I'm
already
eating
all
that
benefit.
D
So it's really interesting. Oh, this one here, I have the thing. So this is really, really interesting, and, give me my mouse back. But it's really hard to drill down. The only thing you can do is just click here, and that's the whole end-to-end pipeline, and it shows, basically, you're looking for those things here, right? So why this staging, I mean, there's a staging graph, we don't really care.
D
You
know
something
like
this,
why
staging
cni
took
8
hour
and
12
minutes,
obviously,
because
we
canceled
this
as
part
of
the
Ruby
3
rollout
and
we
replayed
it
the
morning
after
so
this
is
a
it's
a
very
interesting
pipeline
because
it
shows
say
weird
Behavior.
Right
often,
this
is
not
what
happens
daily,
but
it
kind
of
give
you
hints
right.
So
what
should
I
look
at
if
something
is
completed
off
right,
so
you
have
all
those
three
these
are.
Can
we
come
yeah?
D
We
should
be
able
to
collapse
everything
kind
of
no,
this
one,
almost
there
yeah
and
so
come
on
well
and
looking
at
those
big
bar,
we
kind
of
see
where
things
are
not
working
as
expected
and
yeah.
So
this
is
interesting
what
what
I
wanted
to
really
see
here?
But
it's
not
easy,
is
trying
to
I
mean
it's
not
possible.
Right
now
is
trying
to
filter
out
things
like
pipeline
that
had
failures,
or
things
like
that.
D
So
because
my
question
was:
can
I
have
a
streamline
number
of
how
long
it
really
takes,
but
so
the
answer
is
no
I
can't.
So
this
is
kind
of
goes
into
the
question
about
what
we
would
like
to
to
to
see
right
so,
but
this
is
already
I
think
a
very
valuable
information,
because
I
don't
think
we
had
a
visualization
like
this
one
before
so
it.
It
already
shows
Trends
right
so
I
mean
transfer
one
month,
because
then
we
lose
them
but
yeah.
So
that's
the
thing
that
I
wanted.
A
So this alone is not going to bring us all the value that we wanted, because maybe we see a trend that is happening over seven or eight weeks; something like that is not that obvious if you only have four weeks, right? Okay, do you mind opening an issue for this, and maybe ping me on it?
D
And
then
another
things
that
came
to
my
mind
when
I
was
trying
to
work
with
this.
Is
that?
Because
so
because
of
how
things
are
tracked
together
that
you
had,
the
deployment
pipeline
has
a
only
element,
and
then
everything
is
nested
to
it
is
I
mean
it's
impossible
to
do
the
same
thing
for
only
production,
because
I
was
expecting
to
be
able
to
do
something
like
this.
So
we
have
operations
here,
right
so
I
say.
D
That gives me this, so staging Canary, right? This is the bridge pipeline, so, say, maybe it's just showing me the... no, this is not what I wanted. Yeah, but basically my understanding is that it's showing me every pipeline that includes that job, so I would get the same answer as before. I was really hoping to have the same graph here, the same as before, but only for that specific pipeline. So if there is something we can do on that, we could probably actually split here at the service level, I suppose, right?
D
So
if
my
understanding
of
how
those
things
behave
so
right
now
we
are
pushing
the
metrics
end
to
end,
but
if
we
could
split
it
by
deployment
Target
or
things
like
that,
it
will
kind
of
probably
be
more
actionable
in
in
terms
of
looking
at
things.
I.
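One way that split could look (a hedged sketch using the OpenTelemetry Python SDK, which can export to Jaeger; the span names and the deployment.target attribute are assumptions, not the pipeline's current instrumentation): tagging each stage's span with its deployment target would let the backend filter or group traces per target instead of only end to end.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Console exporter for the sketch; a Jaeger/OTLP exporter would be used for real.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("deployment-pipeline")

# One parent span per coordinator pipeline, one child span per stage, each
# tagged with its deployment target so traces can be filtered or grouped by it.
with tracer.start_as_current_span("coordinator-pipeline"):
    for target in ("staging-canary", "production-canary", "production"):
        with tracer.start_as_current_span(
            f"deploy-{target}", attributes={"deployment.target": target}
        ):
            pass  # the actual deploy work for this stage would run here
```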
C
I think it makes sense to add this to Grafana, actually. And again, you can have the same results if you have deployment metrics per deployment, right? And then you can even, you know, drill down to the logs if you add the Elasticsearch endpoint.
C
Traces
I
would
I
would
see
traces,
better
works
on
job
level,
so
like
a
trace,
the
job
like
what
exactly
it's
doing,
not
on
like
the
whole
pipeline
level,
because
the
whole
pipeline
level
you
can
it's
like
metrics
and
locks,
is
enough,
I
think
and
they
they
bring
more
value.
From
my
point
of
view,.
D
Yeah,
this
is
true,
I'm
more
concerned
about
the
having
I
mean,
probably
with
aggregation
works
as
well
having
an
hour
a
build
High
View
on
things
that
we
don't
control
over
things
that
we
control,
like
so
in
a
production
deployment.
D
I
would
likely
go
down
and
drill
down
on
what's
in
there,
but
probably
in
qai,
not
interested
that
much
right,
so
I
would
rather
to
just
see
QA
as
a
whole,
but
I
I
mean
once
you
have
everything
once
you
have
all
the
aggregation
and
it
will
straight
down
you,
you
can
navigate
the
way
you
like
it.
Thank.
G
So
a
trace
is
important
on
the
pipeline
level
as
well
to
understand
you
know
how
things
run
together
and
when,
when
can
something
be
parallelized,
when
can
something
not
be
parallelized
stuff,
like
that
yeah
lso's
idea
of
separating
out
the
pipeline
Traces
by
by
pipeline
I,
guess
or
by
service,
is
interesting
but
I
suppose
we
will
still
want
to
have
like
a
trace
of
the
whole
thing
or.
D
Yeah
but
I
see
there
are
things
like
the
dependency
graph.
I,
don't
know
this
tool
very
well,
so
I'm
kind
of
thinking
if
there
is
an
option
where
different
elements
in
the
sync
and
still
linked
to
others,
so
that,
because
just
given
a
random
an
example
right.
So
we
have
this
thing
and
everything
is
the
same.
D
I
mean
from
the
from
the
query
perspective
right,
so
you
can
query
on
only
staging
Canary
deployments
and
so
the
the
the
the
numbers
and
the
values
you
get
are
just
if
we
go
back
here
this
thing
here,
so
everything
inside
this
block,
so
my
base
unit
of
reference
is
this
29
minutes
sub
pipeline.
G
Yeah
yeah,
it's
a
much
better
tool
yeah.
So
there
are
industry
tools
which
which
are
much
better
than
Jager,
but
they
have
their
own
problems,
especially
like
with
procurement
and
legal
and
stuff,
like
that,
yeah.
C
Honestly,
I
never
I,
never
seen
someone
using
Jagger
UI
by
itself
like
if
it's,
if
it's
a
traces
that
people
just
you
know,
take
the
traces
and
put
them
into
I.
Don't
know
data
dock
or
honeycomb
or
grafana,
or
something
like
that.
Like
never
use
and
like
no
one
uses
Jagger
UI,
because
it's
just
super
simple
and
it
doesn't
have
a
lot
of
functionality
and
it's
I
I,
don't
know
I
would
I
would
take
those
traces
and
and
and
just
deliver
them
to
grafana.
A
So
we
have
an
issue
for
that
Vladimir
right,
so
I
think
out
of
this
call
today,
I
think
we
need
to
prioritize
things
that
Alessia
today
showed
us,
because
I
mean
it
look
how
to
generate
value
out
of
this
effort
so
include.
If
you
want
to
see
all
the
field
pipeline
as
well,
we
have
an
issue
for
that
to
start
tracking,
also
pipe
manager
that
they
failed,
because
right
now
is
only
a
successful
one.
We
also
already
discussed
about.
A
We
want
to
have
more
granitary
on
the
traces,
also
Group
by
pipeline
or
service,
and
also
we
want
to
see
only
I,
don't
know
staging
Canary
deployments
and
see
all
of
them.
So
there
was
another
issue
so
plus
so
from
today
things
I
think.
Today
we
can
start
to
prioritize
in
the
work
on
the
topic,
those
things
because
I
want
to
get
at
the
end
of
this
quarter.
A
to where we can answer those questions, right? And I'm not an expert in Grafana; I don't know if Grafana can work all this out, grouping things together and everything. So there was an issue for that, look into that. But it's good that we start to work on visualization, because that's how we actually extract value out of all this information.
C
And ditch some of the metrics that can be replaced with high-level aggregations.
D
Can
we
have
a
proof
of
concept
of
this,
a
very
simple
thing,
with
three
nested
pipeline,
even
a
synthetic
one,
because
I
I'm,
what
I'm
thinking
is?
Probably
we
don't
have
a
good
understanding
of
what
graphana
can
do,
but
just
going
to
the
effort
of
instrumenting
deployer,
QA
packaging
and
everything
probably
is
too
much
right.
D
So
if
we
can
just
build
a
fake
job
in
three
different
projects,
something
like
this,
something
that
is
gives
us
the
sense
of
the
complexity
that
we
have
when
we
want
to
drill
down,
and
so
we
see
right
and
say
because
I
I
have
this
feeling.
That
probably
is
the
right
is
the
right
move,
but
maybe
we
can
see
what
we
can
see
yeah.
We
can
see
what
we
can
see
here.
This
is
Turbo
with
grafana
and
and
yeah
and
make
a
decision.
A
Okay,
thank
you
that
was
extremely
useful
to
understand
this
anything
else
on
the
Matrix
topic.
D
So
no,
this
is
the
epic
thing
here
we
go.
This
is
a
very
very,
very
interesting
week,
so
that's
the
Ruby
E3
rollout
window,
which
obviously
we
can
see
from
the
number
of
Target
packages,
the
number
of
pipelines.
Everything
reflects
that
we
were
not
working
as
usual
and
then
by
when
we
re-enabled
this
is
when
I
re-enabled
the
the
also
deploy
pipelines
in
the
morning
we
can
see
it
is
ramping
up
going
to
his
regular
number
of
packages
daily
and
kinds
of
them.
D
The
number
of
things
that
got
promoted
is
kind
of
in
line
with
the
usual
three
four
two
wasted
packages
each
day.
The
second
part
of
the
week
was
a
bit
more
challenging.
Let
me
take
a
look
at
my
notes
as
well.
If
I
remember
not
only,
we
have
this
very
interesting
outage
window
of
PDM,
not
working
for
the
whole
weekend,
plus
even
more,
but
we
add
problems
with
flaky
QA.
We
were
doing
the
the
secure
now.
What
have
we
done?
The
patch
release?
D
We had done a patch release, and so we kind of had a bit of a challenge in the final part of the week, and this is, yeah, clearly all reflected by those things. Those same things are still clearly visible in the lead time for changes, because, I think, I mean, probably with a Family and Friends Day we have seen something similar. No, we never have, because basically we have a single hole here, whereas usually, when we have a Family and Friends Day...
D
Is it, isn't it, yep. And it's interesting to see this in comparison with that, but they are on different pages. So this is still the 6th. On the 6th we only did one deployment, because it was the preparation for the Ruby 3 rollout: Monday morning we just made a package and got it through everything, and then everything was on pause, and that's the deployment, right? Here it has this, then we have a dip before and after, and here is just the representation of the so-called MTTP, right?
D
So
we
had
and
one
data
point,
which
is
the
2.7
days,
which
is
the
weekend
and
then
there's
another
Gray
Line,
because
we
were
not
deploying
up
until
the
end
of
the
eight
when,
because,
basically,
that
single
pipeline
that
included
the
Ruby
3
rollout,
was
canceled
and
delayed
for
more
than
one
day,
because
we
were
just
running
every
single
job
in
isolation
and
with
baking
time
in
between
and
so
yeah.
This
is.
This
is
what
happened?
The
mean
time,
the
media
medium.
D
This
is
the
median
the
median
increased
as
well,
because
this
now
we
have
18
hours
because
of
all
of
this
and
yeah.
That's,
basically
what
we
were
expecting
same
things.
We
see
here
dip
zero
deployment
truck
on
seven
and
then,
by
the
end
of
the
eight.
We
had
this
three
departments
and
we
are
getting
back
to
seven,
eight,
nine,
the
regular
numbers
so
and
this
again
we
the
struggle.