From YouTube: 2021-06-17 - Deployment SLO
A: Here we go, awesome. Well, thank you, Amy, for joining me for another deployment SLO meeting. I think we have a complete and large agenda, so we can start. The first item: the other day you asked me what the plan is to achieve this OKR, and as I mentioned yesterday, I didn't have enough time to think about it then, but I have now, and I put down some broad steps for how it can be accomplished. I don't know if you've already taken a look at it.
B: Yes, I think the broad steps all look good. The order we might want to be flexible on, but maybe once we've gone through this agenda we can quickly check it. The only things I'm wondering about are...
B: Let me just see. It's probably actually about the dashboards, and maybe the order of the steps.
A: Yeah, good point, that's what I was thinking too. I think steps one, two and three are probably sequential; they need to be done in that order. But the remaining steps can be done in parallel, because we are probably going to go back and fine-tune our values in the dashboard, exactly.
B: Yeah, that's right, and I think some of the points you've got further down in the agenda may spring up into their own tasks, and others we can iterate on. So broadly, that's good. Let's check in on that again at the end of this meeting and confirm the order.
A: Got it. So, point number two: we also talked about sketching what a deployment SLO dashboard looks like in our heads, and it was a very interesting exercise. I think you already had time to take a look at mine, yeah?
B: A look at yours, yeah, and the source that we are... yeah.
A: Yeah, sure. Okay, so I based my idea on the release manager dashboard and on some other SLO dashboards from the GitLab dashboards. So basically, the idea I have in my head is to show the number of deployments in the selected period, the average duration of those deployments, and the deployment SLO, which should be some sort of static number in my head. And then we could have two graphs: one that shows how the deployment duration behaves across time, and another one that shows the deployment SLO Apdex in percentiles, which is something we discussed yesterday.
A: It's basically saying: okay, how many of these are violating our objective? In this case, in this imaginary or fake scenario, it would be 99%, right? 99% of the ordinary deployment pipelines should fit within four and a half hours. So in this case we have some that didn't fit, didn't meet our criteria, and that's why they are being plotted right here. And, very broadly...
A: In this case I am just showing the deployment duration from staging into production, but I guess in later iterations we can also plot staging, canary and production.
B: Yeah, et cetera. I kind of imagined you'd have something like this, and then maybe, further down the dashboard, you'd have more or less the same visuals but for different sections of the pipeline, right? Yep. Awesome, that looks good. One question I had for you, about the grouping of this deployment duration data: what are you thinking in terms of grouping these up?
A: Yes. Since we are using histograms, histograms already give us two kinds of metrics. The first one is the count, which is basically counting the events across time, and in this case we only have positive numbers, so the count can be... in this case we can use the count to show how it...
A: No, no, the histogram is going to store the duration in seconds, and it is going to give us that duration in seconds in two measures. The first one is going to be duration_in_seconds_count, which basically records the positive values in seconds, so we can see how the seconds have been behaving across time, which is this graphic over here on the left.
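The series described here follow standard Prometheus histogram semantics. A minimal pure-Python sketch (metric names are illustrative, and the bucket bounds are the ones discussed later in this meeting):

```python
# Sketch of Prometheus histogram semantics: one observation updates
# _count (number of events), _sum (total observed seconds), and every
# cumulative _bucket{le=...} counter whose upper bound is >= the value.

BUCKETS = [14_500, 16_000, 17_500, 19_000, 20_500, float("inf")]

hist = {"count": 0, "sum": 0, "bucket": {le: 0 for le in BUCKETS}}

def observe(duration_seconds):
    hist["count"] += 1               # duration_in_seconds_count
    hist["sum"] += duration_seconds  # duration_in_seconds_sum
    for le in BUCKETS:               # duration_in_seconds_bucket{le=...}
        if duration_seconds <= le:
            hist["bucket"][le] += 1

observe(15_000)   # a ~4.2 h deployment
observe(21_000)   # a ~5.8 h deployment: only lands in the +Inf bucket
# hist["count"] == 2 and hist["sum"] == 36_000
```

The cumulative buckets are what later makes the Apdex cheap to compute: the count of "everything within the target" is read straight off one bucket counter.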
B: Okay, so it's the pipeline duration, but it will show as a trend, right? Exactly.
B: Last week we could see it took a thousand seconds, and then today it's three thousand, whatever those numbers are, right? Exactly. I see, okay. And is that plotted per deployment, is that correct?
A: Yes, it is; we are storing data per deployment. But in this case, since we are visualizing one week, Monday to Friday, we are visualizing all the deployments that occurred in that week. If we want to visualize one day, we should be able to just select that day and see, in a more specific way, which deployments occurred that day and how much time they took.
B: Well, we could play around with it once we've got it, but I think it will be an interesting one to think about. Certainly on the Apdex, I think per deployment won't be useful; it'll just be too spiky. Grouping them up and having a week's worth plotted on there sounds like a good number. And even on the deployment duration...
B: Maybe we have to play around with whether it's actually useful to see that one particular deployment was super long, or whether it's more useful to have a week-wide view: last week, on average, they took this amount of time, or whatever. We could work that out, but yeah.
A: Yeah, given our pace of deployment, that we are roughly deploying three times a day, I think for now it is going to be more useful to visualize the data over a week rather than per day. Per day might not be very significant, but per week it might give us a better idea.
B: I think we should probably work out what percentage are coming through based on this week, or something like that; it's probably not necessarily individual deployments.
B: Oh okay, that all makes sense, great. On these displays, again, I think you'll have to have all of this data on here, which we kind of got through in the agenda. It sounds like you have to know the number of deployments to be able to calculate the Apdex, so that's all good. Average duration is kind of interesting; it's probably not going to be as useful as the Apdex, but it's still good to see.
B: The deployment SLO one is interesting because, although we'll have to set a target, I think the big number we should call out is the actual Apdex: what percentage of pipelines are running below our target, within our threshold. So maybe it's an addition, but that's probably more of the headline number: you know, "98% of pipelines are within our target", or something like that.
B: Yeah, exactly, that's right. Okay, got it. And maybe we literally call that out as "target", to make it really clear that it's not a calculated number; it's a goal, our intended number.
B: Yeah, because I think the Apdex score almost is the deployment SLO. But anyway, that's super clear. Cool, okay, I like that. Yeah, I had a play around with the histogram one, and, if you just click over, I also had a little play around with a heat map, which you can also do in Grafana.
B: This one was a very interesting one. There's an interesting article I just shared. I don't think it's necessarily better, actually; I think your Apdex one is going to be more useful. And one of the downsides of the heat map one, and I've chatted with Andrew about this: if you're colorblind, you don't get great visuals.
B: It's a bit of a downside, but this might be an interesting one to add later; we could do it as a future iteration if it turns out to be helpful. But no, I think your Apdex visualization is probably a good place to start.
A: So this is basically what we are doing here: we are sending positive numeric values, the duration in seconds, into a histogram. That already offers us this kind of metric: it gives us deployment_in_seconds_sum, which we can use as a counter to plot the deployment duration graph.
B: One thing on buckets... actually, let's go back to the agenda, because for the one you've sketched out, let's make sure we are plotting the path between the stuff you merged in yesterday and this deployment SLO Apdex chart that you've got visualized there, because those are the two things: we've got where we are now with what you've just merged.
A: Okay, so should we go to the questions?
B: Yeah, let's do that, because your explanation of how it works is absolutely great. Happy for you to keep tweaking away with Robert and get that going; it looks good so far, though.
A: Okay, yeah, awesome. I just discovered one bug this morning. I don't know if you saw the message in the delivery channel: there was something for which we didn't have a value, and it exploded.
A: Well, I just enabled it, so the future packages shouldn't have that problem, but I'm also going to fix that detail, because we cannot just fail loudly; we need to take care of it.
B: Okay, I'm slightly less worried about those sorts of things, because I know that as we're building stuff in, we'll find things and fix them; that stuff's fine. I just want to make sure we have a good path, so that we end up with some sort of Apdex visual like that. So, awesome.
A: Cool, okay. So, how does it work? Very briefly: we added jobs into the coordinated pipeline, I don't know if you already saw it, one job before staging and then another one after production. Basically those jobs register the time, compute the difference between the two, and send that difference to Prometheus, which allows us to visualize it in Thanos. I learned a lot about that last week, which was quite fun. Okay, great, yeah. And the histogram is giving us the sum of the observed values.
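A rough sketch of what those two jobs do (function names are assumptions, and only the timing arithmetic is modeled; the real jobs push the result to Prometheus, where Thanos picks it up):

```python
import time

def record_start() -> float:
    # Job that runs just before the staging deploy: capture a timestamp.
    return time.time()

def deployment_duration_seconds(started_at: float) -> float:
    # Job that runs after the production deploy: the elapsed time is the
    # value observed into the duration histogram and sent to Prometheus.
    return time.time() - started_at

start = record_start()
# ... staging, canary and production deploys happen in between ...
duration = deployment_duration_seconds(start)
assert duration >= 0
```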
A: It is also wrapping the duration into different buckets, which is fun to see once you're playing with it. And I think it is a nice metric to use for the Apdex. And of course we are skipping the packaging time; we are not using the packaging time because that alone adds about one hour.
A: Okay, awesome, so moving on to the questions and pending items. The buckets: these ones are special, and they are basically the heart of the histogram, from what I can tell. So I had a little chat with Robert about how we should define the buckets, because, well, we know how long a production pipeline lasts, but I didn't really want to define them off the top of my head. So what I did was analyze the data.
A: Based on that, I defined linear buckets that range from fourteen thousand five hundred all the way to twenty thousand five hundred, and they have a width of, I think, one thousand five hundred.
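Those boundaries can be generated the way Prometheus client libraries' linear-bucket helpers do; a quick sketch:

```python
def linear_buckets(start, width, count):
    # Upper bounds spaced `width` apart, starting at `start`; Prometheus
    # also adds an implicit +Inf bucket on top of these.
    return [start + i * width for i in range(count)]

# The range described here: 14,500 s to 20,500 s in 1,500 s steps.
print(linear_buckets(14_500, 1_500, 5))
# → [14500, 16000, 17500, 19000, 20500]  (roughly 4.0 h to 5.7 h)
```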
B: Sorry, just so I don't have to do loads of maths in my head quickly: roughly how big are the buckets in hours?
B: I guess I don't need too many specifics; I'm just thinking of a couple of things on buckets. Andrew gave me a great tip: we need a bucket that exactly matches our target. The way the Apdex will work is that numbers either exactly match your bucket boundary, or they're greater, or they're less, right?
B: So Andrew was saying that in order to make this work for an Apdex, what we'll need to do is set our buckets so that, say, for example, if we said our target is five hours, we'll need a bucket that is five hours. Then everything will be mapped against it, and we'd have some buckets that are bigger and some that are smaller: more time and less time. Okay, for the bucket...
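The reason the target needs its own bucket boundary is that cumulative bucket counts can only be read at their exact `le` bounds. A small sketch of the resulting ratio (the counts below are made up):

```python
def apdex(bucket_counts, target):
    # bucket_counts maps each cumulative upper bound `le` to the number
    # of observations <= le; the +Inf bucket therefore holds the total.
    # With a bucket boundary exactly at the target, the "share of
    # pipelines within target" is a simple ratio of two counters.
    return bucket_counts[target] / bucket_counts[float("inf")]

# Hypothetical week: 90 of 100 pipelines finished within 5 h (18,000 s).
counts = {10_800: 40, 18_000: 90, 21_600: 97, float("inf"): 100}
print(apdex(counts, 18_000))  # → 0.9
```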
B: I think it's exact. We should double-check with Andrew, but in my mind I'm thinking, based on your numbers, it looks like most of our deployments take between four and five hours.
B: So if we say today, just to get this started, and we can change all the numbers later, let's say we want all of our deployments to complete within five hours.
B: We could have a bucket that's exactly five hours. We could perhaps have one that's less than five hours, which is all good. We could even have "less than three hours" for the super speedy future. And then we could have five to six hours, and then whatever's terrible, like more than six, or something like that, and then have something that uses the Apdex.
B: Yeah, I guess what we're trying to catch here is that we should set a kind of broadish range. I think right now it'd be totally fine: any deployment taking less than six hours is fine; we know it's kind of normal. A deployment that took eight hours or something, we'd want to alert on.
B: Okay, was that how you were thinking of buckets? What sort of ranges, what sort of granularity were you thinking of?
A: I was thinking of something different, but yours makes more sense. I basically graphed the minimum and the maximum and divided that range into some number of buckets, five, and those were the splits. So it goes from 14,500 all the way to 16,000 seconds, which I think is four hours, five hours.
B: 70% of them are pretty average and take, say, around 5 hours, but, oh hey, look, 20% of them take significantly more than that, and that's kind of why I think it'll be interesting to pull an Apdex from this. So the buckets don't necessarily have to map to what we see. I think your analysis is really useful for us just to be able to pick a starting point. I think we should just go with our target: it's five hours, right? And five hours feels pretty quick to me.
A: Yeah, because we are using bridge jobs now. Oh, of course we are, yeah. I'm...
A: If you analyze the deployments from before bridge jobs, the numbers are going to be different, right?
B: Yeah, that makes sense. Let's just say, okay, for now five hours is the number. It might be the wrong number, and we absolutely can change it, but at least we have a thing: we set that, and that can be the bucket that things are equal to, and then we can set some quicker buckets and some slower buckets.
B: I don't think it matters; I was going to say, if it's a reasonably small change... So Andrew had a great tip: he recommended just getting the data into Grafana as quickly as possible. He was saying that within Grafana it's pretty easy to tweak things in the UI, kind of like you can do with this Excel spreadsheet. He was saying you'll actually be able to play around with it, and once we've worked out what it should look like, we can codify it.
B: So that sounds like quite a good approach. We'll need to define the buckets beforehand, and you obviously need to be pushing the data, but if we have those two things, then we should maybe just try and get to Grafana.
B: Awesome, yeah, that sounds great. We can absolutely tweak that stuff, but once we have those buckets, and once we have data going into them, then we can start. It'll be really interesting, I think, to see what percentage of deployments falls in and out of those buckets.
A: There must be a way to delete the old information, because it's going to use the old buckets, and we don't want to pollute the new information with the old buckets. But I'm probably missing something; there must be a way to just remove that and replace it with the new buckets, because we are going to use the same metrics.
A: I mean, the infrastructure, the code base, is flexible enough to implement failed pipelines, but that is going to take some more time.
B: Andrew was talking about where we actually track the percentage of MRs hitting production within certain times, and then that would break down, and we'd have the deployment SLO in there, which looks specifically at pipeline duration. But maybe that's more part of the MTTP work, where we actually go: okay, great, so we know a pipeline is five hours and we have all this data, but how many other pipelines are failing? That doesn't contribute to the Apdex, right? It's not... no, it doesn't.
B: Because you will have some of the failures, though, won't you? You'll have anything that we do a manual retry on, which I think we've mentioned before, right? Yeah.
B: ...recorded. So we're only going to miss the failures that basically result in, I guess, an MR to fix, because, you know, that starts a new pipeline. Yeah.
B: I think that's totally fine. It's interesting data, but I don't think we have to have it for this version of the deployment SLI. Okay, got it, yeah. Let's come back to that one.
A: Yeah, awesome. So, retries: when an environment fails, whether it is staging or canary, and it is retried, that time is going to be implicitly recorded, because we are recording from the start to the end. But what we are not going to be able to do is map that duration to a specific pipeline that was retried at some point, which is kind of a limitation there. But I also don't think it matters that much.
A: Yeah, and probably when we are analyzing which coordinated pipelines violated our Apdex SLO, we can analyze whether it was because they were actually retried, and then we can get some sort of use case for that.
B: Exactly, yeah, I think that's totally right. Cool, that sounds good enough. I guess I sort of put this down, but not fully: somewhere, some place, we should make sure we have documented what we're actually measuring, and make it clear that we're implicitly including retries, and that it's picking up some baking time but not other things, just so that in six months' time, when we're looking at this data, it's really easy to work out what is and isn't included. Yep.
B: I think in our docs Robert added something about the release manager dashboard, with a little bit of an overview of how it works and how to work with it. So I'm sort of thinking of something similar for the deployment SLO. Okay, we can add that.
A: Cool, okay, yep. So the next question is: should we keep track of the number of deployments? And I think we already answered that by looking at the dashboard: we do need it to help us calculate the Apdex. And we still need some sort of investigation, or perhaps we don't.
B: I think you could also have an additional counter. So Andrew left a comment; let me see if I can actually find it. Here we go. So, in amongst this comment...
B: ...that Andrew left, he's talking about setting up two counters. We don't have to do it exactly like that, but that should be the format. He actually has some interesting stuff in there, because he talks about rollbacks, which I've added on later. But from the looks of it, as you go through, every time you get your completed pipeline number, it sounds like you could also increment a separate deployment counter and use that one.
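A minimal sketch of that suggestion (the names are assumptions): wherever the pipeline observes a completed duration, it also bumps a plain counter, so the dashboard gets a deployment count directly instead of deriving it from the histogram:

```python
deployments_total = 0      # the separate counter suggested here
durations_observed = []    # stand-in for the duration histogram

def record_completed_pipeline(duration_seconds):
    # In the real setup these two lines would be histogram.observe(...)
    # and counter.inc() on two distinct Prometheus metrics.
    global deployments_total
    durations_observed.append(duration_seconds)
    deployments_total += 1

record_completed_pipeline(17_000)
record_completed_pipeline(18_500)
print(deployments_total)  # → 2
```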
B: Sure, that makes sense. Okay, so let's get this in as a...
A: Yeah, and the part that I think needs some investigation is to actually ensure the histogram gives us what we need to count the number of deployments, and I think it does. But again, I'm not savvy in this matter, so I probably need to see the data and play with the dashboard.
B: That makes sense, okay, yeah, nice. And that all ties very much into my point five, and I think I'm going to completely change my comment here.
A: Okay, so the Thanos dashboard. Pushing this data into Prometheus via the push gateway allows us to visualize it in Thanos. It's not the prettiest dashboard, but it is quite useful. So I think, for this issue rather than this iteration, it is probably enough.
B: So initially I agreed, and I was like, yeah, sure, that's fine. But actually I'm not so sure now, based on talking to Andrew and him saying that if we could just get things through to Grafana...
B: ...we could do all of the config in the UI and get it looking like we need it. So actually I think I agree that his approach sounds better, and we should probably just not worry too much about the extra stuff and try to get what we have already in. So I'm just going to have another look at this issue that we've got; for example, the "measure deployment durations" one.
B: I wonder if we should review what we want to include in here. I think this comes back to being iterative: maybe we worry less about this task only being completed when we have a dashboard with all the numbers and everything ready to go. Maybe we frame this more as getting the data to Grafana with the buckets, and then we can close out the issue.
A: From what I understand, we will need to do something with Jsonnet, which I was visualizing as another issue, but I am not really sure how it is connected.
A: I think we need to implement something, yeah. Okay, I mean, it can be something very raw. It can be something like: okay, I'm just going to connect Thanos with Grafana and try to see the data there, and that would be our target for this issue. And then we can have another issue about making this data significant and cleaning it up.
B: Yeah, exactly, I think that's exactly it. Let's make this "measure deployment duration" the task where we do all the work we need to get the data to Grafana with the buckets, just the pipeline duration; I'm not so worried about deployment counts. Then we stop it at that point and have a follow-up issue, which is: right, now how do we make this dashboard useful and pretty and add in all the extra stuff?
B: Yeah, I think you'll need the buckets, but if you have the buckets and some data coming into Grafana, then yeah, it should be possible in the UI.
B: Exactly, yeah, okay. So then I think what we can do for this issue is move this to a separate issue, the new delivery dashboard thing. We should put in a dashboard, but it doesn't have to be the final version, right?
B: So if you want to have a separate issue which is "make the official dashboard", that would make sense, and this bit could just be implied, because we'll just leave it running for whatever we need. And then, I suppose, what we gain is that you'll have somewhere with all your data where you can actually model it out, like you did in Excel, but you'll be able to do that in Grafana. Yep. Okay. Does that make sense?
B: Does that sound like a useful iteration?
B: ...sense. Cool, okay, yeah. I agree; that sounds like a nicer, almost thinner horizontal slice of the project, right? Because at the end of it we will have something that shows the Apdex, and then we can add the additional depth to it.
A: Okay, so probably the next point is breaking down the deployment time and timing, which could be for future iterations. What we want is to have an Apdex for the deployment duration, and then, once we have that and once we have our dashboard all polished, we can start breaking it out by environment and then probably by component.
B: So, in terms of a definition, not official wording, it would be saying something like... so, the deployment SLO will be showing... say, for example... I actually just noted this down. So that would be something like the percentage of...
B: So the deployment SLO will be the percentage, the percentile, of deployment pipelines above a target; you know, that's kind of what we'd be conveying, right? Okay, and then the target is basically set as a number to show the elapsed time. Okay, I think I agree with showing both of them. In terms of the definition itself, and also just for showing other people, I think the percentile will end up being more valuable.
B: Just because the problem with averages is that outliers really affect them, whereas if we're talking about percentiles, we can say, you know, 90% of deployment pipelines are within our targets. It doesn't really matter if the other 10% are incredibly quick or incredibly slow; you have a much better view of what the majority looks like. So we could tweak all that.
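A tiny worked example of that point, with made-up durations in hours:

```python
# Nine ordinary ~5 h deployments plus one pathological outlier.
durations = [4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.0, 5.1, 5.2, 30.0]

mean = sum(durations) / len(durations)
p90 = sorted(durations)[int(len(durations) * 0.9) - 1]  # crude 90th percentile

print(round(mean, 2))  # → 7.38: the single outlier drags the average up
print(p90)             # → 5.2: 90% of deployments finished within 5.2 h
```

The average suggests deployments take over seven hours, while the percentile shows the typical experience: nine out of ten finished within 5.2 hours.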
B: I just wanted to get it straight in my head, but yeah, I agree with having both of them showing. But I think, in terms of the official deployment SLO, the percentile will be the way to go. Yeah.
B: We're going to make sure... so, exactly, that's right. Okay, I think you're totally right. Yeah, okay, awesome. That ties right back into this project plan; I'm just having a quick scan up here. So it's like, okay, for example...
B: You know, documenting this comes right back to what you just said. I think before we have the official definition fully... we could share it, but I think it will change as we see the data, right?
A: Okay, so the purpose of step number three, which is the one we are on right now, is to measure the deployment duration and send that information to Grafana in a basic shape, like the count and the buckets.
B: So I think it's almost one of those things where three, four and five will become really iterative, because there are little mini-steps in between: it's going to be, kind of, play with the data in the Grafana UI; you know, we might want to consider changing buckets.
B: There's going to be a lot of iterative stuff here, and then it's going to be, I guess, decide on a dashboard and codify it, and once we're there, we'll probably know at that point whether five hours is a sensible number or not. Step four maybe almost doesn't have that much work attached to it; it's more that there'll be a moment in time where we can be like, okay...
B: ...this looks right for now, and we can just set it and go. And then even adding this dashboard in, you know, I really think this is going to have an iteration-one, iteration-two journey.
B: It's missing all this stuff, but hey, here's version two and we've added in this, and here's version three, where we've added in all the other stuff. Okay, that makes sense, because at some point as well you're going to have to find a way to fit in...
B: You know, it's almost like somewhere in iteration two we add in the deployment count display. There'll be extra iterations, I think, that come in. Any others we said? Maybe, for example, the average...
B: These are just examples, but I don't think it necessarily has to be the case that we couldn't have the first iteration of our dashboard with literally just that single Apdex histogram. I don't think you have to have all five or six pieces; we can iteratively add those in as we go. Yep.
A: Yeah. First I need to update the "measure deployment duration" task with what we talked about just now, the personalized buckets. I...
A: Well, once that one is in Grafana, we can play with it.
A: ...and try to build the first iteration of the dashboard.
B: ...makes sense, because it should be possible, once you have all the data in there, to construct the whole dashboard via the Grafana UI.
A: So, when it comes to the assignment of each of these steps: well, all of them are basically me.
B: Robert could do the... I mean, this one, this "personalized buckets" one, this could be an issue, if you wanted to open that.
B: You're totally right, okay, yeah. So I think once you've got everything in Grafana, we could spend some time, or you can pair with Andrew, to play around and get it looking how you want it. And then I think that would be a really good point for us to say: okay, here are the issues, here's the extra data we need, or here's what we need to change, to actually get someone to be able to code out that Grafana dashboard.
B: Does that make sense? Yeah? That makes sense. Okay, well, I'll tell you what: Robert is out tomorrow, so how about you get started on this, like the first two tasks, right? I mean, you can definitely do both of those; that's straightforward.
B: For the "investigate sending data" one, it might be worth grabbing time with Robert later today to actually ask him for his thoughts, so you have something to get started with tomorrow whilst he's out. And then maybe you can pair up with him on Monday, if needed, and work through this stuff, because he's been through this before. Andrew also knows a lot about this, so he might be able to help you as well if you get stuck.
B: Exactly, okay. So hopefully it won't take too much time once we've got it in Grafana; it's not infinite amounts of data, right? So we can work out what it looks like and then see what the next steps are from there. Got it, okay, makes...
B: ...sense. We have next steps. Awesome, that's great stuff. Thanks so much for putting this in, and yeah, let's see how we go.