From YouTube: 2021-06-29 - Deployment SLO
A: Awesome. So thanks again for joining me. The purpose of this meeting is to discuss what we are doing right now for the deployment SLO and what the next steps are going to be. Just to summarize: we discovered that the metrics are not being persisted. We discovered that for the counter, and we did a workaround which sort of works, but for the histogram a workaround is not feasible, because histograms are a very complicated type of metric.
A: So with that, we decided to build our own specialized release-tools pushgateway that keeps the metrics in memory, and we really hope that it is generic enough to handle all the metrics that we are hoping to have, basically counters and histograms, because we don't have gauges at the moment.
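For reference, a minimal sketch of what pushing a counter and a histogram to such a gateway could look like with the standard prometheus_client Python library; the metric names, buckets, and gateway address are illustrative assumptions rather than the team's actual setup.

```python
# Hedged sketch: pushing a deployment counter and a duration histogram
# to a Prometheus pushgateway. Names, buckets, and the gateway address
# are illustrative assumptions.
from prometheus_client import CollectorRegistry, Counter, Histogram, push_to_gateway

registry = CollectorRegistry()

deployments = Counter(
    "deployment_count", "Number of completed deployments",
    registry=registry,
)
duration = Histogram(
    "deployment_duration_seconds", "End-to-end deployment duration",
    buckets=(3600, 7200, 10800, 16200, 21600),  # 1h..6h; 16200s = 4.5h
    registry=registry,
)

deployments.inc()        # one more deployment finished
duration.observe(15300)  # this one took 4h15m

# Replaces this job's metrics on the gateway; the gateway keeps them
# in memory only, which is the persistence problem discussed here.
push_to_gateway("localhost:9091", job="release-tools", registry=registry)
```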
A: Well, there are still some questions around this, and the first one was regarding the metric retention and the resets. From what I understood yesterday, if we have a restart of the pushgateway daemon or whatever, the local storage of that program will be wiped out, but the local storage of Prometheus will not, because Prometheus has its own storage. Is that correct?
B: Yes, this is correct. And also on point two: yes, Prometheus will understand that there was a reset, so if you are just charting the value of the metric, you see the drop.
B: So basically, if this is supposed to be increment-only, it should only ever increment, and then all of a sudden it just wraps around: it comes back to something around zero, which is below the previous value.
B: That is enough to figure out that there was an application restart and that it has to keep counting. The reason why this doesn't work for the thing that we are doing, if we want to use the pushgateway, is that in the generic implementation, if the value that you are pushing is the same as it was before the reset, then Prometheus cannot understand that it changed.
B: Yeah, it would think that it is the same value that it had before, because we only add one on each round: we count the number of deployments, and it's one because we push it at the end. There's no way it can understand what happened in between; it just thinks that this is the same deployment that it already has in its histograms.
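To make that concrete, here is a small sketch of the reset handling B describes. PromQL's increase() and rate() apply this logic internally; the Python below only mirrors it for illustration, including the pushgateway failure case where every push lands back on the same value.

```python
# Hedged sketch of Prometheus-style counter reset detection.
# PromQL's increase()/rate() apply this logic; this is an illustration only.
def total_increase(samples: list[float]) -> float:
    """Sum a counter's real growth, compensating for resets."""
    total = 0.0
    for prev, curr in zip(samples, samples[1:]):
        if curr >= prev:
            total += curr - prev  # normal monotonic growth
        else:
            total += curr         # drop detected: counter restarted near zero
    return total

# Scraped directly from a long-running app, a restart shows up as a drop:
print(total_increase([5, 6, 7, 1, 2]))  # -> 4.0

# The pushgateway failure case: each run pushes "1" as its final value,
# so consecutive scrapes read 1, 1, 1, ... and the growth looks like zero.
print(total_increase([1, 1, 1, 1]))     # -> 0.0, deployments are lost
```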
A: Yeah, got it, thanks. So another question, and I think this is the same question that Amy raised, the one Amy wrote on point three: how long is the data retention policy in Prometheus? Do we know?
C: We could ask some of the SREs, or post in the infra lounge, what we currently have set and see what that looks like. Okay.
B: No, I mean, Apdex in terms of SLO only makes sense in the now. You just lose the information, but you don't need that information. So let me rephrase: basically, Apdex compares the number of satisfactory events to the total, yeah.
B: The point is that if you have a 30-day retention policy, you can see the Apdex at any point in time from now back to 30 days ago, but not before. The fact that you're just trashing away old values doesn't change your ability to calculate the Apdex at the current time. Let's try to figure it out.
C: Okay, so we're trying to calculate, for the deployment that just happened, what percentage of our deployments are faster than whatever our bucket is defined as, right? So it's less about whether this one particular deployment that just happened was fast or slow; we already know that from Slack. This is more about what percentage of our deployments are running at certain times, yeah.
C: Right, so that's kind of what we're saying. The question is: how much data does Prometheus have? Could we do this per month? I mean, feel free to disagree here, but I don't think less than a month is a particularly useful window, given the number of deployments we do and the fact that we are tracking MTTP on a monthly basis.
B: I was thinking more about the day as the unit, because it's more actionable. We are used to thinking about Sisense, which fetches data with a delay, but the value that we get here is that we have real-time information. I mean, we can have daily information, so I think we should go with daily, so we can see how this changes over the course of time. So basically, what I'm thinking is that we can...
A: We have the real-time Apdex that gives us the metrics for right now, for the current time, but we also sort of need some analysis of the past metrics, because based on that we might need to take decisions about what the SLOs are going to be, at least for regular migrations or for post-deployment migrations. We kind of need past data to analyze those, because we cannot decide on just today's data.
C: Someone should summarize this and put it in an issue, because I think we're talking about slightly different things. I don't mind if this is an Apdex or not an Apdex. I mean, I think we should just make sure that we're gathering and showing the data that will allow us to make the right decisions, right? So I think what's interesting is seeing pipelines over time. I think robot had a really clear visualization of them getting slower over time. So how can we visualize?
C: How can we keep track of that stuff? Plus, as you talked a bit about, Myra, I suspect certain pipelines at different times of day may behave differently, or, like we know, post-deployment migrations affect them. I think it would be interesting to see the patterns of pipelines, kind of like we do with MTTP. I'm very much less concerned with whether this last pipeline was slow, or the last two were slow; I think we can absolutely see that. It's much more the slightly longer term.
B: Okay, so regarding the retention policy, I was just checking in the meantime. It looks like it is a global configuration for the whole Prometheus server, so everything that is scraped by a specific server will follow the retention policy. It can be expressed in a number of days or an amount of storage; the default is seven days. I don't know how we are handling this.
B: So if this is global and we want to have longer retention, we probably need a Prometheus server only for our own metrics.
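As context for that investigation: retention is set per server with the --storage.tsdb.retention.time or --storage.tsdb.retention.size flags, and a running server reports its flags over the HTTP API. A small sketch of checking it, with the server URL as an illustrative assumption:

```python
# Hedged sketch: reading the retention configuration of a running
# Prometheus server via its HTTP API. The URL is an illustrative assumption.
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"

# /api/v1/status/flags returns the flags the server was started with,
# including storage.tsdb.retention.time and storage.tsdb.retention.size.
flags = requests.get(f"{PROMETHEUS}/api/v1/status/flags", timeout=10).json()["data"]

print("time-based retention:", flags.get("storage.tsdb.retention.time"))
print("size-based retention:", flags.get("storage.tsdb.retention.size"))
```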
C: Or is Prometheus even the right tool, right? That's the other thing here: we should make sure, because we already use Sisense, we use other tools. I mean, there are definitely disadvantages to each one, but I don't think we're necessarily going to use this as a normal Apdex. I don't think we would have an alert which fires to say "hey, the current pipeline is running an hour slower", right? I would think we almost always know that already. So based on that, I don't think it has to be this way, but I'm completely open to what options we have and where the best place to store this is.
C: These will be the things we'll be able to see, and these will be the decisions we can make. Or we have other options, like "here's what happens in Sisense". A big disadvantage of Sisense is that if we don't have the data in there already, it'll take us a super long time to get it there. That's a major disadvantage, but it could be more of a longer-term thing, and there may be something short-term we could do: Prometheus now, with Sisense as the long term, or...
A: So this fetches from the database, so...
A: I'm not sure if there are other options, but for MTTP it is reading the...
C: It comes in via other ways, though, because we have a lot of data in Sisense, and it comes from all kinds of different ways. We can ask: there is a whole data team, although I'm not sure the actual name is "data efficiency", I'm not sure I could name that team. Yeah, there's a whole team that does this. We can certainly ask them what data is already there, and if we wanted additional stuff, we could ask them how we could go about getting it.
C: Right, that's why it takes long: it doesn't get prioritized very often, I don't think, to actually do that stuff. We can certainly make a request and work with them to see how we can get it done. At this point, I think, let's not worry too much about how we would do it, but let's make sure we actually decide what we want to do, because I think that's...
C: I'm not convinced right now. From this conversation, I'm not super convinced on Prometheus. I don't know if a week will give us enough; if we're limited to a week, I'm not sure how useful that will be.
C: So we need to investigate first: what is the value? Okay, yeah, let's find that out. I think a month is probably the minimum. I think three months would be great; that would probably be the maximum we would necessarily need, but a month would be the minimum. If we could just get one month, that would be fine.
A: Okay, so based on that we can decide. I mean, I think we can use Prometheus to have the Apdex (that part doesn't change, because we need to calculate the Apdex), but perhaps we need another tool to actually demonstrate the different patterns for the coordinated pipelines.
C: Yeah, I think so, and it might be that we do use a couple. I think it'd be worth us trying to think about: if we put the data in one, assuming we can get the data in, what would that data show us? What decisions would we be able to make based on that?
B: I think it's the global one, because if I remember, I was looking at your dashboards, and the Prometheus datasource was set to "global", whatever that thing means. I'm just trying to extend the window on some of our dashboards on Grafana; not ours as a company,
B: but ours as the delivery team, for instance the web overview. I was able to get data from 90 days back; now I'm trying to get a sixth month, but I'm afraid it just would not be able to calculate all this information at the same time, so it will probably just kill my request. But, okay, six months it is: it was able to show me six months of data. Let me just share my screen. That's six months, good.
B: And as we can see, we have stuff. I mean, looking at the amount of information that we have collected over here compared to the general one, probably many things evolved in this period of time, so maybe it's not that accurate, maybe we're missing some data points. I think these are feature flag deployments, and maybe things that we were not checking earlier on. But the web Apdex is here; we can see it. Interesting.
B: Awesome. In any case, the thing that I wanted to point out is that Prometheus is designed around having multiple Prometheus servers.
B: Each one sits closer to the things that you want to scrape, and then you have some kind of hierarchy of information: servers scraping things from other servers, which scrape all the servers, so you have the same things in many places. So what I want to say is that if we end up needing a specific retention policy that is incompatible with, say, the company default,
B: we can always have our own Prometheus server that scrapes and collects all the information for, let's say, maybe 10 years, right? I mean, the point is that each server will scrape several things, and all of them have the same retention policy. Well, it was able to calculate one year of Apdex on the web overview, so the data is there. Okay.
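The hierarchy described here is Prometheus federation: one server scrapes selected series from another through its /federate endpoint. A minimal sketch of what a dedicated long-retention server would be doing on each scrape, with the upstream URL and the series selector as illustrative assumptions:

```python
# Hedged sketch: pulling selected series from another Prometheus server
# through its /federate endpoint, the mechanism behind the hierarchy above.
# The upstream URL and the match[] selector are illustrative assumptions.
import requests

UPSTREAM = "http://prometheus-main.example.internal:9090"

# Ask the upstream server only for our own deployment metrics; a dedicated
# long-retention server would do this on its scrape schedule.
resp = requests.get(
    f"{UPSTREAM}/federate",
    params={"match[]": '{__name__=~"deployment_.*"}'},
    timeout=10,
)
print(resp.text)  # Prometheus text exposition format, ready to be re-scraped
```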
C: Cool. Could we also just cover... sorry, it's not on the agenda, but I can add it onto the agenda; actually, let me just do that. I just want to talk a little bit about Apdex, because I'm not totally sure, and I just want to make sure we're all kind of thinking about the same thing on what that will actually be like. But it's...
C: Do you want to go through any of the other points? In fact, just to unblock this stuff, Myra: I think if we get to the point where the retention policy is more than a month, let's just go ahead and keep going with this stuff. I'm happy for Sisense to be a long-term solution if it's helpful; I don't think we'll get data in there super quickly.
C: So if we can get a month of this stuff, then let's go ahead and get that. On the technology side, what I was going to say to you, Myra, is: as DRI, make sure you're happy with that one. If it's something you can understand and edit, then I'm fine with it. But make sure you're happy with that.
C: Awesome, okay. And I think we've covered all of three, so I'll just delete this. I think all of my points are covered in point three, so it's all good on Apdex. So let's just... could you just talk us through what you imagine the Apdex chart will look like, and kind of how we'll interpret it, so we can check that it is what you were thinking about, as we implement this stuff?
A: Yep, just one second.
A: I'm going to share my screen; it might be easier. Okay, so just for context: this is the rough sketch that Amy and I did a couple of weeks ago showing what the end result of this deployment SLO metric should be and what we want. The metric that we basically want is to show the deployment duration, to see what the patterns are in these coordinated pipelines.
A: It might last longer for some reason. And then there is this one, the deployment SLO Apdex. I think the definition of an Apdex is the number of occurrences that complete successfully, divided by the total number of occurrences, and that should give us some kind of percentage, which is what we are looking for right now. Based on some data that we analyzed, it seems that most of the deployments took about four hours and a half, so we decided, okay, we are going to take a target of four hours and a half, maybe five hours, and decide that at least 95 percent of our deployments should fit within that bucket of less than four and a half hours.
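On top of the histogram discussed earlier, that score falls out directly: with a bucket boundary at the 4.5-hour target, the Apdex is the count under that bucket divided by the total count. A tiny worked sketch, with all numbers illustrative:

```python
# Hedged sketch: the deployment Apdex computed from histogram buckets.
# With a bucket boundary at the 4.5h target (16200s), the score is simply
# "deployments under target / all deployments". Numbers are illustrative.
under_target = 57  # cumulative count in the le="16200" bucket
total = 60         # the histogram's _count: all observed deployments

apdex = under_target / total
print(f"Apdex: {apdex:.2%}")         # -> Apdex: 95.00%
print("within SLO:", apdex >= 0.95)  # -> within SLO: True
```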
A: So I think what we want at the end is to have this graph, which basically says: okay, in this case I put 99, but it can be 95; that can be adjusted in future iterations. And then we can see how it is behaving across time.
B: Yeah, I have a precision about the Apdex definition and then a question. The Apdex definition is exactly what you said, but there's a missing bit, which is "over a given period of time". Usually you calculate it over the last five minutes, the last 10 minutes; I mean, that makes no sense for deployments. So in terms of deployments, we could say over the last 30 days, over the last week, or over the last day.
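In PromQL terms, that windowed score is usually written as a ratio of increases over the chosen range. A sketch of querying a 30-day window over the HTTP API follows; the metric names, the 16200-second bucket, and the server URL are illustrative assumptions:

```python
# Hedged sketch: the "over a given period" Apdex as a PromQL ratio,
# here over a 30-day window. Metric names, the le="16200" (4.5h) bucket,
# and the server URL are illustrative assumptions.
import requests

PROMETHEUS = "http://prometheus.example.internal:9090"
QUERY = """
  sum(increase(deployment_duration_seconds_bucket{le="16200"}[30d]))
/
  sum(increase(deployment_duration_seconds_count[30d]))
"""

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=30
).json()
# The result value is the fraction of deployments under the 4.5h target.
print(resp["data"]["result"])
```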
B: I mean, we have to think about the default value we want to show, because when we open the dashboard there will be, just as you pointed out here, an Apdex score. So all of us...
B: We need to be sure what we want to show there, right? Because, for instance, going back to the trending idea that Amy was talking about before: if you have a window of one month, then basically, if you check the numbers daily, you definitely see the trend, because the window is so large compared to the point in time where you are doing your observation. It is not like an app in a production environment where you just have a six-hour window. So probably we can do something like this, or we can maybe check with Andrew what we can learn about...
B: about how to adjust his forecasting to something that makes more sense for us. So maybe something like: we can still have a daily Apdex, but also a burn rate. With a burn rate, if I remember, there are some sections where you say "if you continue on this trend, you're going to get outside of your Apdex SLO within six hours". I think our definition is six hours, and end of month, in our alerts, and probably then we want to maybe adjust...
C: So, I think that sounds like future-iteration stuff. I'm definitely a lot less concerned about alerts and things like that.
B: If something doesn't change, we will not achieve our goals by the end of the week or the end of the month. Okay.
C: Yeah, okay, that definitely makes sense, I think. We're roughly saying: number of deployments over the last month. Let's just start there and see what that looks like, if we can get that data, and we are aiming to determine that Apdex score, so what percentage of pipelines are completing ahead of our target deployment SLO. And all of these numbers can be adjusted, right? If we end up with an Apdex score of like 50, sure, we can adjust this.
A: Okay, yep.
B: Yeah, I was going to say that the challenge that I have right now, and I think it can be shared as a challenge because I think everyone has the same one, is that we are talking about visualization of information that we don't have. I mean, I was trying to figure out what the dashboard was showing and decided this doesn't work. So let me just push some fake data where I know what the data is, so I can see what I'm looking at, because I was looking at those things thinking: I don't understand,
B: is there no data, is the data wrong? Because I don't know what it should look like. I think this is one of the biggest challenges, because the time window is so large. If we were talking about API request rate on GitLab.com, you could just start tracking, and then two hours later you have enough data to start looking at what it looks like. Deployments, no: they are kind of longer-term.
A: Yeah, that's why I did this pushing of fake data, so I can try to move things along. And I guess, to unblock you, I'm not sure if we need to figure out first what the retention policy is going to be, or if you can start working independently on these subjects with fake data that you can generate on your local machine.
A: So, we are way out of time; I'm sorry for that. Just to wrap up the meeting: I'm going to investigate what the retention policy in Prometheus is, and when it comes to implementing the specialized release-tools pushgateway...
B: We have this project where you can just spin up a machine, right? So I just took one of those five-euro-a-month machines, and it's probably absolutely oversized compared to the things it has to do, but it's the smallest unit that we have. So I think we can do something different, which is: we can start tracking data outside of our infrastructure, because we don't care, and it's public information. Well, it's kind of public information; there's no secret, nothing around it.
B: So I was kind of adding a very simple token system, so that there is a secret: you cannot publish information if you don't know the secret. And we just let it run there. Even if we scrape it from Prometheus on our machines, I don't care; I just want to have those numbers so that we can look at graphs and things like that, and then we can figure out how to actually deploy this thing properly.
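A minimal sketch of such a shared-secret push, assuming the gateway sits behind HTTP basic auth: prometheus_client ships a basic_auth_handler for exactly this. The address, job name, and credential variable are illustrative assumptions:

```python
# Hedged sketch: pushing to a gateway protected by a shared secret,
# using prometheus_client's built-in HTTP basic-auth handler. The
# address and credentials are illustrative assumptions.
import os
from prometheus_client import CollectorRegistry, Counter, push_to_gateway
from prometheus_client.exposition import basic_auth_handler

def auth_handler(url, method, timeout, headers, data):
    # The secret lives in the environment, never in the repo.
    return basic_auth_handler(
        url, method, timeout, headers, data,
        username="release-tools",
        password=os.environ["PUSHGATEWAY_TOKEN"],
    )

registry = CollectorRegistry()
Counter("deployment_count", "Completed deployments", registry=registry).inc()

push_to_gateway(
    "pushgateway.example.dev:9091",
    job="release-tools",
    registry=registry,
    handler=auth_handler,
)
```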
B: Start with the histograms. I mean, I don't want to go down the route of rewriting all the metrics that we have; otherwise it's going to take too much time. Let's validate whether this thing works and see how our deployments look in it, because maybe just one week of it running is enough, or a couple of days is enough, right? If you have three deployments in a day, three real data points is better than none.
C: Sure. Yeah, and as well, the GET environment should also be a relatively easy way to spin up a standalone environment, if I understand GET correctly.
C: GET is the GitLab Environment Toolkit. I think it's a relatively similar thing, where you can quite easily spin up a standalone environment.
B: Great. And, I mean, it's something simple, so my action plan, Myra, was something like:
B: we had too many things to keep aligned, like the merge requests on one side and making sure that you merged the other one on Rails, while reviewing everything together. I mean, we are a small team, so it's probably going to be more useful to just start hacking with this and collect some data, and if it doesn't work, we just trash the folder. And, yeah.
A: No, cool, awesome. Well, thank you both for your time, and I will see you around. Great, thanks very much.