From YouTube: 2023-02-21 Delivery team weekly EMEA/AMER
A
This is the 20th Feb 2023 delivery weekly. I'll let you all read the announcements yourselves. Discussion: whose discussion item is this?
B
We went through announcements. No, sorry, I forgot to add my name. Yeah, I discussed with both Ahmad and Reuben.
B
The way we want to track the deployments for this epic, which kind of measures deployment efficiency and pipeline efficiency: we discussed that it would be good to have a dashboard for each deployment where we have all the info related to the deployment, and you can have a select box and switch between deployments.
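A minimal sketch of that "select box" idea, expressed as the templating fragment of a Grafana dashboard (written here as a Python dict); the metric and label names are illustrative assumptions, not an existing dashboard.

```python
# Hypothetical Grafana template variable backing the "select box": the
# dashboard stays generic and the viewer switches deployments via a
# drop-down. The metric/label names (deployment_info, deployment_id)
# are assumptions for illustration only.
deployment_variable = {
    "name": "deployment",
    "type": "query",
    "datasource": "Prometheus",
    "query": "label_values(deployment_info, deployment_id)",
    "refresh": 2,  # re-query the label values when the time range changes
}

# Panels would then filter their queries with
#   {deployment_id="$deployment"}
# so a single dashboard serves every deployment.
```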
B
MR lead time to the metric server. Now, before actually adding these changes to the release tool, I wanted to double-check with the team that we are on the right track. So we really want to have the dashboard that will show the info about each particular deployment, and a deployment might also have multiple merge requests associated; for each deployment, we track the merge request lead time.
B
I
would
say
that
means
we
need
to
add
to
this
metric
server
and
we
need
to
add
the
tour
deployment
kind
of
lead
time
metric.
We
need
to
add
two
additional
labels,
deployment,
ID
and
merge
request.
Id,
that
will
might
you
know,
create
a
bunch
of
additional
metrics.
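A minimal sketch of what those two labels could look like on the metric server side, assuming a Python exporter built on prometheus_client; the metric name and server layout are assumptions, not the actual delivery metrics server.

```python
# Hypothetical lead-time metric carrying the two proposed labels.
from prometheus_client import Histogram, start_http_server

deployment_mr_lead_time = Histogram(
    "deployment_merge_request_lead_time_seconds",
    "Lead time from merge to production deploy, per merge request",
    labelnames=["deployment_id", "merge_request_id"],
)

def record_lead_time(deployment_id: str, merge_request_id: str, seconds: float) -> None:
    # Each distinct (deployment_id, merge_request_id) pair creates a new
    # time series -- this is the cardinality cost discussed here.
    deployment_mr_lead_time.labels(deployment_id, merge_request_id).observe(seconds)

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus to scrape
    record_lead_time("42", "123456", 5400.0)
```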
B
Sorry, time series. But Reuben mentioned that, and I looked at our deployment frequency and compared it to metrics that we have in general. I found that, for example, the metrics for pods, where you can select individual pods on the production environment, have about 100 times more metrics and time series than we expect to add by attaching these IDs to the metric.
B
What's
I
kind
of
waiting
for
the
feedback,
do
we
want
to
have
it?
Do
we
don't
want
to
have
it
and
it
will
definitely
increase
the
amount
of
Time
series,
but
not
I
wouldn't
expect
that
it
will
be
Expo
like
the
the
the
the
Met
time
series
DB
will
be
exploded
so.
D
I have a question, Vladimir. Adding a label with the deployment ID, I can follow that idea, but you're thinking about adding a label for each merge request in each deployment, or something like that. How do you plan to track the merge request level?
B
A merge request label, on the release tool side of the house.
B
We
will.
We
will
hit
the
API
by
the
end
of
the
pipeline
and
at
least
all
Associated
merge,
requests
and
their
start
time,
and
for
each
merge
request.
We
will
send
to
the
metric
server
the
the
elite
time
for
this
merge
request
and
yes,
they're
gonna,
be
they're
gonna,
be
additional
label
for
the
merge
request
as
well.
I
honestly
I
prefer
to
not
to
have
more
data
rather
than
making
assumptions
and
the
calculating
averages
on
on
the
pipeline
side.
B
So
with
that,
there
are
two
options
to
to
achieve
this
right.
The
first
is
to
calculate
the
average
lead
time
for
the
merge
request
on
the
pipeline
and
send
only
average
to
the
to
the
metric
server
or
send
the
whole
data
and
and
and
use
Prometheus
to
calculate
the
averages
mean
times
whatever
I
I
honestly
I
prefer
to
have
like
a
second
option:
more
data.
We
have
more
kind
of
visibility,
we're
gonna
have.
But
again,
this
is
kind
of
you
know.
We
need
to
find
the
balance
more
more
time
series.
Okay!
B
Can
we
can
we
tolerate
more
times
time
series,
or
you
know
we,
we
have
less
visibility.
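The trade-off can be made concrete with a small sketch; the function and the PromQL expression below are illustrative assumptions, not existing tooling.

```python
# Option 1: aggregate in the pipeline and ship a single number per
# deployment -- cheap in time series, but medians/percentiles are lost.
from statistics import mean

def pipeline_side_average(lead_times_seconds: list[float]) -> float:
    # Only one value per deployment ever reaches the metric server.
    return mean(lead_times_seconds)

# Option 2: ship every per-MR observation and let Prometheus aggregate,
# e.g. with the hypothetical metric name used earlier:
#
#   avg by (deployment_id) (deployment_merge_request_lead_time_seconds)
#
# More time series, but averages, medians and outliers stay derivable.
```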
D
So I think this is a question where we'll find a better answer with other folks. Maybe Scalability has a lot of knowledge about this, or Reliability in general, because, I mean, I don't have enough knowledge on the topic to make a call about this.
D
It
depends
on
it
should
depends
on
the
on
the
time
window
of
the
queries,
so
the
number
the
explosion,
the
so
that
the
hardware
is
in
itself
within
the
boundaries
of
the
query.
So
how
many
times
that
value
change
in
in
the
time
frame
you
are
querying
for
so
I,
don't
know
if
this
is
a
problem.
Yeah,
that's
that's
another.
B
That's
another
kind
of
reason.
From
my
point
of
view:
that's
another
reason
not
to
worry,
because
amount
of
metrics
in
total
doesn't
really
matter
what
matters
is.
B
What
is
the
amount
of
Time
series
within
kind
of
Prometheus
retention
period
right
because
everything
else
goes
to
a
Google
bucket
and
store
it
in
in
the
Google
bucket
in
in
Thanos
yeah,
but
we
are
increasing
time
series
like
operational
time
series
little
bit
but
kind
of
you
know
it's
it's
not
it's
not
the
same
as
ADD
yet
another
metric
to
to
the
ports
themselves,
because
we
are
creating
destroying
like
a
hundreds
or
thousands
of
pots
across
the
whole
system.
B
But
here
I
I
looked
at
the
merge
requests
on
gitlam
gitlab.com
code
base
and
we
made
like
a
since
since
the
very
beginning
we
made
like
83
000,
merge,
requests
to
this
to
the
gitlab
to
gitlab
code
base
itself
and
I.
Think
even
if
we
have
a
80
000,
merge
requests.
You
know
in
in
a
month
not
in
a
how
many
years
did
love
exist.
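As a rough sanity check of that figure, a back-of-envelope calculation; the project age used here is an assumption, order-of-magnitude only.

```python
# ~83,000 merge requests over the whole life of the code base, per the
# discussion above; the age of the project is an assumption.
TOTAL_MERGE_REQUESTS = 83_000
PROJECT_AGE_YEARS = 12  # assumption

per_month = TOTAL_MERGE_REQUESTS / (PROJECT_AGE_YEARS * 12)
print(f"~{per_month:.0f} new merge-request time series per month")
# => roughly 600 per month -- small next to the hundreds or thousands of
# short-lived pods whose per-pod labels Prometheus already tracks.
```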
D
Yeah, I get your point, right. So even if you think a single deployment will have a label for the deployment itself, and then a label for each merge request in it, that same deployment will trigger a Kubernetes deployment, so each cluster will have hundreds of pods, each one with its own pod label. So it's kind of nothing compared to the number of metrics generated on the Prometheus side.
D
Yeah
I
will
repeat
myself:
I
I,
don't
have
enough
knowledge
of
the
of
Prometheus
to
judge
this
proposal.
So
I
mean
this
is
as
much
as
I
can
tell
you
right.
So
I
I,
don't
understand
your
reasoning,
but.
A
I
think
if
you've
got
an
issue
but
I
mean
I'd,
suggest
pinging,
someone
from
scalability
someone
from
like
someone
like
job
from
reliability
and
and
asking
for
their
thoughts.
B
Yeah, okay. No, no, wait a second. Retention: there are two retentions. One retention is in Prometheus itself, which runs on every single cluster, and the other retention is in Thanos. If you are talking about forever, you are talking about Thanos, but what really matters is how many metrics and time series we have in Prometheus.
E
We did have the same question when we were preparing the deployment SLO, and there is some data that I just put in the chat. The retention, I think, is within one year, and five years for Thanos.
D
Raw data for one year; then it's downsampled, resampled at five minutes for five years, and then resampled at one hour forever. So it depends on how often we populate that information. Deployments, for instance, will just stay the same; they would never be resampled, because you don't deploy more than once an hour.
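A quick worked example of those retention tiers; the tier lengths are the ones just stated, while the 15-second raw scrape interval is an assumption.

```python
# Samples kept per time series under the tiers described above:
# raw data for 1 year, 5-minute downsamples for 5 years, 1-hour forever.
SECONDS_PER_YEAR = 365 * 24 * 3600

raw = SECONDS_PER_YEAR // 15                 # ~2.1M samples in the raw year
five_min = 5 * SECONDS_PER_YEAR // 300       # ~526k samples over 5 years
hourly_per_year = SECONDS_PER_YEAR // 3600   # 8,760 samples per year after

print(raw, five_min, hourly_per_year)
# A deployment-scoped series that changes less than once an hour gains
# nothing from downsampling: every tier keeps roughly the points it had.
```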
C
Okay, good morning. I'll share some things.
D
I have a question; not terrible, I'll just say, in the UK or in the US.
B
But to come back to observability, since we're waiting for starvec: I just wanted to share some ideas that we discussed about delivery observability.
B
So again, the idea of this deployment dashboard is that we might have a dashboard for each deployment, and for each deployment, not only the metrics that are usable, but it would also be nice to add traces and logs. And if something happens, I don't know, we have a spike or we have a long-living pipeline, and we really want to track that,
B
we will be able to drill down to the logs and traces. Instead of having Jaeger, and whatever we use for logs, Kibana, we can actually add Elasticsearch and Jaeger as data sources for Grafana, and then in one dashboard we can have everything visible, with drill-downs, with comparisons, with the relations between spikes in the metrics and the traces and the logs. This is one thing that we discussed.
B
We
discussed,
and
also
second
idea,
was
I
think
I
already,
and
you
already
saw
my
comment
on
this
ethics
about
Dora
metrics.
B
Currently,
I
think
that
our
deployment
metrics
and
the
dashboards
they
are
kind
of
sparse
and-
and
the
idea
would
be
not
not
within
this
epic,
because
this
epic
is
only
for
about
deployments
themselves
and
tracking
inefficiency
in
in
the
deployments,
but
in
general
I.
Think
that
would
be
nice
to
have
Dora
metric
dashboard
as
overall
overview
of
the
status
of
the
of
the
delivery,
because
I
honestly
I
know
only
one
good
kind
of
of
status
overview,
which
is
Dora
metrics.
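For reference, these are the four DORA metrics such an overview dashboard would normally surface; every PromQL expression below is a purely illustrative placeholder, and none of these metric names are claimed to exist.

```python
# The four standard DORA metrics, paired with placeholder PromQL.
DORA_PANELS = {
    "deployment frequency": "increase(deployments_total[7d])",
    "lead time for changes": (
        "rate(deployment_merge_request_lead_time_seconds_sum[7d])"
        " / rate(deployment_merge_request_lead_time_seconds_count[7d])"
    ),
    "change failure rate": (
        "increase(failed_deployments_total[7d])"
        " / increase(deployments_total[7d])"
    ),
    "time to restore service": "avg_over_time(incident_open_duration_seconds[7d])",
}
```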
B
If
you,
if
you
know
better
like
if
you,
if
you
know
more
standard
ways
to
track
the
delivery,
where
I
like
feel
free
to
add
your
comments,
but
I
think
the
Dora
Matrix
is
like
a
it's
like
an
industry
standard
for
for
delivery
for
tracking
the
delivery,
and
the
idea
would
be
to
have
like
a
like
bird
eye
view
for
the
Dora
metrics,
where
you
can
see
like
the
overall
status
and
delivery,
and
you
can
drill
down
within
a
particular
deployments
and
see
like
the
house
like.
B
If
you
have
like
spiking
delivery
or
whatever
you
can
like
drill
down
to
the
deployment
metric,
and
you
can
also
correlate
the
dashboard
within
deployment
within
deployment
dashboards,
sorry,
you
can
also
correlate
the
the
metrics
with
logs
and
traces
within
one
dashboard
so
not
have
like,
and
the
idea
is
not
have
like
a
multiple
dashboards
that
are
not
connected
together.
But
having
like
the
overview
dashboard
with
possibility
to
you,
know,
get
more
info
from
other
dashboards
which
is
linked
to
the
main
dashboard.
B
That's
what
idea
where,
like
the
flying
around
and
we
discussed
with
Ahmad
and
Ruben.
A
So,
just
in
the
interest
of
time,
I'm
gonna,
say
I
think
it
sounds
very
interesting
and
maybe,
if
there's
an
issue
or
someone
like,
we
could
ping
ping
delivery,
we
can
discuss
I
love
the
idea
of
joined
up
dashboards.
Let's
take
care
around
dog
fooding,
since
we
have
the
Dora
metrics
in
the
product.
That's
probably
the
only
one
where
we
might
just
want
to
think
about
how
those
will
line
up.
G
So we had a pretty good week in terms of lack of incidents, but I think on the 15th and on the 17th we missed promoting quite a lot of packages. On both days, I think we had like three packages that could have been promoted that we didn't, and we also had a couple of packages with failures that passed on retry, but by then the next auto-deploy pipeline was already running.
D
So this is an interesting point, right. I think what matters here is not really the cost of packaging, but more the cognitive load on release managers. When I was doing my last shift, we were more concerned about every release manager having a comfortable workflow throughout their day, so that they would not receive more packages than they could actually manage, and things like that. So this is probably another way to look at the number of packages.
G
Yeah
I
think
I
think
this
will
affect
Graham
the
most
because
since
he
he's
like
a
permanent
release
manager,
he
has
like
days
where
he
concentrates
on
his
work,
and
so
this
will
probably
affect
him
more.
G
Looks the same as last week: no big reduction, no big increase either.
A
It looks slightly better, I think; it's just that the line is just under the red. I think it is significant, so yeah, nicer than last week.
A
Yeah,
oh
and
then
just
go
back
started
on
the
agenda,
a
couple
of
things
to
celebrate,
which
is
awesome
and
oh,
let's
go
back
spec.
Let's
go
back.
C
And
then
I
didn't
want
it,
but
now
I
just
wanted
to
point
out
a
few
things
that
I
think
was
pretty
good.
Like
you
know,
chat
UPS
going
down
was
kind
of
painful,
but
that
was
only
really
impacting
our
release
process.
It
didn't
really
impact
all
the
deploys
since
most
of
that's
automated
and
then
the
dev
instance
went
down
a
few
times
last
week,
but
the
one
that
really
impacted
us
was
not
really
impactful
at
all
like
if
anything
builds
may
have
been
delayed.
But
you
know
that's
not
something
we
could.
C
You
know
see
if
we
can't
log
into
the
dev
instance,
but
at
least
production
and
deployments
that
were
ongoing
when
the
dev
instance
went
down
were
not
impacted,
which
was
really
good.
I
did
pause
deployments
because
I
freaked
out
at
the
SEC,
the
second
the
incident
was
open
but
later
on,
I
just
re-enabled
the
deploy
to
continue,
because
I
forgot
that
we
did
this
Dr
work.
C
I did finally start seeing this morning that there are some Git alerts that are pinging the SREs, so maybe we'll be able to get some traction from the get-go. But realistically, since we're the ones that use this metric, we probably need to start looking into this and trying to figure out what we need to accomplish for that.
A
Then
there's
already
a
bit
of
a
bit
of
conversation
going
on
about
that,
because
it's
affecting
error
budgets
as
well,
so
it
is
linked
to
that
one
then
I.
We
should
get
involved
if
we
can,
but
also
it
won't
only
be
our
problem.
C
Yeah-
and
it
certainly
is
not
already
because
we're
not
the
only
persons
that
use
this
metric,
so
the
second
one
was
just
something:
I
noticed
in
some
of
our
Auto
deploy
jobs.
Last
week
it
looks
like,
unless
you
already
saw
that
this
was
a
duplicate,
so
may
mostly
be
not
an
actionable,
but
I
just
thought:
I'd
wear
race
Awareness
on
it.
So
that's
all
thanks!
Reuben
for
taking
over.
For
me.
G
No problem. I will point out we got to test the situation where, you know, Slack is down, because when the ChatOps account was deactivated, or whatever happened there, all Slack notifications were failing. What I noticed was that the notify jobs are all allowed to fail, so nothing happens if they fail, but the auto-deploy start job has a notification in it, and it fails. And we also got to test out the Slack-down feature flag, so it works.
G
Yeah, so yeah, the Slack-down feature flag worked as intended, and yeah, the auto-deploy start job is the only one that seems to fail.