From YouTube: 2023-01-25 Delivery:System Sync and Demo
B
Welcome, everybody: this is the 25th of January Delivery System Sync and Demo. We have several discussion points today, so I'm going to start with those. The first one: obviously we are at the end of Q4, beginning of Q1 of financial year 2024, so we have some discussion around the OKRs. Thank you, everybody, for filling in and proposing ideas on the OKR issue that I'm going to link.
B
We already had some discussion this morning on the Delivery staff weekly, trying to understand what can and cannot be achieved during this quarter. I mean, yeah, we have plenty of ideas and plenty of work; I guess we would probably need a team of so many more people if we wanted to achieve everything.
B
What was actually proposed this morning, based on the proposals on the OKR issue, but also based on the work that we achieved in the previous quarter, is something that still contributes to reducing release management toil and effort and, at the same time, is an iteration on the work we've done so far. So initially the idea was to have an OKR that was a kind of shared goal between the two groups, Orchestration and System, especially on reducing release management toil.
B
So I think this is actually the OKR that is proposed here, that is, the next iteration of the pipeline observability foundation. It is actually going to contribute to both: the system-specific parts, since metrics is also one of our domains, and, at the same time, helping to reduce, or at least to highlight, what we're spending time on in release management and how we can take next steps to try to improve that.
B
So the wording of the objective needs to be changed, per feedback that I got, like, I don't know, half an hour ago. Initially this was about the build pipeline; the suggested wording is "delivery pipeline intelligence through pipeline observability" as a second iteration. So I'm going to add this, on Marin's suggestion, to better highlight the value of the objective.
B
And the main key result that we see there was having the capability of visualization and alerting for metrics and traces. Right now we are sending traces to the group observability backend in the OpenTelemetry format, and we can visualize them. What is still problematic, because of the tooling being offered so far, is grouping them by stage and by job. So we cannot really understand, for all QA jobs, how long we are taking, or how long one particular job inside our pipeline is taking; at least not in an easy way. And understanding trends, understanding if these are increasing or decreasing, and having the capability of maybe alerting when we see an increasing trend or something like that, would be a nice-to-have.

The second and third key results: this is something we discussed this morning with Reuben, during our 1:1, and then during the APAC and EMEA delivery weekly. Measuring time lost due to pipeline failures that contributes to release management toil: this is, everyone here correct me if I'm wrong, about the time that we spend between the pipeline failure and when we actually re-trigger the pipeline, or when the pipeline kind of recovers. It would be really great to measure that and have a good understanding of what kind of factor it plays in the overall release management. And another key result would be to capture, measure, and report incident-related deployment blockers. That is, I think, something where we were still working out whether it was possible to do using webhooks: understanding, from when the incident is declared to when the labels are applied and when the labels are removed, how much time we are actually spending on deployment blockers.
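As a rough illustration of the instrumentation this key result implies, here is a minimal Python sketch, assuming the standard opentelemetry-sdk package; the span names, the ci.stage / ci.job.name attribute keys, and the console exporter are illustrative placeholders, not the actual backend configuration:

```python
# Minimal sketch: emit one trace per pipeline, with stage and job spans
# carrying attributes the backend could group and aggregate on.
# Assumes: pip install opentelemetry-sdk; all names/attributes are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
# Swap ConsoleSpanExporter for an OTLP exporter pointed at the real backend.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("pipeline-observability")

def export_pipeline(pipeline: dict) -> None:
    """Emit pipeline -> stage -> job spans for one finished pipeline."""
    with tracer.start_as_current_span(
        "pipeline", attributes={"ci.pipeline.id": pipeline["id"]}
    ):
        for stage in pipeline["stages"]:
            with tracer.start_as_current_span(
                "stage", attributes={"ci.stage": stage["name"]}
            ):
                for job in stage["jobs"]:
                    with tracer.start_as_current_span(
                        "job",
                        attributes={
                            "ci.job.name": job["name"],
                            "ci.job.status": job["status"],
                        },
                    ):
                        pass  # real code would set start/end from job timestamps

export_pipeline({
    "id": 12345,
    "stages": [{"name": "qa", "jobs": [{"name": "qa-smoke", "status": "success"}]}],
})
```

With the stage and job names attached as span attributes, grouping durations per stage or per job, and alerting on an increasing trend, becomes a query on the backend rather than a gap in the tooling.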
B
Another suggestion that we got this morning in the Delivery staff weekly was to try to have some other project ready, in case, you know, we turn out to be extremely efficient and actually achieve this OKR pretty quickly. In the entire planning we had so far, this also takes into consideration how much availability we're going to have in the next quarter.

So we have some people having some time off for very nice reasons, like a ski break, and some people, like, you know, being here and working, or people like Vladimir still, you know, onboarding, who will have to train for release management during 15.10 and maybe go into a rotation in 15.11.
B
So we also have to be mindful of the availability that we have, and keep, kind of, our firepower aligned with the work that we have; that's why we have to talk about this work, right. So Jenny, Ahmad, Ruben: don't feel too much pressure just because you are the ones who don't have competing time off.

I mean, all of this, obviously, is always a good thing to have. Any thoughts about this OKR proposal? Anything to add, anything you disagree with, or any other proposal you might have?
C
I enjoy the current proposal related to observability. I would love to see that through, because I feel like it's going to serve a very useful purpose, especially when it comes to our future and deciding what target pain points we want to address.

Having that information as soon as possible will be tremendously beneficial, not only for us but maybe also for engineering teams where we integrate with other things, such as QA, or, you know, working with development teams and what they need from us when it comes to deployments. As far as future items, I feel like we could go in a lot of different directions.
B
Does anyone have anything to propose for other items?
D
It's not a proposal, but mostly a question about the suggestion from Marin to keep some project work ready, scoped and refined. What sort of projects are we talking about? Are we talking about, like, a project outside of the OKR? What is that?
B
One of the things that keeps coming up, at least since I joined this company, is the deployment of software changes, or image changes, together with infrastructure changes. This is something that caused some problems a couple of months ago too; it's always causing some incidents or some small hiccups during our deployment process. So it's about having something like that fully scoped and fully ready to be picked up.
B
In case, you know, we have extra capacity, I think it's something that's going to be extremely useful, and I would like having some of that focused effort to use for the next quarter as well. Clearly not set as an OKR, because, I mean, I think if we also added that, maybe we would be overdoing it a bit too much; but having something that could really be picked up, that is already kind of OKR-ish, aligned with the work.
C
There's a few things that come to mind, but I don't know how to prioritize them, nor which direction would be best. I think all of us on this call are aware that I'm interested in seeing what experimental deploys look like in the deep dark future. I would love to see what we could do as a whole, because there's a lot of groundwork that needs to be laid before this is something that we could enable for people, so I feel like, if we wanted to pull work related to this, it's only at the groundwork level. It's not going to be something that we could shove into an OKR, because there's too much work that we need to accomplish before we start integrating with the Orchestration team to enable those capabilities, right. You know, we need to do stuff towards our infrastructure and to the gitlab.com repository before we enable this to be a feature that anyone could use.
C
This is something I recently just kind of thought to myself: the GitLab operator. This is something that's not yet production-ready; Distribution is totally busy working on it, they're trying to hire additional people for it, and there are external customers that leverage the GitLab operator. I would love to see if this is something that would be worth at least checking out, to see whether it could replace our use of Helm. I feel like Helm is a very contested thing.
C
I think this is one of the reasons why we're looking at other POCs from Reliability with the use of Argo or Flux and such. I would love to see what capabilities the operator could provide for us, and see if we could leverage it to make certain pain points that we experience easier for ourselves, because we would then be contributing to the Distribution team and helping the operator at that point, and I think that would be a good cross-collaboration.
C
That is, if that's the thing we want to chase. And then the other thing that was on my mind, and Michaela, you kind of already mentioned this: you know, we do have issues in our backlog related to workflow management in the gitlab.com repo that, if we don't solve them, are eventually going to cause an incident, and I want to avoid that. And the one that you mentioned, about ensuring that we keep configuration changes and all the deployments completely segregated, is very important.
C
I don't want us to be in a situation where we cause an incident and we've got this corrective action that's been in our backlog for over a year now that we have yet to touch. But it's hard for me to think about that kind of thing, because I don't know what we could do to fix that at the moment.
D
Ah, no need to be that polite, no worries, I just got used to it. I wanted to say that the GitLab operator sounds like a very good idea. It's still on my to-do list to review it and see how it actually works, but.
D
I have a question: does the GitLab operator install GitLab itself, or does it configure GitLab, like repositories or something, on an existing GitLab installation?
C
The GitLab operator will both manage, install, and configure GitLab itself inside of our Kubernetes clusters. We would still have the situation where we're running both Omnibus and a Kubernetes infrastructure, but the GitLab operator would replace our use of Helm. The operator itself may use Helm under the hood in some way, shape, or form; this is something I'm not fully up to speed on, because I have not looked at the code base for how it works. But those are the details I know, at least at this moment in time.
D
Having the GitLab operator and CRDs deployed in, for example, different namespaces is going to help us with, let's say, these experimental deployments or something. But in general, it's more or less the same as what we have with Helm charts and the values files, and the operator, too. The thing is that, from my point of view, the setup where we have, like, Helm files and Tanka and Terraform is a whole zoo of technologies, I think, and it's all controlled by push-based pipelines, which is also, like, 5,000 lines of auto-generated code.
D
I think it's way too complicated, and I do believe that one of the OKRs, maybe not this quarter, maybe next quarter, would be to somehow simplify things, and the GitLab operator sounds like it would simplify things a lot. Or, I don't know, again, like, maybe.
C
I do not disagree. Your terminology of the zoo of tools resonates with me very well, because I hate the fact that we've got so many, and this is one of the reasons why I'm really eager to see what the Reliability team wants to do in the future: I want to make sure that we don't get more animals in our zoo, I guess. But yeah, I think, I don't know, there's a lot of options for us to go with in terms of project work.
C
I just don't know what the best direction to go in is at the moment. But, I don't know, I kind of lean towards trying to figure out what we want to do with the gitlab.com repository as a whole, to set ourselves up with a solid foundation for what we understand and how to maintain that repository going forward. I just don't know what that looks like at this moment in time.
C
Even if we want to move forward towards figuring out what's closest to getting us towards experimental deploys, I don't know what work would lend itself to small, easy chunks that we could pull that get us closer to that goal at this moment in time.
D
I think that what will definitely enable experimental deploys is, how to say, a way to move away from this big installation that we treat like a pet, towards easier installations that we can just, you know, spin up on demand. And I think the GitLab operator would be one of the options, and another option would be GitOps as well, a goal which the Reliability team has introduced. But knowing the fact that GitLab will have, like, native GitOps support for the whole product, I don't think we need to go to Argo; that's kind of what is called the sunk cost fallacy, yeah. So.
D
Yeah, yeah, but the thing is that the only difference between the current approach with pipelines and the GitOps approach is that GitOps is actually pulling the changes; it's push versus pull. And you can run the same pipelines, or combine a really complicated deployment mechanism which includes pipelines, into GitOps as well. There is no conceptual conflict here.
D
It's only about where it executes from: like, is it executing from a central point, or executing the same thing from the local GitOps agent, well, not local, but from the GitOps agent that runs on every single cluster.
C
It's not made clear to me where, or how rather, GitLab is integrating with Flux. So I almost wonder if it might be worth making whatever changes are necessary to the gitlab.com repo such that it supports GitOps in some way, shape, or form, and keeping track of what the Configure team is doing, because then maybe we could be ready for the transition towards using that integration when it starts to become a little bit more mature. But I guess the hesitation I have is: you know, we understand how things work today, even though it's got all kinds of interesting issues. My concern would be, you know, if something isn't mature, we lose features as we migrate towards that solution, features that we have today that may not be available to us during the first couple of iterations of said feature. So I think there's going to be some trade-offs that we need to make as we pay attention to that project.
D
I think nothing stops us currently, if we decided to go in that direction; nothing stops us from moving existing stuff to GitOps. We don't even need to get rid of these Helm files: as I already showed, you can generate values from these Helm files and then you can use those values in a GitOps workflow.
D
Yeah, well, I think the whole Flux story might take a few months, up to half a year, I think, because they plan, like, a UI, they plan authentication and secret management, etc., etc.
D
So, like, initially it was, okay, it's going to be a very simple proof of concept, but then they just blew up the scope drastically and added a lot of functionality. But again, like, nothing stops us from starting now if we decide to, or at least making a proof of concept for that and being ready for when the Flux integration with the GitLab product lands.
C
Perfect. Along those lines, we've got an issue in our backlog, and that's the one that kind of tracks this. It may need to be refined, because it's quite old, but that's the one where Graham had the initial idea of swapping how we manage that current repo and making it more GitOps-centric. But, like I said, it's kind of old at this point; it probably needs a good amount of refinement before we try to figure out what we want to do and pull it.
B
Okay, so we don't have to, you know, scope something new by Monday or anything like that for these extra projects. What we are doing, you know: Monday, we actually need to have our OKRs aligned, and I think, if we all agree about the pipeline foundations one and everything, if we're all in agreement there, that's going to be, like, you know, our official OKR, something like that.
B
I guess I'm also going to put some time together for us, maybe next week or, like, in ten days from now, where we can actually keep this discussion going and understand what we could pick as a team as an extra project, to be a bit ahead already, where everyone is aligned with that.
B
Twenty seconds? Oh my God, guys, I said so many things in twenty seconds. I said that on Monday we're going to post our pipeline foundations, like, next iteration.
B
If everyone, you know, is in agreement here, then maybe, like, a week from now or something like that, we can have a kind of brainstorming session as a team, which I think would be useful, where we can actually understand where we want to go with extra project work and continue this conversation we had here today: about, you know, how we handle gitlab.com, how we can, like, look at what a good direction is with GitOps and everything else, and, like, how the GitLab operator works and everything, right. So I think it's worth it to have a separate session for that, where we can actually, you know, come up with an agreement on some project.
G
Am I gone again? No? Yeah, good. I think we had an issue open for Skarbek's third idea, right: the separation of image and software changes.
B
There is this one: I think it's 751. I don't know how my brain remembers all the numbers and not the titles, but let me add it here.
C
I guess, just really quickly, before you start sending an invite for a separate meeting for this: should we try to gather any more opinions about this overall on the current OKR issue that we've got open, which currently has a lot of discussion items around all this stuff?
B
I mean, on that one we can. I just, I mean, the scope of the meeting today is to set our OKR, for our team, for this quarter, right. So if we are setting the OKR on this next generation of pipeline observability, yes, we can continue working there, gathering more ideas for extra project work. We can think about it, and I think it's going to be useful, so maybe for the next quarter, if we need to pick up something; yeah, let's continue this discussion there.

Do you want to take it? Sure.
B
So anyway, I mean, I've been in the company for, you know, eight months now, something like that, and we are still discussing this, so, surely.
D
I have a question regarding that. So, as far as I know, the root of the problem is that we had some MR on the infrastructure side of gitlab.com, or whatever, and this MR contained a version update for some services and infrastructure changes at the same time, right. And then we just merged this, and it deployed everything, and that was a big mistake, because it somehow conflicted, right?
C
Not conflicted, but more like we were pushing two changes at the same time when we weren't meaning to, and we want to avoid those situations. Because, if it were to cause an incident, we want to know whether it was a software change or a configuration change; and, because of the timing of those styles of merges happening, we are introducing too much variability inside of a deployment. So what we're attempting to do is segregate those items, to prevent additional workload when it comes to incident management and trying to identify the root causes.
D
I'm just wondering: shouldn't we, like, I know there are technical solutions for those things, like, I don't know, Open Policy Agent and Kyverno, etc., etc., and writing the tests. But, just from my experience with Open Policy Agent, I don't know if we are using that or not, but it's freaking complicated.
D
It's a really, really complicated topic, and it would be overkill to introduce Open Policy Agent for this kind of thing, which can simply be, you know, changed by process. Somehow, like, during the review process, you just say: okay, please check whether the version changes at the same time as the configuration. That would be a very simple solution: changing an issue template, right, adding one checkbox, rather than really digging into this rabbit hole with Open Policy Agent.
C
I think we should continue this conversation elsewhere, but the problem comes from the fact that we don't have the capacity to ensure, when we're merging a configuration change, that there's not already an auto-deploy running or about to start. Like, we need something that says: there's an auto-deploy happening, don't merge this yet. Or we need some sort of locking mechanism somewhere that says: you've merged this, but we're not going to push it out, because something is in your way on purpose; or vice versa with an auto-deploy.
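To make the locking idea concrete, here is a minimal sketch of what such a mutual-exclusion check could look like; the DeployLock class, key names, and TTLs are hypothetical, and a real implementation would need a shared store such as Redis rather than process memory:

```python
# Hypothetical sketch of a mutual-exclusion check between auto-deploys and
# configuration merges. The lock store, key names, and TTLs are assumptions.
import time

class DeployLock:
    """In-memory stand-in for a shared lock store (e.g. Redis in practice)."""
    def __init__(self):
        self._locks: dict[str, float] = {}  # lock name -> expiry timestamp

    def acquire(self, name: str, ttl_seconds: int) -> bool:
        now = time.time()
        if self._locks.get(name, 0) > now:
            return False  # somebody else holds the lock
        self._locks[name] = now + ttl_seconds
        return True

    def release(self, name: str) -> None:
        self._locks.pop(name, None)

lock = DeployLock()

def start_auto_deploy() -> bool:
    # An auto-deploy takes the lock for its expected duration.
    return lock.acquire("deploy", ttl_seconds=3600)

def can_merge_config_change() -> bool:
    # A config MR may only merge if no deploy holds the lock; it then takes
    # the lock itself so an auto-deploy cannot start mid-rollout.
    return lock.acquire("deploy", ttl_seconds=600)

if start_auto_deploy():
    assert not can_merge_config_change()  # config merge is blocked, as intended
```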
B
Thank you. I think it would also be good for everybody to have that scoped again, since this keeps coming up.
G
So during today's meetings, well, first in Alessio's office hours and then the delivery weekly, there were a couple of ideas for new metrics. One was a metric to track the time lost when a job in a deployment pipeline fails.
G
So, for example, say a job fails, say a QA job fails, and you retry it twice, and after the second retry it passes. The time from when the first job started to when the successful job started: that's all lost time. That can be automatically added to a metric, and, you know, you can use that number to track, like, release manager toil.
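A rough sketch of how that lost-time number could be computed from the GitLab pipeline-jobs API, which can return retried jobs via the include_retried parameter; the project ID and token are placeholders:

```python
# Sketch: compute time lost to retried jobs in one pipeline.
# Lost time per job name = start of the final (successful) attempt minus
# start of the first attempt, as described above. Token/IDs are placeholders.
from collections import defaultdict
from datetime import datetime

import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 278964          # placeholder project
TOKEN = "glpat-..."          # placeholder token

def parse_ts(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def lost_time_seconds(pipeline_id: int) -> float:
    resp = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{pipeline_id}/jobs",
        params={"include_retried": True, "per_page": 100},
        headers={"PRIVATE-TOKEN": TOKEN},
    )
    resp.raise_for_status()
    attempts = defaultdict(list)
    for job in resp.json():
        if job.get("started_at"):
            attempts[job["name"]].append(parse_ts(job["started_at"]))
    lost = 0.0
    for starts in attempts.values():
        if len(starts) > 1:  # job was retried at least once
            lost += (max(starts) - min(starts)).total_seconds()
    return lost
```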
E
Yeah, well, when I was in my RM rotation it was manually tracked. I don't know if we still do it, but basically, yeah, that would get rid of the need to manually track how many hours have been lost because of those retries; because it's, like, not an incident, obviously, right, but it is still time lost. So it'd be a nice thing to check automatically.
G
We've never tracked it, not even manually. So, yeah, it will remove one manual task from the RM workload, so, yeah, that'll also help.
G
The other metric is to track when the "blocks deployment" label is added to an incident issue.
G
So if we can automatically track it, like using issue webhooks: when the label is added, we add it to a metric, and when the label is removed, we remove it from the metric. That will allow us to have, like, a timeline for when deployments were blocked by incidents.

That will also help with removing the manual work of adding blocks labels to all incidents.
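A minimal sketch of that webhook receiver, assuming Flask and prometheus_client; the label title, ports, and gauge name are assumptions:

```python
# Sketch: track how many incidents currently carry the deployment-blocking
# label, via GitLab issue webhooks. Label title and names are assumptions.
from flask import Flask, request
from prometheus_client import Gauge, start_http_server

BLOCKING_LABEL = "blocks deployments"  # assumed label title
blocked = Gauge("deployment_blocking_incidents",
                "Incidents currently blocking deployments")

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def issue_webhook():
    event = request.get_json()
    if event.get("object_kind") != "issue":
        return "", 204
    changes = event.get("changes", {}).get("labels", {})
    before = {l["title"] for l in changes.get("previous", [])}
    after = {l["title"] for l in changes.get("current", [])}
    if BLOCKING_LABEL in after - before:
        blocked.inc()   # label added: a deployment blocker started
    elif BLOCKING_LABEL in before - after:
        blocked.dec()   # label removed: the blocker is over
    return "", 204

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes the gauge here
    app.run(port=8080)
```

Scraping that gauge over time yields exactly the timeline of blocked windows described above.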
C
You know, these are just ideas for, like, new or other things that we could track; but this one is not related to auto-deploy, this is specifically related to, like, our release procedures, which I have no clue how you would do at the moment. But I feel like this is also really good stuff that we could use for the same reasons.
C
You know, we're tracking release toil in other areas of our processes. So, like, if we had a Jaeger trace for starting the upcoming security release, for example: you know, the start of the trace is when the procedure is opened and the end of the trace is when the procedure is closed, and then we have, like, a timeline that captures every individual step and how long it took.

I don't know what that would look like, because there's a lot of variability in, you know, when we check things off, for example, or why pipelines are the way they are; but I feel like there's a use case for us to track this in some way. I just don't know how we would do that, but those are just some thoughts, and the items I listed were things that I thought were interesting.
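As a sketch of that checklist-to-timeline idea: recorded step timestamps could be replayed into OpenTelemetry spans with explicit start and end times. Everything here (the step data, names, and the console exporter) is an assumption for illustration:

```python
# Sketch: turn recorded release-procedure steps into a trace after the fact,
# using explicit start/end times (nanoseconds). All data below is made up.
from datetime import datetime, timezone
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("release-procedure")

def ns(ts: str) -> int:
    dt = datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1e9)

steps = [  # illustrative checklist events: (name, started, finished)
    ("tag release", "2023-01-25T09:00:00", "2023-01-25T09:20:00"),
    ("wait for packages", "2023-01-25T09:20:00", "2023-01-25T11:05:00"),
]

root = tracer.start_span("security-release", start_time=ns(steps[0][1]))
ctx = trace.set_span_in_context(root)
for name, started, finished in steps:
    span = tracer.start_span(name, context=ctx, start_time=ns(started))
    span.end(end_time=ns(finished))
root.end(end_time=ns(steps[-1][2]))
```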
C
The thing that I would caution, though, is that, you know, weekends occur, family-and-friends days interrupt us, and there are other reasons, outside of, you know, not having working hours, for why certain things take longer than others. So, like, you can't compare a security release from last month to a security release from this month, because the whole timeline would have been shifted. So I feel like we need to dive into those target items to determine what slows us down and what builds that release toil.
E
So, in terms of the pipeline of a release going through, I think that metric can be pretty easily taken from the tracing that we have set up, right? If it's going from, like, beginning to end, then we can just get the timestamps. In terms of where we'll get release toil from, in terms of, like, the bot not getting signed off and stuff like that, it's a bit more ambiguous, I feel, and that will be something that we'll have to discuss further on how to track.

Because, that's, like, you know: do we track the going back and forth between people and their time zones? That's not, you know, maybe as feasible, yeah.
G
No, you could: you could set up, like, a merge request webhook; I think there are merge request webhooks, if there.
Unless
you
had
suggested
in
the
weekly
meeting,
but
for
the
issue
where
we
want
to
track
release
on
the
toil
ends
up
trying
to
condense
it
right
away
into
one
metric,
we
can
have
like
a
dashboard
of
multiple
metrics
and
for
now
use
you
know
human
judgment
to
decide
one
number
out
of
all
those
and
then
slowly
we
can
start.
C
I did. And to address Ahmad's question: that's geared towards all the deployments, but I'm looking at, I guess, my idea with this was: last month we abandoned one of our security patch releases and we ended up re-tagging.
C
That's what I was gearing towards with that particular comment. I know that's a very rare thing to occur, but currently, as we saw, it creates a huge amount of toil to address the fact that we skipped a version; you know, that created a lot of problems to get that release completed. So I think just knowing that information would be kind of crucial, I don't know.
G
Both would be useful, actually. By the way, just an interesting thing about that: the reason that happened is because, when we say "publish this security release", we don't give it the versions in the ChatOps command; we just tell it to publish, and then release-tools tries to determine which is the next version, and it does that by looking at version.gitlab.com.
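As a toy illustration of that version-derivation step (this is not the actual release-tools logic; the input list stands in for whatever the version source of truth reports):

```python
# Toy sketch: derive the next patch version from already-published versions,
# which is roughly why a skipped/abandoned tag confuses the calculation.
def next_patch(published: list[str]) -> str:
    latest = max(tuple(int(p) for p in v.split(".")) for v in published)
    major, minor, patch = latest
    return f"{major}.{minor}.{patch + 1}"

# If a release was tagged but abandoned before publishing, the source of
# truth still says 15.8.0, so the tool proposes 15.8.1 again, not 15.8.2.
print(next_patch(["15.6.3", "15.7.5", "15.8.0"]))  # -> 15.8.1
```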
B
Yeah, yeah, so maybe we can have an issue as part of 744, the "reduce release management toil" epic; just a sub-issue of that epic that we can use for tracking these.
G
I'll create that issue. We already have an issue for tracking rollback-related metrics, yeah, so we can have a similar one, or we can just have one issue for all types of metrics, yeah.
B
I think, I mean, if it's just for tracking ideas, we can just have one issue for everything, maybe, and then we can categorize them there. Then, you know, if we end up with a whole category of metrics that's going to be useful, that later on can become its own epic, and we can work towards it in an epic, and so on. But at least we have a place, one that is not this Google Document we are using right now, where we can have them all listed.
G
Yeah, so I'll change the description of that issue, which is already there, and I'll link it here as well.
B
Thank you, Ruben. Vladimir?
D
I was, I don't know, I was about to, you know, point to the issue that I'm currently facing, and I had some ideas on how to solve it. Basically, we do have this internal API, a new service that is going to be used for GitLab Shell: instead of, like, going to the public API, it will go to the internal API, and it's deployed in the pre environment only, currently. Before rolling this out to production, we need to actually make sure it's bringing value, if it's, I don't know, if it's worth doing that, and so we need to collect some data. And the thing is that I created the metrics and I created the dashboards.
D
But there is no way to reduce the blast radius to the pre environment only; and deploying the metrics and the dashboards on prod and staging, on environments where the service isn't running, might lead to some false alarms and, you know, might wake up the on-call person. And even John's suggestion, to kind of set the thresholds to zero, I think it's a little bit error-prone, because there is a lot of auto-generated code, and you never know what it's going to generate. So, yeah, I'm kind of thinking about what to do with that. Ideally, we should be able to deploy, like, a custom branch on the pre environment from this runbooks repository, and see; because you can go, in Grafana, into the dashboards, and you can actually select not, like, Global, but a particular Prometheus, a particular Prometheus target.
D
And you can select pre. But I don't know if anyone has any ideas on how to do that; it would be super useful for me.
D
One question: do we use the Prometheus operator? Because then, basically, I can just go and apply those auto-generated rules and alerts from my local machine, right?
D
And if the local Prometheus is installed using the Prometheus operator as well, then it's going to be simple: I can just grab all these auto-generated files and apply them manually from my local branch.
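A minimal sketch of that manual apply step, assuming the generated alerts are Kubernetes manifests in a local directory and that a kubectl context points at the pre cluster; the paths, context, and namespace are placeholders:

```python
# Sketch: apply locally generated PrometheusRule manifests to one cluster.
# Directory, namespace, and context name are assumptions for illustration.
import pathlib
import subprocess

RULES_DIR = pathlib.Path("rules/autogenerated")  # placeholder path
CONTEXT = "pre"                                  # placeholder kube context
NAMESPACE = "monitoring"                         # placeholder namespace

for manifest in sorted(RULES_DIR.glob("*.yml")):
    subprocess.run(
        ["kubectl", "--context", CONTEXT, "-n", NAMESPACE,
         "apply", "-f", str(manifest)],
        check=True,  # stop on the first manifest kubectl rejects
    )
    print(f"applied {manifest.name}")
```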
C
And, if I remember correctly, Grafana is pointing towards some sort of global Thanos instance, but it has the knowledge of what environment to send queries to. So, like, it should know: if you're running a pre environment query, it knows to go talk to that specific Prometheus for that data. So.
B
Anything else anyone wants to add? Thank you for the great meeting and discussion today, I really enjoyed it. Yeah, I guess I'm going to speak to you later. Have a great rest of your day.