From YouTube: 2022-09-07 - Delivery:System Sync and Demo
Description
No description was provided for this meeting.
A
This is the 7th of September, 2022 delivery sync and demo. It's actually the first meeting we're having with the new Delivery: Systems team, using the new format. We also have a new agenda format that has been set, so let me give a brief introduction. I would like everybody to actually participate by adding agenda topics and thinking ahead of time about which kinds of topics we want to discuss in these meetings. Feel free to add topics at your earliest convenience, but also give the people on the team some time to read the agenda.
A
I guess you can start with the agenda, Ahmad. Do you want to verbalize your first discussion item?
B
Yes, the cluster recreation status. I just wanted to give a brief status on where we are with the cluster recreation. I'm still in the process of untangling one of the Terraform modules. I got some help from Skarbek, and we discussed yesterday, if I remember correctly, removing this part from Terraform, creating the cluster, and then adding this part back, so we basically start by creating the cluster itself. This part is still under discussion on the MR.
B
So hopefully today we have some progress, and we are almost there. I know this should have been done before the current meeting, but I think we're making good progress anyway. I think we are good: Skarbek has already helped us on a part of it, and for anything else I'm just pinging him, but I think we are on a good path. Anyway, the hard part is done, so we just need to resolve the discussion items and then proceed with the next part.
A
I'm going to revise my discussion point. This is a bit connected to the discussions around the team split. There are several representations of this team split, and we are still discussing the exact boundaries of ownership that we are going to have, and there was one part that was probably still left out of the discussion.
A
I guess that's because it was probably the clearest interface we might have between Orchestration and Systems: the one around metrics. Systems should take care of providing a system to store those metrics and provide interfaces that can be queried to extract them, and Orchestration should be the one that actually publishes and pushes those metrics into the system, probably with some convention or some well-defined format. So I'm bringing up this topic here to understand.
A
What are you thinking: is it clear that we want this kind of separation of concerns between Systems and Orchestration around the topic of metrics, or not? We would like to start scoping some activities around epic 757 and start working on that, so we can maybe start providing a proof of concept and understand whether it also fulfills the needs of Orchestration as a customer of Systems.
C
I think, from the talk we had, it makes sense that we own this. As far as the segregation goes, we could provide that, and part of what the issue I'm working on should account for is trying to figure out a way to query things inside pipelines. So we just provide the necessary instructions on how to do that, whether it be some sort of script or what have you. Graham has already suggested using some sort of command-line tool, and that kind of accomplishes one of these goals.
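For illustration, a minimal sketch of the kind of in-pipeline query being described, using the Prometheus HTTP API. The endpoint URL, job name, and label values are placeholders, not the real configuration:

```yaml
# Hypothetical CI job that checks deployment health from inside a pipeline.
# The Thanos URL and the label values are placeholders.
check-deployment-health:
  image: curlimages/curl:latest
  script:
    # /api/v1/query returns a JSON vector; a sample value of "1" means
    # healthy and "0" means unhealthy.
    - >
      curl --silent --get "https://thanos.example.com/api/v1/query"
      --data-urlencode 'query=gitlab_deployment_health:service{env="gprd",type="api"}'
```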
C
It would be up to us to work with the development teams to figure out what metrics contribute towards that deployment health, and I'll touch a little bit on this during my demo. Different teams want to measure differing things, so we want to make sure that we have something to start with that makes sense for what we have today, and then, if teams want to expand on that, I think it'll be up to us to support them.
A
Definitely, that's one part that we need. On the other end, as part of this 757 epic, I would also like to understand where we are going to store those metrics on the delivery side. There are a lot of points that maybe are not super clear to me. We are delivering metrics, that is, we have a way of pushing metrics right now, using a representation that was extremely useful for release management purposes.
A
The part that I would maybe like to see formalized a bit in 757 is the metric itself, that is, the definition of the metric, especially deployment health. As we spoke about last time, it is going to have a part that is probably coming directly from us, from Systems, and a part that is defined by the development team itself, so they can decide what makes sense for their particular service to be part of it.
C
We already kind of do this today: Alessio created something inside of release-tools that captures and gathers metrics, and that is an application that's running alongside our ops instance. The same Prometheus that we leverage everywhere has these metrics, or, excuse me, the same Thanos that we use to pull metrics has this data. I don't see any reason for us to drift away from that mechanism and use something new or different.
D
Yeah, I think what Alessio created was an HTTP Go server which runs on ops, and then the release-tools application, when it runs, can send metrics to it, and that gets pushed to Prometheus.
C
And just to be clear, this application that Alessio created is something we're responsible for. As far as Prometheus goes, that's all Reliability, and as for how we scrape it, I don't know how that was configured, but we would obviously own that particular little piece of configuration as well.
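That scrape configuration would presumably look something like the sketch below. This is an assumption about the setup, with a placeholder job name, host, and port:

```yaml
# Hypothetical scrape job for the release-tools metrics server on the ops
# instance; the target host and port are placeholders.
scrape_configs:
  - job_name: 'release-tools-metrics'
    metrics_path: /metrics
    scrape_interval: 30s
    static_configs:
      - targets: ['ops.example.com:8080']
```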
D
So I'm a little curious about what our policy is on using other third-party tools for metrics-related stuff. Does any team in Reliability or in Infrastructure use a third-party tool other than, you know, the standard Prometheus and Grafana?
A
I'm not really sure. I guess it's a good question, maybe one to ask in the infrastructure lounge, or maybe somebody else knows. I don't know what the policy at GitLab is for that, but I guess following the "right tool for the job" approach is something we should probably look at, and I don't think we have any limitation there.
A
Clearly, we also need to understand who's going to maintain it and whether we're going to adopt it or not. But if we have, let's say, a technology selection for the best tool for the job, or something like that, or at least use it in a POC and evaluate the pros and cons, I don't see the reason why we shouldn't use it.
D
The reason I was asking was because the GitLab product does not do anything with pipeline traces, but there are third-party tools which provide observability if you give them, say, OpenTelemetry traces from your pipelines.
A
Yeah, for sure. I mean, using open-source technology and the de facto standards that we have out there, I think that's a good way to go. But I think it's going to be up to the person who is going to work on this epic, on 757, to decide where to go. I mean, if we cannot dogfood the product because we don't have this kind of capability, I think it kind of makes sense to extend the functionality we need there, and to be mindful, clearly, of the technology we choose.
C
All right, I guess I'm going to share my screen.
C
So inside of our Grafana dashboards, I've created something simply called "deployment health", and this is tracking a metric which I documented here. We query the gitlab_deployment_health:service metric as part of our production checks, so any time someone's running the ChatOps command to flip a feature toggle, or any time we're running the production checks command.
C
It's querying Prometheus for this metric, and we're simply going to get a one or a zero value back for various components in various environments, depending on what we're querying. I would imagine that we're specifying an environment and a stage, and we're just going to get a value of zero or one back that indicates whether or not something is healthy. If it's healthy, it's a value of one; if it's not healthy, it's a value of zero. What I've done is effectively try to showcase this on a dashboard that gives us this information.
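As a rough sketch of what such a query looks like, with the metric name taken from the recording rules shown later in the demo and the label names assumed for illustration:

```promql
# Instant vector: one series per component in the selected environment and
# stage; a sample value of 1 means healthy, 0 means unhealthy.
gitlab_deployment_health:service{env="gprd", stage="main"}
```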
C
We could use our selectors, like the environment and the stage up here, to control what we're looking at, but I chose to make these a stacked chart to make it easier to read and look at, because everything being at one was kind of annoying when I was looking at it. But as we can see here, all of our metrics are, for the most part, okay. We've got two blatant holes in our metrics, but for the most part everything is fine, except web.
C
So web is looking a little goofy. I'm not trying to sit here and troubleshoot this; I'm just trying to showcase the metrics. Breaking it down further, what provides the service deployment health is the apdex and the errors queries. These are two distinct rule generations that contribute to whether or not a service is healthy, so I've broken each one of those down further, and we now know which of these is contributing towards the health of a service.
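Put another way, the service-level value is fed by per-signal series. The :apdex and :errors names below follow the recording-rule prefix mentioned later in the demo; the label values are assumptions:

```promql
# Per-signal health series that feed the service-level rollup.
gitlab_deployment_health:service:apdex{env="gprd", type="web"}
gitlab_deployment_health:service:errors{env="gprd", type="web"}
```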
C
So, for example, in this little sliver of time, we can see that the apdex was okay but the errors were not, so web was probably not healthy in this particular case. And if I hover over this, you can see the red line above it; it looks like it was healthy. So I can't explain to you why we've got holes in our metrics. But we can also adjust what service we're looking at, so you can see.
C
API looks a lot better than web right now. I wanted to create this dashboard as a learning exercise in showcasing what our tooling is doing.
C
One of the next steps I want to add to this dashboard is a breakdown of each one of these objects, because that query is ginormous. It's this massive line for just one of these, and this is the apdex one. What builds the apdex-specific health is looking at the 30-minute, the six-hour, the five-minute, and the one-hour ratios, and we're doing some fancy math that I don't comprehend on the service. I want to break this down.
C
I want to try to showcase on that dashboard what that does and what it looks like, so that we can understand which one of these windows we are violating in the case that a service is experiencing unhealthiness. That way, for whatever reason a service is unhealthy, we have an idea of what specific window we might be violating an SLI for. Well, I guess before I move on: does anyone have any comments or questions so far?
A
Yeah, let me find one. On the service deployment health: the service breakdown, that is, the per-service values that you see there, and each one of those has at least the components, the apdex and errors, that are actually broken down from the service-level one, right?
C
Yeah, precisely. Reading this chart top-down: there's deploy health, which is going to include, you know, this huge disaster of a query. I say "disaster", but it's just a very big query. And this component health is kind of that breakdown already, to an extent: it's showcasing both the apdex and the errors.
C
Hypothetically, we should be able to sit here and say: oh well, the web looks unhealthy because, say, we've violated our SLA for errors, and that's why we don't have a value here. But looking above at the red line, I see that we were still healthy for some reason. So I just don't know: we were unhealthy for errors above, but we were still healthy above, and I don't know why. But anyway, this is kind of the breakdown that you were just asking about.
C
I want to break it down even further, just inside of here, to showcase what we're looking for. But yes, we could directly query this information here. Can I showcase that?
C
So what we would probably do is, instead of type not equal to registry, it would be type equal to api, if we were just deploying the API, as an example. If I could type... okay, I just lost the ability to type. Okay, thanks, Grafana.
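The selector change being described would look roughly like this; the metric and label names follow the earlier examples and are assumptions:

```promql
# Everything except the container registry (the current behavior described
# later in the demo):
gitlab_deployment_health:service{env="gprd", type!="registry"}

# Drilling down to a single service instead:
gitlab_deployment_health:service{env="gprd", type="api"}
```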
C
There we go. If I run that query, we'll see we just have a healthy service for the API, except for the holes in our metrics, which I can't explain, but we've got a healthy service for the API. I think one improvement I would love to make is, if we're missing a metric, to send it down to zero and just graph that. But you know, that's a future improvement.
A
Would this have been impacted?
A
Yeah, there was also an incident: we disabled Canary for almost the entire day, until last night, and we had to deploy a fix.
C
If we do a little quick drill-down, we can see the Git service was unhealthy for just shy of an hour. So let's go to the Git service. We're already on the canary stage, and this is production, and we can see that for the Git service both the apdex and the errors were violating their SLIs.
C
So that massive query is a recording rule. Inside of our runbooks is a recording rule that builds this massive query. All of these are prefixed with gitlab_deployment_health:service, and there's one for errors and there's one for apdex, and not just those two.
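A minimal sketch of what such a recording rule could look like. The real rules are auto-generated in the runbooks, and the real expressions are much larger multi-window ratio calculations, so the expression here is only a stand-in:

```yaml
groups:
  - name: deployment-health.rules
    rules:
      - record: gitlab_deployment_health:service
        # Healthy (1) only when both underlying signals are healthy.
        expr: >
          gitlab_deployment_health:service:apdex
          * gitlab_deployment_health:service:errors
```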
C
But that's where those queries came from: this auto-generated file. How this file gets generated is mildly complex, and I don't want to waste time in this demo trying to re-figure that out, but we do have documentation if you're interested. Andrew has put together a very fancy... go away... oh, dang it. No way.
C
Okay, I put it in the wrong place, but I'll correct that in a little bit. We do have documentation as to how and why the metrics catalog operates the way that it does, and there are a few things that play a role in this. I don't want to go into that here, because it's a very big topic, but we have documentation, which is pretty cool, so I advise everyone to take a gander.
C
So the last thing I wanted to showcase was how to get these metrics added, and it's actually insanely easy. I'm looking at the API service in the metrics catalog. To get that added, all we need to do is, inside of the otherThresholds block, add a deployment entry and define our SLI scores. You can see the API pretty much matches our monitoring thresholds, the ones that contribute towards the EOC getting paged.
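A sketch of the kind of service-definition change being described. The metrics catalog in the runbooks is jsonnet; the field names and threshold values here are illustrative, not the actual entry:

```jsonnet
{
  type: 'api',
  otherThresholds: {
    // Separate from the EOC alerting thresholds; used by deployment tooling.
    deployment: {
      apdexScore: 0.9995,
      errorRatio: 0.9995,
    },
  },
}
```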
C
You can see the deployment thresholds match that, but we see that this is missing for a large chunk of our services; Gitaly is one of them. Effectively, if it's not in this list... let's go back to the main stage.
C
If it's not in this list, it's missing inside of our service catalog. So Gitaly, for example: we don't have it. We've got it for, you know, alerting our EOCs, but we don't have it as a deployment indicator.
C
So what we need to do is sit here and add what I was just showing in the API to all of our services that are missing it. The caveat to this, rather, is the way that that query runs inside of ChatOps and release-tools: we are querying it similar to how I documented it... maybe I documented it.
C
Okay, I didn't document it, but our query does something like "give me everything except for the container registry", which we may not want to do right off the bat, because, as we see, web is very unstable for some reason. I wouldn't want to add Gitaly to this and all of a sudden break deployments because something may be wrong with the metric. So I think what I'm going to try to do is touch release-tools first.
C
First, make sure it's querying the services that we care about, which are the ones that we know, like the API, Git, Sidekiq, and web; then add Gitaly to this and make sure it's working as desired, and then remove that "everything except" query.
C
Alternatively, I could sit here and add a feature to our runbooks where we indicate, inside of our service definition, whether we should block deployments if the deployment health is unhealthy, and then that could contribute to the query as well. So we've got options, but those are the next few things I'll be working towards on this particular issue, and that's what I wanted to showcase.
A
Thank you, good progress. One question about that: do we have anything in our production readiness review about having those kinds of metrics to add to the service catalog?