From YouTube: 2023-01-11 Delivery:System Sync and Demo
A
Okay, so welcome everybody. This is the Delivery System sync and demo, 11th of January 2023 — I've probably already written 2022 in a lot of places, but I need to get used to it; I'll need some days to adapt to the new year. Just a small announcement I added to our agenda: please add your PTO to the shared calendar by following the instructions page. It's not only for us, the Delivery Systems team — it's for the entire Delivery group.
A
So
we
have
a
full
overview
within
results.
Something
like
that.
I
don't
know:
I
had
problems
in
the
past,
since
my
audience
were
not
showing
in
any
calendar.
I
guess
I
saw
some
of
you
also
the
same
problem.
I
know
it's
a
bit
a
bit
of
a
pain,
but
please,
let's
do
that.
So
it's
gonna
help
us
to
think
better
without
together
to
understand.
What's
available
is
not
available.
A
I added a small discussion point to our agenda, and it's about the Q3 retro. It took a bit of time to close, but I went through all the comments that you left — thank you very much for leaving them — and I also wanted to review the survey results: the survey that you filled in, the one we actually had for Q3. I don't know if it's an outlier, but I see some improvements there.
A
So I like to see that; I like to think that we actually moved the needle up a bit there. I also summarized some action points that came out of the comments that you left in the issues. So maybe I'm going to share my screen briefly and then we can just go through that in a few minutes, before moving to the interesting part of this meeting, which obviously is the demo, and not this.
B
A
Can you see my screen? Perfect. So, compared to Q2, on "contributed to team success by supporting each other, documenting, and continuing to share more context in our work", it seems we had a bit of improvement. We got two fives, four fours, and one three, so it was pleasant to see that, at least from my point of view. And "keeping our bigger goals in mind when designing solutions and choosing work" — I think, yeah.
A
We got slightly better there as well, because we actually moved to the fours a bit more, so there's a slight improvement here too. Then there's the question on challenging the status quo and looking at what the industry is doing in our problem space.
A
We also got a bit better here — I think compared to before, the one disappeared. I don't know if you were a bit more optimistic on this, but I think we as a team are kind of strengthening our identity a bit more. Maybe we also have a little bit more clarity on which direction to look, to understand what we can do better and what we cannot. Any questions around this? Did you have a chance before this call to see the results, or is there something you want to comment on?
A
Okay, so I summarized some retro actions. They are mainly what I wanted to distill from the comments that you left in the issue. One that is definitely on me is to plan release management rotations better, to avoid having a single team member fully taken up by release management. This happened for almost the entire quarter, except January, where right now the release management is not on us for the moment.
A
In November and December, 100% of release management was covered by us. It was not only a mistake that we made; there was also an availability problem that we had to face, so we ended up with a schedule that was not the best one, I understand. It also, you know, put us back a bit in our project work, in our OKRs and everything, because we had 50% of our team fully working on that.
A
With Vladimir joining, there would have been a bit more room to breathe for everybody. But I think this is something that we — mainly me — can do better for the next quarter.
A
The bot does the assignment, and we also have to go there and ping people, or at least, you know, assign it to yourself again. I think this would actually reduce a lot of stress — and I remember my early release management rotations, so be kind to me on this. That was something on top of my mind more or less the entire day when I was supposed to release: having a deadline to go back and check, oh yeah, whether the MRs can be merged or not.
B
A
C
Okay, we did discuss this issue in one of the delivery team meetings, and I think I need to update the proposal in the description.
A
Perfect, thank you very much for this. Another retro action is a bit broader, a bit less actionable, and it was: keep an issue with a checklist for the security release, pointing out the optional items. I didn't assign anybody, in case someone wants to take this, because this is actually becoming a pretty big topic for the next quarter.
A
There's a lot of discussion going on around this, together with Orchestration, and we're probably going to have a shared OKR, maybe, on how to improve the situation, or on understanding how we can just do better there. A small example: we have a new product manager joining, who is going to report to Fabian and is going to focus on scalability.
A
He's going to focus on delivery, and probably one of the first action points that he will have as part of his work will be to focus on security releases: how we can do them better, and understanding a bit better what value our customers see in our security releases. Understanding, correctly: do we need all the releases that we are doing right now? Can we group something together?
A
Can we provide, I don't know, a better view, a better understanding of our process to our customers, in order for them to upgrade? Because we saw that actually not so many of our customers are following the normal upgrade path; some of them are lagging behind. So maybe there are some requirements coming from some of our customers that are unseen to us. So I guess having a product management point of view there, and their help to collect these requirements from customers, will be a good starting point.
A
So there is not directly an issue related to this. We have some issues for, you know, measuring — creating a metric for release management and everything — but I think this is something that will be worked on as a byproduct of a lot of other actions that we'll take in the next quarter or so. The last one: patcher functionality testing, scheduled like the staging rollback parties, to ensure it works when needed. I think this came up after you came back — I think you went to Reliability to demo patcher and it wasn't working.
A
D
If I recall correctly — I'd have to go look back at the issue — the thing that broke was the inability for us to create an appropriate pipeline, or something like that; it was missing something, if I remember correctly. I think just having that as a quick ability to test the fact that patcher kind of works would be a good stepping stone.
B
E
D
So I think, if we wanted to test this for real, we would probably want to make a more cohesive patch — one that does something dumb, like creating a comment instead of a random file, and applying it. But we'd still want to make sure that it runs in its own, isolated fashion, so all deploys would need to be paused during its execution, just to make sure that we don't run into any awkwardness.
A
That would be great. Do you have an issue for that? I think we can just maybe add a proposal to do that, and then we can decide what the best place to hook it up is. I don't know if our release issue, with the current steps and timeline, is a good place, or if it's going to become a bit too packed — you know, having the staging rollback practices two, sometimes three times.
D
B
C
Another issue, at least for me — I don't know about others — is that I've never gone through a patching process.
C
A
C
It would be useful to actually do it, like how we do the staging rollback tests, maybe.
A
B
E
So, as you know, we have this — I'm always fighting with this.
E
Sorry — so, as you know, we have this template functionality here in qctl. If you ask what the template does: it actually generates raw files in a directory called manifests. It's a kind of useful thing, but the readability of it is a little bit complicated, because it's raw YAML that it goes to.
E
At least sometimes it's super hard, you know, to find things. And the thing is, what I want to have is that we could try to implement the changes to our configuration — because we have the charts, and we have the configuration, the contributions, and the helmfiles, right? Here, for example: this is the config for the pre-production environment, and these are, like, the generated files. If I change something here, it's very difficult to actually find where it ends up, and they don't make a lot of sense either, because they are just the calls to helm. But what is pretty important, for me as a developer and for the SREs who make the changes, is —
A
You're cutting out a bit, Vladimir.
E
I think my USB connector is kind of broken, so I have to replace it. So, well — basically, this command will generate the values out of the helmfiles, and we can inspect what we have changed. So when you run it, you will have the values files here. Unfortunately, this command doesn't support a directory.
E
You cannot specify where they are generated, but, well, I just added them to .gitignore so you don't commit them. And these are, like, the values that will be given during the helmfile apply to the helm chart, and this is very, very useful, from my point of view.
E
It has multiple use cases. The first use case is: you can actually see what has changed in the values when you change those files. The second use case: since the whole installation of GitLab is done by the chart and the values files, you can have your own cluster, and using these values files you can actually instantiate — you can replicate — the environment, for example the pre-production environment.
E
You can replicate it in your own cluster with exactly the same values, so you don't need to break pre, or, you know, commit something and create an MR, something like that. You just test it on your own cluster. It can be a very simple GKE Autopilot cluster that doesn't burn any money if you don't use it. But at least there is a chance that you can —
E
— you can test these things. And also, having the values will help us to actually migrate from this very complicated setup to a more, I don't know, GitOps-style way of deploying the code. So I think it's a very simple change, but I do believe that having the pure values generated from all these files is a good approach, and it helps a lot, at least for me. Yeah, that's what I would say.
D
E
The actual manifests, with all the resources — they are split across multiple directories, split into multiple resources. If you change something here, the changes go to different manifests, and it's very hard to spot them — to spot the changes. But the helm values are actually what you own, and you don't really need those manifests, because the manifests are managed by Helm, by the Helm chart; they're part of the Helm chart. And if you extract the values and feed them —
D
A
Yep. Can you show us which possibilities this is going to open up for us, Vladimir, in the immediate term?
E
So, I'm thinking that, again, we have two challenges now, as far as I see. The first challenge is having a better development workflow and, like, accelerating the changes on this infrastructure, because there is no way we can —
E
— we can test the changes without opening an MR and going through the whole pipeline. And, you know, you might change something in the generic values — for example, pointing a service to another API — just in the generic file, rather than in a particular environment file.
E
We desperately need ephemeral environments, or something that we could use to test the changes locally, or using, you know, a staging range or something like that. Right now, changes have to go to pre in order to verify the way they behave, even if you only changed one environment. So this will help us test the changes we've made without going to the pipeline — without computing the whole pipeline — and also to extract those changes, because, again, if we are thinking of improving what we have —
E
— my suggestion would be to switch to a GitOps model. And I know the Reliability team is already working on adding Argo CD, and the product team is working on adding — oops, it's recorded. Sorry.
E
Okay, so they are working to add some GitOps functionality, and we can dogfood this thing. And we can use it, because the idea of helmfiles and GitOps is the same: basically, you have the chart, which is the code, and you have the chart values, which are the configuration.
E
If we extract these values and put them in the git repository — and we already have the charts — we can just feed those values to these charts using GitOps principles, using GitOps tooling. And, from my point of view, this will drastically improve the velocity and the visibility of changes, and we will be able to test those changes on our ephemeral environments.
E
So, those things. This is one simple step towards that, but I think it's kind of important, from my point of view.
F
D
I don't have any questions, just a comment. I think this is fantastic, because, say, if you're trying to interrogate what values are going to be applied to cluster B in production, it's very difficult to look at all of this in a cohesive manner, and that's precisely what this is doing. There's always been a little source of contention between what files should be modified, and where, in order to target specific things, and this kind of provides a little better visibility into what our repository currently does. So I enjoy this change.
D
I think, maybe, as a next step, we could try to figure out what we could do for what Vladimir mentioned: it's very difficult for us to test anything locally — it's just not currently possible in any way, shape or form. If we were to spin up all the stateful services locally, and then apply this, we'd still need to modify various values in order to point a GitLab installation to those local installs, because otherwise we're trying to leverage Consul and Redis and Postgres in an environment —
D
That's
not
local
to
us,
which
is
very
dangerous.
It
would
be
interesting
for
us
to
see
if
we
can't
figure
out
how
to
create
environment
or
ephemeral
clusters
that
enable
a
local
development
style
of
test.
I.
Think
that'd
be
an
interesting
step
forward
from
here.
E
I think we have an issue about the ephemeral clusters already. So, honestly, I'm very keen to start working on that, since I did it in the past already. I don't know what the priorities are, but from my point of view this is my priority, because it will improve the velocity of changes and the quality of the stuff that we are delivering.
A
Okay, so let's do one thing: let's look — the ephemeral clusters topic, I think, is an epic, it's not even an issue. If that one has anything that makes sense, put this in as the next step; if not, let's create an issue for that, and for how we plan to do it. I don't know — you, Vladimir, since you contributed to this conversation, do you want to do that? And, priority-wise —
A
We
need
to
see
a
bit
also
on
the
timeline
of
orchestration,
because
ephemera
class
is
something
orchestration
is
working
on,
but
please,
let's,
let's
concretize
these
on
an
issue.
At
least
we
don't
lose
the
we'll
lose
the
the
trail.
All
these
kind
of
things
and
putting
up
our
gctl
can
work
on
that.
A
B
D
Okay, so Ruben created an issue a little while ago about trying to figure out how to add pipeline names to various repositories that we manage, in order to grab tracing data and look at things in a much better fashion in the future — so it's kind of a combo issue. I've been working through this, and some of you are already familiar with some of the work that we've already done.
D
If you go to the release-tools pipelines inside of ops, we've already named a large chunk of these pipelines. So, you know, anytime a ChatOps command is run, we get that information in the pipeline name. Some of our jobs are not run by ChatOps — like things that are executed via scheduled jobs — and those are perfectly named too, so they're easy to find, versus being named after the last thing we merged and whatever that pipeline would be like.
D
That one is probably named something like "Ruben merged this branch name", which is highly annoying. But now we can see things in a lot better fashion — the release coordinator pipelines will have this fun stuff in here — so it makes it easier to look at when you're coming from the pipelines page, instead of trying to search through Slack to find the appropriate pipeline.
D
This will also make it easier to look for things inside of our trace logs as well, which Reuben will probably demo here shortly after me. But we've also added the same thing to our deployer, so now all of those pipelines are named in some way, shape or form: we know that they are an auto-deploy, we know what environment we're targeting, and we know the version that we're trying to install. So now we could just sit here and say, hey, if we want to search for a given pipeline, theoretically I should be able to do just a plain string search for this, and hopefully that pipe— never mind, raw text search is not supported. Sorry, searching is not supported, everyone, my apologies. But at least we have better visibility if we're coming from the opposite direction of trying to find pipelines — because sometimes, at least for me, I usually go to the repository before I —
D
Go
to
slack,
which
maybe
that's
a
me
problem
but
I
feel
like
this
is
better
overall
I'm
going
to
continue
this
work,
because
QA
kind
of
suffers
a
similar
issue
where
everything
every
single
pipeline
is
named
after
the
last
branch
that
was
merged,
which
is
kind
of
annoying,
so
I'm
gonna
I
am
actively
trying
to
work
on
that.
Qa
is
set
up
slightly
differently
with
the
way
they
inherit
projects,
so
making
sure
I
do
that
without
breaking
things.
D
That's
going
to
be
a
challenge
but
I'm
hoping
to
try
to
figure
that
out
today.
Unless
you
have
any
questions,
that's
all
I've
got,
but
this
is
related
to
all
the
work.
Reuben
is
doing
so
I'm
kind
of
curious
as
what
Reuben
has
to
showcase.
C
Not a question, just a comment: this will help us with calculating the metrics — like, you know, the number of times we had to retry QA jobs in a deployment pipeline. The plan there is to have webhook events for each job sent to delivery-metrics, and delivery-metrics now needs to filter out the pipelines —
C
That
is
not
interested
in
because
the
way
webhook
Services
feature
works
in
the
product
is
that
you
have
to
add
a
webhook
API
or
the
API
URL
to
a
project
so
like
every
single
event
or
job
event
or
pipeline
event.
Whatever
you
chose
every
single
event
of
that
type
in
that
project
will
go
to
your
to
your
app
so
like
we
have
QA
pipelines
that
we
are
interested
in.
But
if
we
add
the
the
webhook
API
URL
to
that
project,
you
will
get
events
for
other
QA
pipelines
that
you're
not
interested
in.
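A toy sketch of the kind of filtering delivery-metrics would need on its side (the pipeline-name prefixes and the `pipeline_name` payload field below are illustrative placeholders, not GitLab's exact webhook schema):

```python
# Toy filter for incoming job-event webhooks. GitLab delivers every
# event of the subscribed type for a project, so the receiving app has
# to drop the pipelines it is not interested in itself.

INTERESTING_PREFIXES = ("Auto-deploy", "Coordinated pipeline")  # illustrative

def is_interesting(event: dict) -> bool:
    """Keep only job events whose pipeline name matches the allow-list."""
    if event.get("object_kind") != "build":      # GitLab job events use "build"
        return False
    name = event.get("pipeline_name", "")        # placeholder field name
    return name.startswith(INTERESTING_PREFIXES)

# Usage with made-up events:
events = [
    {"object_kind": "build", "pipeline_name": "Auto-deploy: gprd 15.8"},
    {"object_kind": "build", "pipeline_name": "Some merged branch"},
    {"object_kind": "pipeline", "pipeline_name": "Auto-deploy: gprd 15.8"},
]
kept = [e for e in events if is_interesting(e)]  # only the first event survives
```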
C
It's actually not supported on the API either, so there's no way you can use it. It was supposed to be implemented this milestone, but I don't think that's going to be possible.
A
D
It's whatever name you want — it's generated. Yeah, our CI config allows us to shove a variable into the name now, so they're partially unique, in the sense that we could put whatever we want in there. In this case, we've been putting the version that we deploy into the pipeline name.
D
It's a variable that we have inside that's available to the entire pipeline.
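For reference, the mechanism being described is GitLab CI's `workflow:name` keyword, which can interpolate CI variables; a minimal sketch (the variable names are illustrative, not the team's actual config):

```yaml
workflow:
  # Name every pipeline from variables visible pipeline-wide,
  # e.g. the target environment and the version being deployed.
  name: 'Auto-deploy: $DEPLOY_ENVIRONMENT $DEPLOY_VERSION'
```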
A
D
Yeah, so the part we need to be cautious about — and Reuben knows more about this, so he can confirm or tell me I'm wrong here — is that if we are triggering a pipeline, we are sending a host of variables to that downstream pipeline. I ran into this.
D
This
is
how
I
know
if
you
use
the
same
variable
to
name
or
if
you're,
using
the
same
variable
in
the
pipeline
name
configuration
you
might
end
up,
creating
a
set
of
named
pipelines
in
that
Downstream
pipeline.
That
is
the
exact
same
name
of
your
Upstream
pipeline.
So
I
ran
into
this
with
deployer,
where
everything
was
named:
release
according
to
pipeline
or
I
think
it's
according
to
pipeline.
Whatever
it's
called
inside
of
release
tools
was
also
applying
to
deploy
by
accident,
because
the
variable
was
the
same.
D
So
we
just
need
to
be
conscious
that
we
don't
use
the
same
variable
naming
schema,
that's
used
elsewhere,
I,
don't
know
how-
and
this
is
simply
due
to
CI
variable
inheritance
like
there's
not
much.
We
could
do
about
this
unless
we
try
to
create
a
new
feature
or
modify
calculab
Works
I'm
working
around
this
by
simply
using
a
different
variable
name
and
setting
that
variable
differently
elsewhere.
So
it's
easy
to
work
around.
D
We
just
meet
conscious
if
we,
if
we
contributing
to
other
projects
that
leverage
this
similar
feature
that
we're
not
stopping
on
them,
with
a
variable
that
we
end
up,
passing
down
to
them
and
ruining
the
names
of
their
Pipelines.
A
Yes,
I,
actually
there's
a
set
to
the
point.
I
was
trying
to
touch
right,
so
thank
you
for
for
that
Ruben
I.
Think
as
part
of
the
Epic.
We
had
something
about
establishing
some
conventions
right
to
avoid
the
some
some
problems.
B
C
It happens because GitLab has variable precedence rules, and trigger variables are right at the top, so they override anything else. It would be nice if there was a way to exclude certain variables from being passed to the downstream pipeline, but there isn't.
D
C
The best way for now, I think, is what John did: simply name the variable something less generic. So, instead of using "pipeline name" as a variable, you can use "release-tools pipeline name", or "deployer pipeline name", or "QA pipeline name". Even if it gets passed downstream, the downstream pipeline is unlikely to use the exact same name: if "release-tools pipeline name" gets passed to deployer, deployer is not going to be using a variable called "release-tools pipeline name".
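A sketch of that convention (all names illustrative): because trigger variables are forwarded downstream and sit at the top of the precedence order, prefixing the variable with the project name keeps it from colliding with a downstream project's own pipeline-name variable:

```yaml
# Upstream project (e.g. release-tools) — a project-prefixed variable,
# so even when it is inherited by a triggered downstream pipeline, that
# pipeline's own workflow:name variable is unaffected.
variables:
  RELEASE_TOOLS_PIPELINE_NAME: 'Coordinated pipeline $CI_PIPELINE_IID'

workflow:
  name: '$RELEASE_TOOLS_PIPELINE_NAME'
```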
C
D
Yeah — I really think this is a product documentation issue versus an us issue, primarily. But I do think it would be beneficial if there were some sort of convention, because we create all kinds of triggers to all kinds of pipelines and projects, so it's highly likely that there's going to be a collision.
D
That
said,
I
think
we're
one
of
the
first
few
teams
to
really
take
full
advantage
of
this
feature
so
like
we're
in
a
good
position
to
like
get
started
and
accidentally
create
a
wrong
term,
but
system
of
variables
that
end
up
being
used
with
a
wide
variety
of
projects
that
don't
get
modified
later,
because
it's
already
heavily
leveraged.
At
that
point,.
B
F
A
C
This should be pretty easy to add to the documentation for workflow name. I can edit it myself, or if someone else is interested —
A
Can
wait?
Let's
make
sure
that
we
say
you
know
this
is
a
proposal
because
we
Face
these
kind
of
problems.
If
someone
is
about
the
proposal,
I
mean
they
can
feel
free
to
to
promote
something
better.
I.
Just
don't
want
to
find
myself
in
a
previous
situation
where
I
found
myself
in
the
past,
where
everyone
is
kind
of
like
Wild
Wild
West,
proposing
whatever
maybe
they
want
to
put
and
then
all
the
data
was
messed
up.
That's
it.
D
In
my,
in
my
opinion,
I
think
minimally.
We
should
document
that
this
is
just
something
for
people
to
be
aware
of
like
this.
It's
not
documented
that
triggers
it's
probably
document
that
triggers
send
a
host
of
values
or
variables.
We
just
need
to
make
it
present
or
advertise
on
the
workflow
rule,
documentation
that,
if
you're
going
to
use
a
variable,
keep
in
mind
what
you're
receiving
from
triggers
Upstream
because
of
variable
inheritance
I
think
that's
what
needs
to
be
documented
for
our
purposes
and
naming
I.
A
D
C
I think — go ahead. Sorry, I was gonna say: I know what McKelly is talking about in terms of the Wild West and metric names, but this is different, because metrics can be added by the hundreds, while in this case it's basically one variable being used inside your project to name your pipelines, and changing it isn't very complicated. You just —
C
Search
and
replace,
basically
so
no
I
think
this
is
kind
of
not
as
big
a
problem
as
as.
A
Okay, perfect. I also agree that the first, probably minimal, iteration is just to document that this might happen to people. So I'll leave you with it — someone, you, Scarbeck, Ruben, whoever wants to take the action, to maybe document that in the workflow documentation and add a disclaimer, at least: you know, hey, be careful, this could happen.
A
B
F
C
Yeah, I was just thinking of showing — so, what we've been doing over the last week is merging the code for tracing pipelines into master, so I can just show what the current state is. There are still two MRs waiting to be merged, but we can see the current state.
B
C
So, currently, we are tracing every single job in the pipeline, as well as bridged downstream jobs — bridge jobs are a product feature that allows you to trigger downstream jobs.
C
What we are not tracing currently is manually triggered downstream pipelines. For example, the wait-CNG and wait-Omnibus jobs trigger downstream pipelines in a different GitLab instance, which is why they cannot use the bridge feature; they trigger the downstream pipeline manually using the trigger API, and we don't trace — we don't detect — that a downstream pipeline was triggered manually. That's one of the MRs waiting to be merged. Other than that, we also don't link retried jobs; that's the second MR waiting to be merged.
D
C
I'm wondering if it would be useful to also include sections within each job.
C
I don't know — what do you mean? So, if you open a job —
B
Picture
and
I
I
I
I
think
you
know
it
will
reduce
their.
B
C
So these can be parsed and added as sub-spans inside.
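GitLab job logs delimit sections with `section_start:<unix-ts>:<name>` and `section_end:<unix-ts>:<name>` markers, so a sketch of turning them into per-section durations that could become sub-spans looks like this (the log excerpt is made up):

```python
import re

# GitLab job logs mark sections with section_start:<unix-ts>:<name> and
# section_end:<unix-ts>:<name>. Parse them into durations (seconds)
# that could be attached as sub-spans of the job's trace span.
MARKER = re.compile(r"section_(start|end):(\d+):([\w.-]+)")

def section_durations(log: str) -> dict:
    starts, durations = {}, {}
    for kind, ts, name in MARKER.findall(log):
        if kind == "start":
            starts[name] = int(ts)
        elif name in starts:
            durations[name] = int(ts) - starts.pop(name)
    return durations

# Made-up log excerpt: a 120 s build section, then a 360 s artifact upload.
log = (
    "section_start:1673430000:build\n...build output...\n"
    "section_end:1673430120:build\n"
    "section_start:1673430120:upload_artifacts\n...\n"
    "section_end:1673430480:upload_artifacts\n"
)
durations = section_durations(log)  # {'build': 120, 'upload_artifacts': 360}
```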
B
C
Sections are useful for some pipelines, like wait-Omnibus — the wait-Omnibus jobs in the downstream pipeline use sections, I think — and it's sometimes useful to see where time is spent. I remember we had a case some time back where jobs had started taking a long time, and what was actually taking the time was not, you know, the main part of the job; it was the last section, where it uploads artifacts.
C
No,
we
can
definitely
do
it
selectively,
so
we
can
like
choose
certain
jobs
or
certain
Pipelines
to
to
trace
the
sections
yeah.
A
What I'd avoid is tracing sections everywhere just for the sake of it. I think that's probably not going to bring the value that we expect, or maybe it'll even just cause too much noise, and people will go: hey, okay, you know, too many things on the screen to look at.
D
C
We can start by adding it only to certain pipelines and, if it's useful, then expand it.
C
D
C
Yeah — oh, that's another thing. Even though right now we will only be automatically tracing coordinator pipelines, the tooling for tracing pipelines can be used to trace any pipeline. For example, I tried using it to trace the main GitLab MR pipelines, and it works for that as well. So it's generic enough to trace any pipeline. Great.
C
What would be nice for the API to provide us is the amount of time a job remains queued — for example, waiting for a runner to be available.
C
A
Is that available somewhere, and just a matter of exposing it, or is it really simply not available?
C
So
I'll
just
show
a
couple
of
issues
that
the
observability
team
is
working
on
to
improve
our
user
experience
with
their
product.
B
C
The other issue relates to Jaeger Service Performance Monitoring. Jaeger has a feature called Service Performance Monitoring, which gives you sort of a bird's-eye view of traces.
C
So that's what SPM gives you: it creates RED metrics out of traces — that's request, error, and duration. Request and error will probably not be very useful for us, because those are more useful for request/response types of traces, but duration metrics will be very useful, because they give you things like heat maps for each span.
D
I wonder if errors would be good to have for creating some sort of visualization that tracks commonly failed jobs — potentially, if there's something that's always failing on a regular basis, that might be an interesting data point this might be able to provide. QA is a good example, I feel.
C
Yeah, that might be useful. They provide two metrics, calls_total and latency. So if calls_total has a label separating out each job — each span — then we might be able to do that.
C
A
Yeah, if you can link the issue, that would be great, because this is going to dictate whether we're going to use this, or whether we're going to POC other —
A
Other
function,
other
solutions
to
do
that
right
because,
as
we
spoke
about
me
and
you
in
our
101,
this
is
like
every
dress
is
great.
My
interest
is
great
in
a
moment
where
you
can
do
aggregation
and
comparison
and
analyzing
Trends.
Otherwise,
it's
just
like
okay.
A
We
know
how
much
it
took
in
these
and
so
on,
and
we
know
that
maybe
once
we
had
a
job
that
took
blocked
me
than
expected
and
we
can
pinpoint
that,
but
we
probably
want
to
see
outliers
and
Trends
to
actually
you
know
trying
to
build
up
into
situations
where.
C
Yeah
yeah
exactly
so
yeah.
This
will
be
very
useful
for
us.
I
will
say
that
SPM
is
an
experimental
free
feature
right
now,
but
yes,
it'll
be
useful
for
us.
A
I
just
have
looking
forward
to
it.
Oh
sorry,.
D
I
just
have
one
question:
Reuben,
it's
103
degrees,
Celsius
somewhere
near
you.
Do
we
need
to
be
concerned
on.
A
I
thought
it
solution
was.
D
B
C
It is boiling, but it's been boiling for the last year — more than a year, two years almost now — and it seems to be running fine, sir.