From YouTube: 2023-05-30 Delivery:Orchestration demo - EMEA/AMER
A
Okay, welcome. This is the 30th of May 2023, and this is our EMEA/Americas orchestration demo. So we have a demo item, great news, Alessio, all credit to you. Take it away.
B
Thank you. So this is not a real demo, it's kind of one, right. It's something that I realized happened in the past two weeks as part of one of the key results that we are working on, and it kind of materialized.
B
It materialized very well today in the Rails 7 experimental deployment, so I think it's a good time to share it. I call this the "blazing fast pick into auto-deploy, temporary edition", because the work is not yet complete, but there is enough to build confidence and have a blazing fast pick into auto-deploy. So I will basically share... let me share my screen and try to navigate through this. Okay, it's too big, one second, let me... okay, smaller now. Well, so, basically... let's see, it's somewhere here.
B
So basically we have a goal to remove this playbook here, which is the runbook which is... I don't know, there is this one, which is... I can't find it.
C
B
Another one, right, yeah. So basically we used to run this thing, right: when you had a pick into auto-deploy for a severity issue, then you run the pick schedule, but first you have to set the "auto-deploy tag latest" feature flag.
B
Then you have to disable it in a timely manner, and then you have to pay attention to what is happening with the pipeline, because you used to tag something that had a running pipeline, and then you never know: you may start deploying something that ends up being a broken package, because you haven't run the pipeline on the auto-deploy branch. And so this has all this complexity about "I need to enable, then disable, and what happens in between", and all of that, right.
B
So what we have right now as the intermediate solution is this. We were doing the Rails 7 deploy experiment, which means we are giving the developers the ability to run unmerged code for a period of time. Today we hit even staging, and we gave them a production canary.
B
We know this is not supposed to be merged, and from the GitLab UI I was able to pick the change itself, the commit, into the auto-deploy branch, and basically that's the equivalent of picking into auto-deploy. This is specific to the Rails 7 experiment, because it's not merged, so we are doing this to build the experimental package. But then I was able to set the "auto-deploy tag latest" feature flag for just a tiny moment and run the command to do this.
B
But the good news now is that you can quickly go back to the feature flag and disable it immediately, because now we have validation: we have pipeline validation in the rollout phase. So you can safely let it go, because by the time we roll out to staging canary, the pipeline on the auto-deploy branch should be completed. The package build takes roughly one hour, which is more or less the same time it takes to run the tests, and so you have this safe stop.
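
As a rough illustration of that rollout-phase check, here is a minimal sketch, assuming a python-gitlab client; the instance URL, project path, and the exact way of querying the auto-deploy branch pipeline for the tagged commit are assumptions, not the real tooling:

    # Hypothetical sketch: gate the rollout on the auto-deploy branch pipeline.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
    project = gl.projects.get("gitlab-org/security/gitlab")  # placeholder path

    def autodeploy_pipeline_ok(branch: str, sha: str) -> bool:
        """True only if the pipeline for `sha` on the auto-deploy branch
        finished successfully; otherwise the deployment should fail."""
        pipelines = project.pipelines.list(ref=branch, sha=sha, get_all=False)
        return bool(pipelines) and pipelines[0].status == "success"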
B
If, by the time the package is ready to roll out, the pipeline isn't complete or has failed, it will fail the deployment. So you have the ability to retrigger something if it failed and then retrigger the check, or there is a variable to override the check and continue with the deployment. But this already gives us the ability to select something that I want to expedite and pick it, because for now we still have to pick manually.
B
Is
it
specific
to
the
to
the
rail
7
experiment
and
then
create
the
package
immediately
and
run
in
parallel
testing
and
packaging
with
the
assurance
that
the
package
it
will
be
validated
at
the
end
and
so
basically
what
what
we?
The
initial
goal
for
this
task
was
to
actually
remove
the
official
flag
itself
to
always
operate
in
this
way.
So
if
we
pick
something
it
is
running,
we
always
tag
it
because
we
have
the
check.
We
have
the
delayed
check.
D
When it comes to that manual timer, or that manual job that you highlighted there: is that necessary if, I mean, if we're willing to wait the 10 minutes for the auto-deploy timer job to complete? Will that still do the same thing, assuming the feature flag is enabled? Yeah.
B
C
Oh, so I do have a question. In this scenario, when you cherry-pick the merge request into the auto-deploy branch and enable the flag so the package can be prepared, and then immediately disable the flag: what would have happened if the auto-deploy branch was, for some reason, broken, and therefore the last pipeline on the auto-deploy branch with the commit was also red?
B
The thing is that the problem would be with tagging, right. So right now, if the flag is enabled, it tags everything, even if the pipeline is red, but it will fail at the validation stage. And this is basically us losing time, because we know ahead of time that this will not pass the validation.
B
If we remove the feature flag entirely, it would be different, because we would check: is it running? I mean, not just running. The point is: did it fail, or can it potentially be a success? If it can potentially be a success, so it's running, it's created, it's in any state that is not failed or canceled, then I will tag it, because I'm betting on the fact that it will be green in one hour. If it's already red or canceled, it can't be ready in one hour without manual action, so it will not tag.
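
A sketch of that decision rule, with status names taken from GitLab's pipeline status values; the function name and the exact state set are illustrative:

    # Tag only when the pipeline can still turn green on its own.
    DEAD_STATES = {"failed", "canceled"}

    def should_tag(pipeline_status: str) -> bool:
        # "created", "pending", "running", "success", ... may all be (or
        # become) green within the hour the package takes to build, so we
        # bet on them; failed/canceled cannot recover without manual action.
        return pipeline_status not in DEAD_STATES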
C
Okay, yeah, I think I'm probably missing some context, because I haven't read how all of this works. But from what you are saying, it is going to tag the commit that is on top of the pipeline immediately, regardless of the status; it is going to be in the package, and then after that it is going to validate whether the commit is valid, and if it is valid, then we are going to proceed. Okay, okay.
B
Failures, and all those things where, basically, we need to get out of an incident with a revert or a commit. Okay.
A
And then, is that the current situation as it is today, what you just described?
B
We are aiming... the final step now is to remove the feature flag entirely, so this will always give us the fastest way to get a pick into auto-deploy into a package.
B
So you don't have to deal with the feature flag; it would just always tag picks into auto-deploys when they are running, so you always get an expedited pick-into-auto-deploy experience in any case. And so we can remove that runbook page, and there are other pages that mention how to use that flag. So the only reason we are keeping the flag, which is questionable, but it is up...
B
We can discuss it. If we want to create an auto-deploy branch off gitlab master regardless of its state, I would say I'd rather create the branch manually and set the variable, which is, I mean, less risky than starting to create packages off the top of master, which moves so fast and may have changed since you hit that button. But, I mean, one step at a time.
B
So yeah, that's the thing. Okay, we are unblocked, because we don't rely on master being green. I mean, the problem would be for the next auto-deploy branch: how do we make sure that it has the same fix in it? But this is another problem, which is about the minimum SHA that has to be included in a package, which is...
A
Another problem down the road. Currently, as a deployment feature, we could handle that with a runbook. Actually, you know, I know it's not a great solution, but our rollback runbook I think is a great example of one where we've actually done a good job of putting in a step-by-step process. Maybe we should just do the same.
D
So it sounds like... because, yeah, one of the problems that I recall is that when we pick into the auto-deploy branch, it doesn't guarantee that it's picked into every branch after, and so you're saying this won't address that initially.
B
It may give you another package 10 minutes after that that rolls it back, if the new auto-deploy preparation timer runs, right. So, basically: every time we do an auto-deploy, do we need a new auto-deploy branch? No; we create it, and then in the same job we pick on top of that, and then we tag, right.
B
So in the meantime, we're working on mean time to resolution by ensuring that we produce a package that addresses the problem, but still we need to be aware of the order of operations, and whether a new package, a new auto-deploy branch, will be generated that may not include that fix.
B
It's a huge improvement, because it's guaranteed: you add the label, and if the pick is applicable, it generates the package immediately, and in one hour it is ready to roll out. Before, it was at least two hours, plus hoping that none of the timers were overlapping and just never producing the package that you want.
D
Thanks. So I've been working on this epic for the security release automation, and it kind of focuses on the pipeline that Myra proposed and created a POC of. I've started creating some issues, kind of breaking down that POC and also including a few other related issues that we've looked at that would fit into that pipeline, and I've broken things up initially by pipeline stage, which aligns roughly with the different days of tasks in the security release task issue.
D
So I have it set up so that the work can keep going while I'm out, and I thought it'd be helpful to go over the epic and discuss where we might want to start. My initial thoughts on where we might want to start are after sort of the initial setup of the pipeline, and, you know, setting up those first tasks, or first-steps tasks, from the security release.
D
So the big thing there is the day we will merge the backports. And then the other place that I thought we could get started is the early merge stage, where we generally run the same chatops command multiple times per day, but there are some aspects there where we could improve automation.
D
And then also kind of giving them a warning a few days before their MR or their issue is about to be unlinked, so that they know about it ahead of time. But mostly I'm looking for a sense of whether we could build, you know, one or two of these things to start, in a way that we could mix into our current process over the next month or two as we, you know, continue to build out the rest of the pipeline.
D
Oh yeah, but yeah, so I'm curious to hear if other people agree or have other thoughts.
A
I guess, more specifically: if we were to cut this right down to a first iteration, is there one of those things that you've mentioned there, Steve, that you think is the one that, above all others, is causing us the most pain?
B
So I'm running this this week, so I'll kind of... I haven't concluded yet, so maybe later, after our retro.
B
I can tell, where I am right now, what I found annoying, but maybe not painful... oh no, let me rephrase, because actually I have a good one. So I think that there is room for improvement in the preparation, a lot of room for improvement. For instance, there are checklist items like "send this message to this channel", and then there is another one which is "inform this chat", and, say, the first one is Quality, the second one is Engineering Productivity, same department, and the second one has no blurb.
B
It's just "inform them that the process started", and so then you ask: should I just send the same message? Should I link that one? Basically, some of those are very actionable; for others, you have to try to figure out what was in the mind of whoever wrote the checklist, because it's not clear what you have to do: you have to inform people about something, and that's it. Checking branch status, all those things, are annoying, but it's a one-off effort.
B
Then there is the day-to-day, which is easy to forget. I mean, you have to be on top of it, because if you don't keep merging, you are delaying the time it takes to get the fixes into the next auto-deploy, and you may have something miss the deadline. So it's easy but annoying. And then I find chasing down authors really annoying: having to say, we're going to cut the release by this date, it has to be ready by this date.
B
"What do you think is the status of your security fixes?", and things like that. So if pinging authors meant writing a message saying, we started the process, it will likely complete in two or three days, or even linking back to the issue and saying the timeline is defined here...
B
So: go there, check out when we are going to consider things ready. Because the point is that there is always a security release tracking issue open at any time, so it's quite possible, and always happens, that you get new issues assigned during the preparation, and you may end up chasing this never-ending situation where every time you run the early preparation merge, you are merging something new, which delays everything, because now you have to promote it, and by the time you promote it, you may run it again and put something else into it. So that's kind of one of the problems: just clearly marking that we're not accepting new stuff.
C
Good, yeah. I was gonna say that, for me, the most excruciating tasks are the initial ones, the ones that say: okay, you need to send a message here, post a message there, notify so-and-so about this. Because, as Alessio said, the message is the same; we just need to copy and paste it across multiple channels, and then disable the Omnibus thingy, check the merge request status. Even though each of those is easy, it kind of piles up with all the tasks that you need to do, and also the final tasks.
C
For me, they are also sort of easy to do, except when they fail. For example, the sync remotes one is very painful. And also creating the versions is something that can be easily automated.
B
Creating the versions, that's something that came up with the team: when they were upgrading the page, we asked for an API, so they implemented those, but we never...
A
So, a couple of things, I guess, about this. If you do want to go down the route of putting a pipeline on things, versus having chatops commands, then it sounds like a lot of these communication-style tasks could be really good first jobs to put on that, because if they fail, it's easy to see and easy to recover from, versus something like the backport merge, which may be a bit more significant.
A
But how do we monitor and measure what is the most time-consuming part? Because the sort of overall goal for this whole epic is to get down the active hours involved with the security release. Are there any pieces that stand out as particularly time-consuming?
D
Well, yes, one of the questions I have: last year you kind of walked us through chasing down the authors and asking them, and all that kind of stuff.
D
So if there was an automated ping that, you know, just pinged all the authors and said: hey, this process has started; if your MRs have not been merged, look at this table, or look at this, to see what's wrong and try to fix it. And then, you know, having that sort of locking mechanism to prevent people from adding new things. Would you stop looking at the MRs and just trust that that tooling is giving all the communication needed?
B
I would probably stop looking at the merge requests and issues and just look at the tracking issue. I mean, it depends on where we are collecting those things, right. If we are going to ping people in the merge request itself, then I still have to go there and see if they answered or read something about it.
B
But if this goes into the tracking issue, then that is a central point, right: I go there, I see the status of everything, and if I have to dig through something, I can dig through it, but only for the things that didn't receive an answer or still have some weird status. That probably works better, because I don't have to open all the merge requests, only those that have no communications or things like that. Because I think there are two problems here.
B
One is that the table is already providing a lot of value, because I used to check all the merge requests one by one; now I'm only checking the ones that have some status that is not okay. Huge improvement. The other thing is the communication part, right: authors are not notified that the process started, so you have to find them later in the process, when you are already in a hurry. So if we could do a pre-check: let's say we started the process, the process has started.
B
We are maybe not pinging those who are okay on the first run; they're okay, you merge them, and that's it, no noise for those developers. But for everyone else that is not ready, create a ping immediately: inform them that we are actively working, that this should be their top priority, or that they should unlink if they think they can't do it. Unlink.
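
A hedged sketch of that first-run ping, assuming python-gitlab; the project path, label, readiness test, and message wording are illustrative stand-ins, not the real release tooling:

    # Hypothetical: ping only authors whose security MRs are not yet ready.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
    project = gl.projects.get("gitlab-org/security/gitlab")  # placeholder

    for mr in project.mergerequests.list(labels=["security"],
                                         state="opened", iterator=True):
        if mr.merge_status != "can_be_merged":  # not ready: notify the author
            mr.notes.create({"body": (
                f"@{mr.author['username']} the security release process has "
                "started. Please make this MR your top priority, or unlink "
                "it if you can't complete it in time."
            )})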
A
I also wonder if maybe, as a sort of additional, like a future iteration separate from this, we should start thinking about how we make this self-serve. At the moment we sort of own the whole thing, and we go and tell individual people: hey, we need you to do this thing. But I get the impression, and I can't remember if I was told this directly or whether it was just sort of implied...
A
I think I was told directly that it's very difficult for people: if, say, for example, the author is then not around, the stage group doesn't... there's no continuity, it's kind of on an individual person, so sometimes things fall through the gaps. I wonder if it might also be worth thinking about how we would get that table to a point where the expectation is more of: here is the information on the release in preparation, and stage groups are expected to actually track that themselves.
A
It might require us to think about a few extra steps around our comms and timeline, but I wonder if that might be a useful goal to get to. Because what I'm thinking of is, in the future, if we do get to having multiple security releases in the month, then, from our point of view, it doesn't really matter: if something misses one, it will just go into the next one. We should try and push that responsibility onto the stage groups, exactly like the monthly release.
A
So I wonder if there's some way... I almost wonder if there's some point where we actually put out either a broadcast, or it goes into the releases channel or somewhere like that, where we say: security release preparation is in progress, and here is the overall table. We don't have to move it anywhere for now; we could just put a link to it.
C
Do you think that having a defined date, like the ones we are planning for the monthly release, I think the third week of every month, is going to also kind of force us to have perhaps some scheduled security release, each week, each Thursday, I don't know, each Wednesday? That's going to allow us to have a more defined schedule for the phases of a security release, in a way that stage teams are going to be trained on: okay, this is when to expect to have things ready.
A
Maybe. I think there are some pros and cons to having a sort of rigid date. Disadvantages: we create more deadlines, so it puts more pressure on, and we remove a lot of our flexibility. So I don't know that we will necessarily want to, or be able to, lock this fully down. I think, probably, for now, and if we do, that's certainly going to be Q3 or beyond, so not soon. So I think we should probably for now assume that the flexibility will remain.
B
Well, something came to my mind when we were discussing this idea of people going somewhere to see the status of things. I wonder if we want to stop using the next security release as a parking slot, which is the source of all these problems, and actually have a dedicated label, something like "next security release". So when you create a security issue, the tracking issue for the specific vulnerability, I don't remember the specific name of that thing, the thing that we link into the overall issue...
B
So instead of linking it, you just apply the label "next security release", and then an automation produces weekly reports: everything that has that label will be in that report, with the status as if you were to merge it, right. So it's "not assigned", "ready to be merged", "invalid", the same logic. It just gives us that view, and when something is ready, it will be linked to the tracking issue.
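
As a rough sketch of that label-driven weekly report, again assuming python-gitlab; the label name mirrors the discussion, and the readiness statuses are a simplified stand-in for the real merge logic:

    # Hypothetical weekly report for issues carrying the release label.
    import gitlab

    LABEL = "next security release"  # illustrative label name

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
    project = gl.projects.get("gitlab-org/security/gitlab")  # placeholder

    def weekly_report() -> str:
        lines = [f"Weekly status for issues labeled '{LABEL}':"]
        for issue in project.issues.list(labels=[LABEL], state="opened",
                                         iterator=True):
            mrs = issue.related_merge_requests()
            if not mrs:
                status = "not assigned"
            elif all(mr["state"] == "merged" for mr in mrs):
                status = "ready"
            else:
                status = "in progress"
            lines.append(f"- {issue.web_url}: {status}")
        return "\n".join(lines)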
B
Yeah, so that we only deal with ready things, and then it's up to AppSec: if something is high priority, high severity, they can track it through the weekly report and make sure that it gets done in time. Again, we split the problem: gathering the status of development, which is more on AppSec and development, and making sure that everything that is ready gets into a release in a timely manner, which is more what we do as Delivery.
D
I'm thinking about the idea... I think that's a really interesting idea, to automate the linking so that it only links when something is ready, rather than putting it on the author to tell us when it's ready and possibly being wrong about it. Yeah, I think that's a really interesting idea. I'm trying to think of any implications or problems that I could see down the road, yeah.
C
It could be tricky. Or, I'm not sure about critical security releases, for example, whether those are required to be linked to the tracking issue, but perhaps not. And also we would need to consider the reduced-backports situation, in which we don't require teams to have four merge requests, but only two. Yeah, you just need to configure those edge cases.
B
We can tell people to link manually, right, if it's a reduced backport and they think it's ready. I think that the main goal here is to split grooming the status of all the security vulnerabilities out there from making a release faster and ready, right. That's the thing, because then you have a weekly report that can be assigned to AppSec, just something like this.
B
If something requires manual action, we will do it, I mean, as we do today, we'll do manual actions. But it can give a clear indication of what has to be done in a specific week.
D
So then, thinking about the entirety of the security release process: if we were to implement that type of system, then that would hopefully reduce or eliminate a lot of the early-merge-phase effort and time, which, if we look at the current security release task issue...
D
None of that is actually really listed anywhere, right. It's just, like, the early merge phase: run the merge to the default branch. So, in terms of the idea of creating a pipeline, that all could fit in as part of the automation that happens whenever an early-merge command is run. Or, I...
D
I guess it wouldn't even necessarily be necessary anymore, because the linking would be automated. So I'm thinking in terms of what's realistic for this next quarter: that seems like something we might want to build out after we've automated some of these other tasks, because that potentially would be... it seems like a bigger project to me, but it's also, you know, first-time thinking.
A
Thinking about what's involved, I suggest that what might be useful to do is this: so this week we've got Myra ready to go, and you're around as well, Steve. What about if we... So I think one of the concerns I have with setting up a pipeline and starting to run these tasks is that there's probably a little bit of overhead in actually getting a pipeline set up and in place, and we've got some documentation, and we probably want to try it out.
A
So it feels like we should do that on a small task, and, you know, remove some of the pain. The comms stuff we talked about at the beginning seems absolutely perfect to me: if we just scoped it so we set up a pipeline that we trigger in whatever way makes sense, and it does the Slack notifications and it does the JiHu thing, then we can just manually keep an eye on it for the next release, right. And if it doesn't work, here's the fallback:
A
You just post this thing in the right place, or you run the job; it's easy enough to debug, right. But at least we get to start to validate that idea of having jobs.
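
A minimal sketch of one such comms job, posting the same announcement to several Slack channels; the webhook URLs, channel names, and message are placeholders, and one-message-per-job is the design choice that keeps failures easy to see and rerun:

    # Hypothetical comms job: same blurb, several channels.
    import json
    import urllib.request

    # Placeholder incoming-webhook URLs, one per destination channel.
    WEBHOOKS = {
        "#quality": "https://hooks.slack.com/services/...",
        "#engineering-productivity": "https://hooks.slack.com/services/...",
    }

    def announce(text: str) -> None:
        for channel, url in WEBHOOKS.items():
            payload = json.dumps({"text": text}).encode()
            req = urllib.request.Request(
                url, data=payload,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # a failure is visible and retryable
            print(f"notified {channel}")

    announce("The security release process has started.")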
A
I would suggest that we focus and have, like, the comms pipeline, and not go for the full security release pipeline and attempt to do the whole thing, but just validate what this looks like as a small, scoped task. Alongside that, I think it would be interesting to try and figure out what the steps would be to flip this linking process, right. Even if it's not exactly the solution we've just discussed: what would it actually take for us to flip it away from the release managers chasing down lots of people and saying "you need to do this step, you're going to miss this", to, like Alessio said, flipping it around so that we just deal with stuff...
A
...that's ready, and we present the information that allows someone else, AppSec or stage groups, to determine if they're ready. I think that one feels similar to perhaps the patch release, where we might suddenly unlock quite a big step change. I guess my concern with automating lots of little steps in a pipeline is that it gives us some value, but it probably doesn't give us, like, 10 hours of value. I think we're going to need to do one or two of the bigger things this quarter to really get that time back.
B
It makes sense. And if we do this weekly report, or whatever it is, we can even consider pinging by PM, because...
A
Exactly. I think, similar to what Rachel does with the error budget reports at the beginning of the month: she just literally drops into a few Slack channels and goes, hey, here is this month's error budget report, take a look. You know, we have a few of those sorts of channels or meetings where we could literally drop this in and say: here is the state of the next security release, here is the state of the security fixes being prepped, go forth.
A
We will release whatever is ready. And that feels like a really nice approach that will scale with us as we alter release cadences and things like that: we won't actually have to change our process, we can just start dealing with things that are ready.
B
I want to point out something which is an implementation detail, but which I found really useful in the past week with a problem with a pipeline, since we're talking about moving things to pipelines. If you remember, a couple of months ago we implemented the retry of trigger pipelines: instead of retrying a job on the downstream pipeline, you can retrigger the trigger as a whole, and that is a huge improvement. I initially thought...
B
...this was a terrible idea, because it takes more time. But when the problem is that the downstream pipeline is broken, that's a lifesaver, because now you're no longer wasting the whole package: you can re-trigger it and it will work. So if we are going to do something like this for the comms pipeline, or whatever we call it, the security release pipeline, I'd suggest that we identify steps, but each one of those is a trigger to the same project, so that in case we need to re-run something...
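
For illustration, a hedged sketch of retriggering one such step from the API, assuming python-gitlab and assuming the jobs retry endpoint accepts trigger (bridge) jobs; the project path, pipeline ID, and job name are placeholders:

    # Hypothetical: recreate one step's downstream pipeline by retrying
    # its trigger (bridge) job in the parent pipeline.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
    project = gl.projects.get("gitlab-org/release-tools")  # placeholder

    pipeline = project.pipelines.get(12345)  # placeholder pipeline ID
    for bridge in pipeline.bridges.list(iterator=True):
        if bridge.name == "notify-comms" and bridge.status == "failed":
            # Retrying the bridge recreates the downstream pipeline whole.
            project.jobs.get(bridge.id, lazy=True).retry()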
A
I feel like I have to also comment that, I don't know how many years we've been posting that message, but it has amused me every release since we started: everyone has very, very patiently and very willingly posted that message to JiHu, and it is specifically written so my bot could do it. I have to say I haven't prioritized that, I admit, I will take full responsibility, but I am amazed that no one has just gone: what am I doing, I'm automating this.
D
Okay, and starting with those first tasks will make for an easy transition, because we can just update the chatops command so that, instead of just generating the issue, it runs those tasks and generates the issue. And then the release manager jumps into the issue from there. It also gives us a nice opening if we do want to start automating other jobs into a pipeline: we can, you know, pull them in as needed and just change the task issue from there.
A
Sounds good. And I'd also request, on the implementation, that we keep it one job per task, so that we can easily bring things in and out, and, you know, if we have to retry, it's very specific: we're just retrying this one piece of things.
A
Cool. Does that give you everything you need, Steve, to take the next steps?
A
Great. Myra, do you need anything more for the adopting-the-patch-release epic?
A
Awesome, great stuff, nice progress. I'm super impressed. I said to McKelly just this morning, given our capacity this quarter, I'm super impressed by how much we've already achieved, so well done, everyone. I think it's definitely been down to excellent coordination: you're handing off well with each other, scoping things, and, yeah, really prioritizing this stuff. So great work, awesome to see, especially the security stuff.
A
Okay. Oh, I have one final thing, actually. So when I move this, I'm gonna have a hunt around and see if I can do some scheduling to improve things. I now have a clash with the original demo time, which is, unfortunately, not gonna shift: the availability stand-up. So I'll catch up with you all and figure out what we do about making sure we have a time so that there aren't always clashes on this demo.
A
So I'll keep you posted on Slack if we want to shift things around, but we may want to do some moving.