From YouTube: 2023-02-07 - Delivery:Orchestration demo - EMEA/AMER
A
Okay, welcome everyone. This is our 7th of February EMEA/AMER orchestration demo. So, let's see, Steve, you've got the first demo.
B
Okay,
so
I
have
a
few
updates
that
I've
been
working
on
for
the
the
extended
maintenance
policy
concerning
the
updates
to
danger,
and
so
what
this
Mr
is
doing
is
it's
adding
some
additional
guards
against,
or
it's
specific
around
the
package
and
test
pipeline.
So
the
end-to-end
package
and
test
pipeline,
it's
a
pretty
expensive
pipeline
to
run,
and
we
we
discovered
in
implementing
it
that
it
actually
does
run
on
all
stable,
Branch
Mrs.
B
So it's an automatic thing: on the canonical, or I guess the master branch, it only runs manually, whereas on stable branches it always runs. That's actually kind of a benefit, so we don't have to worry about trying to force people to run it.
B
We also check for that, and then have another error that will come up and fail the danger-review job. And then, last, in the general Danger output we just add this block of text, kind of pointing engineers in the right direction: to say that this job must be run, and if there are failures, we need to ping a Software Engineer in Test to confirm whether or not the failures are okay.
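For illustration, a minimal Dangerfile sketch of the guidance block described here; the wording and the stable-branch regex are assumptions, not the actual gitlab-org implementation:

```ruby
# Guidance text rendered into the Danger output on stable branch MRs.
GUIDANCE = <<~MARKDOWN
  ## Package-and-test

  The end-to-end package-and-test job must be run on stable branch MRs.
  If it reports failures, ping a Software Engineer in Test to confirm
  whether the failures are acceptable before merging.
MARKDOWN

# Danger's gitlab plugin exposes the MR payload; stable branches look
# like "15-9-stable-ee" (regex is an assumption for this sketch).
stable_branch = gitlab.mr_json["target_branch"].match?(/\A\d+-\d+-stable(-ee)?\z/)
markdown(GUIDANCE) if stable_branch
```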
B
And I think in the issue where we've been communicating, it was also mentioned that if that person's not available, they can just ping the quality Slack channel. So I'll add some details to this, I suppose, to make that a bit more clear. And so here's an example of when that actually happens: I just pushed these changes to a stable branch and added the label.
B
I
learned
quickly
that
you
can't
just
add
the
pipeline
expedite
label,
you
either
need
it
to
be
Master
broken
or
it
needs
to
be
revert.
So
in
this
case
I
just
labeled,
it
that's
a
revert
in
order
to
make
this
work,
and
then
here
we
have
the
failures,
so
those
two
failures
show
up
and
then
the
extra
bit
of
text
is
at
the
bottom
here:
I'm,
not
sure
if
it's
easy
to
move
that
up
anywhere,
because
all
of
these
always
will
be
at
the
top
I
potentially
could
move
it
above.
C
One point: could we make something to make that text more notable? Because Sanat mentioned that it is very important that, if the pipeline fails, someone from quality investigates before this is merged. So I don't know if perhaps we could ping the merge request author there.
C
You know, like with a thread. I don't know if that is possible. Yeah.
B
I'll see what I have access to, to put here. I'm pretty sure we have access to people here, so we should be able to ping people. So yeah, I think I'll try to set it up where it just pings the author, so they see that. But to look a little bit into the changes: this builds on what we already built initially for the stable branch checks.
B
So here's that message that we can see; that's just plain text. For some reason I had a lot of trouble: we have the main set of checks built into an actual class in the tooling folder, and I really struggled to find a way to get this markdown to render from that file, so I ended up just putting it back in the Dangerfile.
B
Since it's just a block of text, I was going to ask in the review if there is another way, but since it's just a block of text, it's not really a big deal, I think, if it stays there. But then in the actual area for testing we've got the...
B
...these new rules around the package-and-test. And then, if there are only documentation changes in the MR, we're also allowing it to bypass these rules, because we don't need the package-and-test for that. And then here are those two error messages: one fails if it has the pipeline-expedite label, and the other one fails if the package-and-test has not run.
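A sketch of those two failures, with the documentation-only bypass; the label name and the `doc/` path check are illustrative assumptions, while `helper` is the gitlab-dangerfiles plugin:

```ruby
# Skip both checks when the MR touches only documentation.
docs_only = helper.all_changed_files.all? { |path| path.start_with?("doc/") }

unless docs_only
  if helper.mr_labels.include?("pipeline:expedite")
    fail "Stable branch MRs must run package-and-test; remove the " \
         "pipeline-expedite label unless master is broken or this is a revert."
  end

  # package_and_test_job is found via the API; see the lookup sketch below.
  fail "The package-and-test job has not run." if package_and_test_job.nil?
end
```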
B
That was an interesting one to dig into; I had to find out how to get that information. Luckily, all of the Danger stuff is set up nicely: we have access to the gitlab gem and can easily make some API calls based on the pipeline ID, and that allows us to find that job. So that's kind of the basics of how all that works, I think. My main questions coming into this demo were...
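A rough sketch of that job lookup using the gitlab gem; the project path, job name, and token variable are illustrative assumptions:

```ruby
require "gitlab"

# Client for the GitLab API, as the gitlab gem provides.
client = Gitlab.client(
  endpoint: "https://gitlab.com/api/v4",
  private_token: ENV["GITLAB_API_TOKEN"]
)

# CI_PIPELINE_ID is set by GitLab CI for the current pipeline.
pipeline_id = ENV["CI_PIPELINE_ID"]

# List the pipeline's jobs and find the package-and-test one by name.
jobs = client.pipeline_jobs("gitlab-org/gitlab", pipeline_id)
package_and_test_job = jobs.find { |job| job.name.include?("package-and-test") }
```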
C
Yeah, I do have a comment. The package-and-test pipeline takes a while to execute; I think it takes one hour and 30 minutes or something, so it is a long pipeline. So the workflow is going to be: they are going to open a merge request targeting a stable branch, and a pipeline is going to run.
B
Yeah. The way that it's set up is it's only checking if it's manual, so that's the only case where it really fails: it'll fail if it's manual or if it doesn't exist at all.
B
So, if it's completely skipped. We could add some other statuses in there. We don't want to add failed, because we're allowing it to fail; we just need to check with the quality engineer. And we could add canceled: if for some reason someone decides to manually cancel the job, we should fail Danger, assuming they canceled before Danger finishes. But otherwise, it's only manual that will fail automatically; otherwise it will keep running.
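Continuing the lookup sketch above, the status handling described might look like this; a sketch only, not the actual rule:

```ruby
# "failed" is deliberately allowed: quality reviews those failures instead.
case package_and_test_job&.status
when nil
  fail "No package-and-test job was found in this pipeline."
when "manual"
  fail "The package-and-test job is manual; please run it before merging."
when "canceled" # the optional extra status discussed above
  fail "The package-and-test job was canceled; please re-run it."
end
# "failed" and "success" fall through; merging proceeds once quality confirms.
```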
A
What is the workflow if two people merge onto that stable branch in less than an hour? Does that cancel the pipeline, or do they both get their own independent package-and-test pipelines?
D
Yeah, but it's after the merge. Before you merge, you're running the merged results pipeline, so you have a virtual point in the history which is whatever is on the target plus your changes, and it lives in the context of the merge request. So if they push again on the merge request, it may cancel, but it cancels the old run on that same merge request.
C
I'm concerned about authors not seeing this message that, hey, if the package-and-test pipeline fails, please ping a quality engineer before merging, because technically nothing actually prevents them from merging if we don't have a quality approval.
B
I think one thing we could do, and I'm fairly certain this would probably be a bit more work, is update the pipeline rules themselves. So, outside of Danger, if that pipeline fails, kind of similar to how we have the notifications on pipeline failures for master (though not for stable), we could say: if that job fails, we post a comment to the MR and ping the author that they need to notify engineering.

I think that would probably be the best way to do it. It would take a bit more work, because we'd have to write some additional rules around that, but I guess that's probably not too bad. I can look at that and perhaps open it as a follow-up to this.
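A sketch of that proposed follow-up, assuming a consumer of GitLab's job event webhook; the merge request lookup helper and project path are hypothetical:

```ruby
# React to a job event: on a failed package-and-test job, comment on the
# MR and ping its author. Payload keys follow GitLab's job event webhook.
def handle_job_event(event, client)
  return unless event["build_status"] == "failed"
  return unless event["build_name"].to_s.include?("package-and-test")

  mr = merge_request_for_pipeline(event["pipeline_id"]) # hypothetical lookup
  client.create_merge_request_comment(
    "gitlab-org/gitlab",
    mr.iid,
    "@#{mr.author.username} package-and-test failed on this MR; please " \
    "ping a quality engineer to review the failures before merging."
  )
end
```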
D
Does anyone know how we handle it when the first approver approves something in GitLab? There is this immediate message that tells you, thank you for approving, blah blah, I'm going to trigger a new complete pipeline. Which sounds like we have webhooks somewhere that are doing some of that automation, so maybe it's another project that is handling this type of situation, and it's going to be easier to add those types of rules there, right?
D
So: on first approval, manually run the EE package-and-test; and then, if the pipeline failed, and it failed on that job, add a comment. I'm just thinking about what I've seen with Ruben working on the metrics side of this for Delivery: these are the job events, if I remember correctly, and basically they tell you all this information, right? So: this pipeline failed on this job.
B
That's a good point. I think I know which project that might be, and I was asking about the Danger bot... I mean, the GitLab bot. I think that's the one that posts that, and the GitLab bot is present in a handful of projects. So that's a good idea, to see if it's possible from that angle as well.
C
So, one after an approver approves: there is going to be the message to execute, or don't forget about, the package-and-test. And then another one when the package-and-test fails: we are going to post a message saying you need to get this checked by quality.
B
Yeah, I think that makes sense. So, just to be clear: I'll continue forward with this as the initial iteration, and then once we have those other comments, we could potentially drop this block from the danger-review.
A
Yeah-
and
that
makes
sense
because,
like
that
gets
Us
close
enough-
that
we
can
begin
doing
our
dry
run
testing
whatever,
whatever
that
phase
is
called.
What's
that
face
called,
is
it
the.
A
Yeah, testing, basically. If we can get to that stage, then great, and then we can add on the improvements alongside. Okay, great.
A
Great, next demo. Steve, what is the other stuff that's going on with the maintenance policy extension? I guess, do we have other things in progress at the moment?
C
Well, in progress, this is the only one. There are a couple of issues that are ready to be picked in the dashboard, regarding adding visibility about the active versions for the maintenance policy; that is one, and I think we have another one, but I can't remember at this moment. I just returned from vacation, so my brain is a little dispersed right now.
A
I wonder whether it might be worth... it would be good to know, Myra, when you think we can get to testing inside Delivery, so we can start planning that. Actually, it sounds like, although we do want the active versions for when we get developers involved, we don't need them for the Delivery testing, right?
A
So I wonder if there are other things like that that we could plan around. I'm just super keen that we get to the testing, because, I mean, it's definitely possible this all works 100% and we've thought of everything.
D
I have a question, because I don't remember if I missed some of the updates. The last time I checked the blog post generation, the task was actually generating a file on disk; we're just creating the markdown. For the three-version blog post, did we ever get to the next stage of actually creating the merge request, or making this easier to use? Or is it still that we generate the file, and then you have to do the merge request on the website repo manually?
B
I believe it opens the merge request. I can double-check that; I'm pretty sure it's set up to open the merge request. I'm trying to remember... there's an individual rake task that you can run to do it, or I think we did build it into something else. I don't remember.
C
Yeah, I think there was a rake task, but the merge request is created automatically, and then I think we just need to... it needs to be adjusted by adding the GitLab author and the...
D
...branch. When we merge things, that thing kicks in and goes there and updates the merge request. So, when we're discussing that, this one is different, because basically, when you're ready to do the release, you just have to generate the blog post.
D
I think that's already the case, right? Because basically, right now you run the old process and it will generate accumulation branches; then you merge when you're ready. And this basically gives you a backport merged into stable branches, which is the entry point for the new process, so at that point you just tag. So if you leave the old as-is, but you also merge things manually, you can just run the new process, right? Because you say things are merged.
C
...was getting some of the merge requests that have that label on GitLab. We pick the labels and just try to recreate a backport and merge it ourselves, and use the new template and just follow up on the issues.
B
So then, to answer the original question: I just looked in the code, and we do create the blog post merge request during the release prepare task. So if that is run with no versions assigned to it, then we automatically do the new blog post, because normally we would run it with a version as an argument; if it's run with no version as an argument, it'll create the blog...
D
...post. So release prepare is the task that creates the tracking issue? Yes, yes. So no versions means it will figure out that we are doing a patch release with the new process: I'm going to create a single tracking issue with the instructions for the three new releases, plus a merge request on the website with...
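A hedged sketch of that dispatch: the version argument picks between the old single-version path and the new patch-release path. Task and helper names are illustrative, not the actual release-tools code:

```ruby
namespace :release do
  task :prepare, [:version] do |_task, args|
    if args[:version]
      prepare_release(args[:version])        # old process, one version
    else
      create_patch_release_tracking_issue    # single issue, three releases
      create_blog_post_merge_request         # MR on the website repo
    end
  end
end
```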
A
Okay, cool. One small update: I have some time booked in with Sam tomorrow, to give him hopefully enough of an overview that he can answer some of the questions.
A
I've added a draft issue to your epic, Myra, so that we can start planning the rollout, like the official rollout; there are probably some pieces that come into that. We want to make sure that internal people know about it, so that support teams are aware, and product managers, and, you know, the sort of people who often do a lot of this communication.
A
We also probably want to do some sort of external blog post to explain what's changing and why, so I'll chat with Sam about what might be involved with getting those sorted out. And then I have a couple of other questions for him, which I don't think he'll have an immediate answer to, but they'll be good ones for him to help us figure out. One is going to be around the actual rollout date; Myra and I chatted about this.
A
I have also added a question for him around what might be the expectation once we do switch, because we have a slightly awkward switchover period. Assuming we do switch on 15.11, then we will end up having: 15.9 gets bug fixes as the current version, then it's got security fixes; we release 15.10, and 15.10 is getting bug fixes and security fixes while 15.9 is still getting security fixes; then 15.11 is getting bug fixes, and we also open up bug fixes for 15.9 and 15.10.
A
So the question really is: at that point, are we going to need to take everything that we patched in 15.10 and backport it, essentially, to 15.9, or is that going to be okay? It's only going to be a one-month oddity before the support changes, but I'm going to let Sam go and figure out what the expectations might be around that from a user perspective.
D
I think we had this conversation where we were saying that the expectation could be that everything in the support window receives every bug fix. But yeah, ultimately it's kind of a decision for the product manager, together with the team, right? So it's not ours; it's the product manager's.
A
Yeah
I
I
think
once
some
has
figured
out
the
answer
to
that,
we
can
then
start
to
make
a
plan
either.
It's
a
no,
in
which
case
great,
like
we
just
keep
going
or
if
it's
a
yes,
then
I
think
we
need
to
probably
just
think
about
you
know:
do
we
do
something
where
we
basically
ask
everyone
who
has
catch
something
in
15
10,
for
example,
to
back
port
or
do
we
do
something
like
a
rollout
process?
A
So that's probably a bit of an unknown at the moment, but let's let Sam figure out what that should look like first.
A
Oh, great. And before I move to another one: sorry that we lost you for a bit at the beginning. I've updated the Zoom link, so hopefully there's no problem; they're both the same. One item I did want to ask about: given that we have got this big change coming, and particularly with the big change coming for developers quite soon, is there anything useful that we can either ask about, or share, or market in tomorrow's AMA?
A
Yeah, that makes sense. How about I stage a question around that, and then you can maybe give the highlights, Myra?
A
And I think the really key thing that we can emphasize here is that this is our first really big shift towards more of a self-serve model, because what we hopefully find is that we will have fewer cases of people needing to put in backport requests and try to convince us to do something. This gives it to the teams. And we can add this in if you don't cover all these points, Byron, but just to say, for all of us to kind of have this for when we're talking with other people as well.
A
You know, people have a little bit less of a "hey, I have to go and convince a release manager," and they can actually do a lot of this themselves, and we'll just respond to the dashboard. So hopefully, although it adds a little bit of work, it removes a huge amount of friction and will really help people get the right fixes to users.
D
Yeah, okay. It's very likely that the three-versions-back policy covers all the usual needs, the vast majority of backport requests. But I think it also unlocks an important thing, which is more of a long-term goal: if we get a smooth process around this, it will open up having even outside-of-policy requests be easy to handle, because it will become: merge...
D
..."prove to me it's green and I'll tag." So it's a decision that can be made in the moment, right? Because right now, if someone came to us with "I would like to do this," there is a lot of uncertainty: will it work? Will we be able to merge it? Will it backport? Will the package be fine? With this approach it's more like: I already did the work, so I have a stable branch with a working fix that I need to ship; is there anyone available to tag a release? And so, again, it's either "no, we can't, because we are doing the security release and we are full of work," or "yes, we can do it next week, and it's just a matter of running it."
A
Nice, awesome. Is there any other stuff we should cover?
A
Yeah, so, a quick check-in on collaboration. We may rotate this question depending on new retro actions, but from our Q3 retro: does anyone have any kind of additional requests for collaboration?
A
One
that
we
will
I
was
chatting
with
Jen
yesterday
from
EP
and
one
that
we
should
make
sure
that
we
kind
of
maintain
into
the
the
coming
months
is
I.
Think
one
of
the
biggest
unknowns
about
this
maintenance
policy
extension
is
what
it
does
to
workload
for
release
managers
and
for
quality
and
for
EP.
So
I
think
we
should
probably
just
keep
that
in
mind,
as
well
as
a
kind
of
like
for
future
iterations
like
those
would
be
really
good
check-in
points
for
us
to
actually
make
a
decision
on
like.
A
And one other thing I found that's, for me, unrelated, but is also great news: for Q1, EP have their OKRs set around flaky tests for master. That will be huge, not just for the maintenance policy but just generally, day to day, as well. So I think we'll start to see some really good results from that too.
A
Awesome, great. Final thoughts, final comments, something else we should cover? No? Okay, in which case: thank you very much. Thanks for the demos, thanks for the chats.