From YouTube: 2022-11-23 Delivery:Orchestration demo - APAC/EMEA
A: Okay, welcome everyone. This is the APAC/EMEA orchestration demo on November 23rd.
A: So we have nothing on the... oh, no, here we go. Well done.
C: Well, there is something that we can show which is not really demoable, something that I was toying with, with Graham as well, which is a bit tangential to his work on the release environment. So I can show this.
C: Lines of code, three lines of CI, and it's still... it's an interesting idea. So let me do this. Hello, here we go. So, what I'm talking about: this is a Terraform project that we have on Ops only, and it was born as an attempt to basically create project mirrors, for the security mirror, right. So you declare them, and it was just very simple. Here you just say: these are the projects.
C: So these are the projects that we generated with this, instead of the old manual way. You declare them and say: where is the security namespace, the mirror, the dev namespace, the canonical path? And this will just do the magic: it will terraform the thing, create and configure mirrorings and things like that. So this was an interesting idea, but then we started exploring more advanced CI things.
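A minimal sketch of what such a declaration could look like, assuming a hypothetical YAML input read by the Terraform module; the keys and namespaces here are illustrative, not the actual schema:

```yaml
# projects.yml: hypothetical declarative input for the mirroring module
projects:
  www-gitlab-com:
    canonical_path: gitlab-com/www-gitlab-com  # the canonical project
    security_namespace: gitlab-org/security    # where the security mirror is created
    dev_namespace: gitlab/mirrors              # mirror on the dev instance (illustrative)
```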
C: So when this started, it was just deployed from Terraform from my computer, storing the Terraform state file on Ops, because that's a feature of the product: you can just use it as a central store. Then the next step was: yeah, we probably should do this in CI, right? We all know how things should work. And so now CI is deploying these changes.
C: It's a merge request result pipeline, which we already use in all of our components, release-tools included, but it also goes the extra mile and has merge trains enabled. That means that when we hit the merge button, this is going to stack every merge request that we want to merge onto the train, so they will just build on top of the other changes that are already in the train, and they will run a special pipeline that can be identified with some variables. And this pipeline will, if it's green...
C: If the pipeline is green, this will become the new main branch. It will just be the same commit: same SHA, same information. It's not that this is going to be merged and generate a new commit; it's just giving you some sort of look-ahead into the future: this is how main will look if this pipeline completes, and...
C: It does give you a merge commit, but it's not on main yet. So if you ask "what's my branch", it's not main. Main, right? Yeah. But that's the interesting bit, which is: can we deploy on the merge train? Because something that we were trying to figure out is: sometimes we merge something, and... basically, we run some kind of dry run. In this case it's Terraform, so it's a plan. So we run a dry run, which is a plan, on the merge request.
C: So we hope that it will apply, but then the deployment itself will run API calls, and those API calls may fail for other reasons. Maybe we don't have permission, and those things don't surface on the plan, and things like that. So the idea was: because something is merged, officially merged, only after the merge train pipeline, if we deploy on the merge train, then main only represents what has actually been applied in production. "In production" in this sense: it has been applied to the live system.
C: So this project here is using child pipelines to protect the environment altogether, so that there's no way you're going to run multiple deployments at the same time. You can't even run the plan at the same time, because they rely on the same state file, and it could be changed by another plan or deployment. So everything is sealed at the Terraform level. Every time we interact with Terraform, it could be that we do a plan, or we do a plan and deploy, but it is a trigger.
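A minimal sketch of that shape, with hypothetical job and file names; the serialization comes from putting the trigger job in a `resource_group`:

```yaml
# .gitlab-ci.yml (parent pipeline); names are illustrative
terraform:
  stage: deploy
  resource_group: terraform-state    # one plan/deploy at a time, across all pipelines
  trigger:
    include: ci/terraform-child.yml  # child pipeline that runs the plan (and apply)
    strategy: depend                 # parent job mirrors the child pipeline's status
```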
C: It is triggered from a parent pipeline, and there is a lock, a resource group lock, on the trigger. That's what gives us the protection around this thing. But the interesting part is this. That's the plan, as I said; it just runs every time. But this is the nice change that we have done here: we run the deployment only if the merge request event type is merge train.
C: So we are on the merge train, this is a candidate to be merged, and we no longer run the deployment on the main branch. You need an extra variable if you actually want to run a deployment on the main branch, because you don't expect to run them there: the changes will already have been applied by the merge train itself.
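As a sketch, those rules could look like the following. `CI_MERGE_REQUEST_EVENT_TYPE == "merge_train"` is the real predefined variable that identifies merge train pipelines; the `TF_FORCE_APPLY` override variable is a hypothetical name:

```yaml
deploy:
  stage: deploy
  script: terraform apply -auto-approve plan.cache  # illustrative apply step
  rules:
    # Deploy while the change is still a merge train candidate
    - if: '$CI_MERGE_REQUEST_EVENT_TYPE == "merge_train"'
    # On main, deploy only when explicitly requested with the extra variable
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $TF_FORCE_APPLY == "true"'
```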
C: So what happens here is that if, for some reason, the thing we thought was working does not apply, this will not commit it. And that's where we are with this. What we would like to do as the next step is, in case of a failure, trigger a deployment from main that will revert the system. Because Terraform may fail, but when it fails, maybe it has applied some of the changes, because it's just running things in steps. You just give it, say, a security mirror: create me a project...
C: ...fork me this project, set up the mirror, create these push rules, create another mirror on dev. So it just gives you incremental steps, and maybe it fails later on. And so the idea worth exploring is: if this fails, we can have a job that only runs on failure, and that triggers the same pipeline on main, with this terraform-apply variable provided as true, so that it will actually trigger another deployment from the last known state.
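A hedged sketch of that failure path, reusing the hypothetical `TF_FORCE_APPLY` variable from above (whether a project can self-trigger this way, or would need the pipeline triggers API instead, is left open here):

```yaml
revert-from-main:
  stage: .post
  when: on_failure             # runs only if an earlier job in this pipeline failed
  variables:
    TF_FORCE_APPLY: "true"     # force the apply that is normally skipped on main
  trigger:
    project: $CI_PROJECT_PATH  # re-run this project's pipeline...
    branch: main               # ...from the last known state on main
```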
A: And so then, this is relevant to the release environment, because this would give us a way of getting the commits onto the stable branch via merge trains, right?
C: Yeah. This is kind of weird, this is a fresh project, which was easier to just toy with, but it's kind of: can we push the type of workflow that we're using towards more advanced CI features, and then, learning from this, how much of it can we feed back into the things that we have, or the new things that we are building? Because for change management, this looks really good.
C: I was just going to say this, the final thing: if we are going to change the K8s workloads to not work on variables, but just work on commits, this could be applied to the K8s workloads as well. There will be more work, because then we have to split by environments, there's extra stuff, but this could be applied to the K8s workloads as well, having changes stacking one on top of the other. So, I was going to say this:
C: There was one apply that was creating this, and then there was a configuration change that was changing this, and each one will have its own pipeline. If something fails in between, the system will get back to its original state and all the things on the train will just be removed. So you have to recreate the train and re-decide what you want to apply, and in which order.
B: You have a pipeline that tries to apply the changes. So the merge request is your intent, right? I'm intending to change this; I'm representing, via Terraform or manifests or values, my intent to change something. But then we merge it, right? So it's merged to master. We have a pipeline to apply that after it's merged to master, and it fails. So now what's in git doesn't match what's running, and this is just constant headaches, right? Because what happens is, someone else...
B: If you don't notice that you've got to revert it, kind of, like, do a merge request to revert it and get things back into sync... if someone doesn't do that, the next person comes along, merges theirs, and their commits are on master, your commits are on master, and now the deploy pipeline that they had is trying to apply both of your changes, and then it fails again, and then, you know, it spirals out. And the bigger the team...
B: The bigger the complexity, the more people, the more geographically distributed the people are, it's a big problem. But this flips it on its head. You're doing the apply first; you're actually going to change the running state first, and it's only when that completes, when your pipeline is green, that it actually goes: okay, now I'm going to put this onto main. And the state reflected in git is now accurate, because I've already run the pipeline to apply it. Whereas before it was "merge to master, then try and apply it", now it's "let's try and apply it...
B: ...yes, that worked, okay, now I'm going to actually merge it onto master". And obviously, you mentioned, Alessio, that the one problem with Terraform (maybe not so much with Kubernetes; actually, maybe with Kubernetes too) is that revert process. It's like: okay, if something goes wrong, you might be in a half state, so you want to actually do something to go: okay, go back to what was good and make sure that is now applied, make sure the good part is applied. And that, in theory, then gives us that truth: whatever's in git is what's right.
A: I also wonder... it'd be fascinating to actually try and gather some data around master. Given the number of changes going into master, the length of the merge pipeline, and the percentage of failures, I wonder at what point you just would never, ever apply anything. I don't know what that number would look like, but...
C: That's the reason why we don't have this enabled in our public projects. It was tested on the www-gitlab-com project, and the rate of failure was too high, and because a failure removes everything from the merge train, it was really painful. That's why I think this is interesting for infrastructure changes: they tend to be less frequent, and the people applying them tend to be more...
C: In any case, they will take a look at the pipeline, because right now you know that it applies after the merge, so you're already spending time looking at this. And then you also have things like the change lock or change requests, so there is already a process around this, and it's probably easier. And in theory (we don't know) the pipeline should be a bit faster, because you're not testing here, right?
C: This has happened before: we're just applying something. And, I mean, a project like this is simple enough to be fast; maybe a full GitLab deployment will take more time. But in any case, we're talking about production, so it's already failing today: if those things are not working, we're already just working on top of each other, and it's a mess. So I...
B: And I think you're right about the metrics; it would be interesting. Do we get points of the day where the merge train becomes so big, because of the amount of changes? If you hit the point where the train is growing, that's more people trying to move faster than your pipeline is letting them, right? And that's an interesting problem. So...
C: The problem is only one failure: because a failure disbands the train, everything that is after it gets a merge failure. Basically, they need to re-initiate the merge train and press the merge button again, and this will create a new train. So, in theory, if you have 10 changes, they will take one hour each.
B: Yeah, it is interesting. It would be interesting to see how that looked, because really, if you do get to the point where you're like, "oh wow, our trains are constantly failing, they're kicking things out, it's quite disruptive", that's not necessarily a bad thing. It's definitely a really good reason to start analyzing the failures.
B: I think it becomes a very important point then. But yeah, I think it's got good implications for the independent deployment stuff, because if you're going to have more disparate groups, more disparate people, trying to do changes, that queuing up and that keeping of the state... so it's not like: oh look, one team did this, their pipeline failed.
B: I'll try and keep this quick. So, I've been chatting with a few different people, both in person and on issues, just trying to think about the epic that we currently have around the release environments work, which has uncovered some things we're going to need, right, and then the epic Myra has about the maintenance policy extension, which is all similarly related. It's all kind of overlapping: how do they fit together, or how can we logically fit them together? And I'm starting to think about that...
B: ...more, and Alessio and Myra and myself had a few comments on there. I think we're all on the same page, but I just thought I'd take a chance to discuss it.
B: How do we actually see the pipelines on stable branches looking, and the kind of breakdown of responsibility, and when things go wrong, what's going to happen? Once again, open to discussion; if I've got it wrong, certainly let me know. But I think what we kind of want to do is have the pipelines that run in the gitlab-org/gitlab repository, on the stable branch, when you merge to the stable branch... those are kind of like our auto-deploy pipelines; they're like our coordinated pipelines. There's a couple of things...
B: ...we need to do in them, right, to make this all work how we expect it. We need to interface with Distribution to build packages that we can deploy, and that's an area of investigation we need. At the moment there's an issue in the release environments epic, but it might be worth splitting that out into its own epic if it's a lot of work; hopefully it's not.
B: So once we've done that, and that's Distribution's kind of... well, we have to talk and interface with Distribution to do that. The second part will be the deployment. In auto-deploy that's through deployer and the K8s workloads, but for this pipeline, this will be the one that actually triggers into the release environments.
B: The mechanism for doing that we're still not clear on, but, you know, I've got some ideas about trying to do it through merge requests and stuff, and maybe leverage the work that Alessio is doing there. But at any rate, there needs to be some interface and trigger to actually deploy through the release environments, and it's most likely by committing a file with some version numbers, which in itself is a very interesting topic of conversation.
B: But, you know, that repository, the release environments repository... the pipeline inside of it has only got a very simple job, which is: check that the environment it's deploying to meets some base level of health (we could probably reuse the deployment health metrics), apply the deploy, so Helm or whatever it needs to do the actual Kubernetes bits, and then check again that the environment is still okay somehow; probably watch deployment health for a few minutes or something like that.
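As a sketch, that simple pipeline might look like this; every job name and script here is a hypothetical placeholder for the health checks and Helm invocation being described:

```yaml
# Hypothetical release-environments pipeline: health gate, deploy, health gate
stages: [pre-check, deploy, post-check]

check-health-before:
  stage: pre-check
  script: ./bin/check-deployment-health  # reuse deployment health metrics (placeholder)

deploy:
  stage: deploy
  script: helm upgrade --install gitlab gitlab/gitlab -f versions.yml  # illustrative

check-health-after:
  stage: post-check
  script: ./bin/check-deployment-health --watch=5m  # watch health for a few minutes
```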
B: You know... Matt, thank you, Matt, for working on that issue in the release environments epic, but realistically, I'm not sure if we want the release environments repo to concern itself with that. That could possibly be moved out into a different epic, or maybe up into the maintenance policy extension epic, but I think that needs to be triggered from that higher level. What I don't want is, like, this tree, I guess, where we're like: okay, we're going to do this release environment, now it's going to trigger Distribution...
B: ...now it's going to trigger QA as well. I think keeping things at the high level... from what I think we've learned from auto-deploy as well, with release-tools managing all of those, more being managed in the coordinated pipeline gives us two things: there's less depth of pipelines, and when the pipeline breaks, there's a very clear understanding of who was responsible, or who we can talk to to investigate the problem.
B: If it's the trigger into Distribution, building the container pipeline or packages pipeline, then we know we need to work with Distribution on why that failed. If it's into the release environments, then it's probably the delivery team, SREs or something. And then, if it's a QA trigger and QA fails, then we obviously need to interface with QA on fixing that. So: any questions, concerns, or things that I've said that we don't agree with?
A: I agree with all of that, Graham. My question, I think, is around how we link these two things up, so what the developer experience looks like once we have all these bits in place. Like: as a developer, I merge a change onto a stable branch. If the tests fail, how do I connect that back? How will I know that's happened?
A: And will they... so do they break the stable branch? What would be the...
C: Many of them just don't have gitlab.com emails directly; that's another problem, but we can build on top of that. I think this is more in the realm of the issue about working with the stable branch, the broken-branch workflow, which is more about surfacing those types of errors and just having a quick reaction. But I think the key bit here is that it is easier to attribute the failure for...
C: ...every single merge; we're not talking about thousands of merge requests, right? That's the tricky part here: it's fine with small numbers. So every one of them will have, on the merge request page, the widget showing the merge result. The first pipeline that runs is just the result of that merge, and there will be the red dot showing: yeah, this is the one that failed. So it's kind of an indication already in the right place.
C: I don't remember... jeez, I wanted to say something... oh, triggering, single level. Yeah, it will come back, so... but I like the way it's broken down, and so the two things together should give us a good way to react to the state: either by self-awareness of the maintainer or author, or by the process kicking in and fixing the problem. Oh yeah, that's the third thing, sorry, I'm back on my thoughts. Because we're talking about triggers here, and we're talking about slow, not many, changes...
C: ...we may also consider if and where we want to place resource group locks. I'm thinking: maybe we want to deploy and QA one by one, but we definitely don't want to package one by one, because packaging takes time. So probably we want to go ahead and start packaging everything in parallel, assuming Dev can handle the load, I guess.
C: There's also an interesting question, like: do we want to just create CNG images at that stage? That's...
C: So there are extra things to work around, right. But probably we can also do CNG all in parallel. Then we do the deployment with a resource lock, because you want to deploy one at a time, and probably have the deployment and QA in the same resource group, so that, even though it's at the top level (so we're not increasing the nesting too much), we are trying to bundle those two operations.
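In CI terms, that bundling could be as simple as giving both top-level trigger jobs the same `resource_group`; a sketch with hypothetical project paths (the resource group's process mode would govern exact ordering across pipelines):

```yaml
deploy-release-env:
  stage: deploy
  resource_group: release-environment  # one deployment at a time
  trigger:
    project: gitlab-com/release-environments  # hypothetical path
    strategy: depend

qa:
  stage: qa
  resource_group: release-environment  # same lock: a new deploy can't pre-empt QA
  trigger:
    project: gitlab-org/example-qa            # hypothetical path
    strategy: depend
```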
C: Then there's no way that another deployment can kick in and prevent you from running the QA on the environment. So it's like, yeah, one by one, each one gets its own thing. And then maybe at the end, after all of this, you also build Omnibus, because if you can't build an Omnibus, it's another data point; it's something that's...
B: I think we can certainly add things to the pipeline, like Slack triggers and things like that, if, you know, pipelines break, and...
B: You know, the user experience we can definitely improve. But at the end of the day, basically all this comes down to is a broken pipeline, and so the experience for developers will be seeing a broken pipeline with some jobs that got the red X. And I did like your thought, Amy, on... or, just thinking in general...
B: It would be nice if GitLab, the product, gave us a way to give more description to some CI tasks. Like: oh, you see the CI task has failed, you mouse over it, and we can put a note, "you should probably go to this Slack channel and ask for help", or something like that, I don't know. I guess you could do it in the job output, maybe, and maybe that's something we can consider as well.
B: Really trying to make it easily discoverable, so that when they get the email, they click on the thing... obviously the first thing they're going to do is find the X, click on the X, the job comes up, and then it's almost immediately right in front of them, to just get them started, not even having to go to the handbook and be like...
A: I wonder if we could almost... this is just pure speculation, but I wonder if there's almost something we could do on the MR template. I wonder if there's something we could put on the actual MR, which is like: you're doing a bug fix on a patch? Have some extra... or something. But yeah, I agree, we should make it easy to find stuff.
A: So as we pass around on these sorts of pipelines, how much control will we have over the permissions? I'm kind of thinking... I mean, it's not super dangerous, but we probably just wouldn't want everybody being able to run that deploy job, but at the same time we do want everybody to be able to see the QA jobs, for example. We may not have a solution, but I'm kind of wondering: is this another problem?
B: You're right, we need to understand it, because we see it with the auto-deploy pipeline now, with people retrying jobs downstream and how that affects things. We will need to continue to have an understanding of that, right? Because, let's say a job goes... let's talk through the three sections, right. So it goes into Distribution, and they're like, you know, "please create packages once again". Does that mean we want every developer to have the permissions in Distribution's repos? They might already have them.
B: Yeah, okay, well, we'll have to figure that out and make sure. Because, correct me if I'm wrong, we're still at the state where there's no way you can, say, trigger a downstream pipeline and delegate it to different permissions; you are locked into whoever... yeah. Okay, cool. So...
C: That's... and I don't remember what level of permission you can set on those triggers. That being said, QA runs from Ops, so there's another point here.
C: So those things are interesting to analyze a bit more. I would say that, as a general line, if you are allowed to merge on stable branches, then, because of what we are describing, you should be allowed to deploy.
C: It should be the same settings, because the release environment has only one task, which is allowing you to push the value there, right: the versions you want to deploy. Probably we don't want to have... I'm just guessing here: maybe we want to segregate configuration out of that, so that the thing they can do is change versions, but if we want to change the configuration of the environment, that is an SRE or Delivery task, and maybe it launches somewhere else.
B: Well, that's why I was hoping the interface into the release environments would be either using the git API to commit a file, or possibly even tracking a whole merge request, because then I can use Code Owners. I can say: versions go in one file, whose code owners are everyone at GitLab...
B: ...whereas the configuration files have someone else as their code owners. And then it's all built into GitLab, and it's all handled. So I think it's possible, but you're right, we'll have to scope that out. I guess we'll have to do some dry tests, with people with different permissions, and break things and see what happens. But I think it's all achievable; we just need to figure it out.
A: Nice. Are there any actions we want to take? I know there's kind of a lot of things we should be aware of that will play out as we build, but is there anything specific we actually want to capture from this chat?
B: Yeah, no, that's a good question. I guess that's what I'm kind of getting at... not that I've got it on a to-do, but I'm thinking out loud now that I could bring that forward, maybe, as the next priority. And then if it's like: oh yes, this is good, we know how to do this, this is easy... or, well, if it's already been done in a way that's consumable by us, we can just tick it off. We can say: yep, it's already been done, we're getting packages that are suitable.
B: If it's not, then we should be like: okay, we need to do whatever the work is and get that happening in stable branches now. Even if, you know, we've got other work to try and get the release environments ready, we can still just start packaging, get the packages going, because that's the first thing: I can't deploy to them unless I have packages. So I might bring that... I mean, this is just an action item, I guess, for myself: I'll try and bring that task forward.
B: Then maybe we split that out and actually make an epic around getting stable branches packaged properly, in the way we need. Hopefully it's not a lot of work, hopefully it's not, but I can take a look at that.
A: I know there's a reasonable number of issues kind of outstanding on the developer tooling pieces that Myra and Steve are working on, but just in general terms of how these things are moving along in parallel: does it feel like we will be in a position...
A: I assume it will be at some point in January that the developer tooling piece will end up being kind of available for you; that's just an assumption based on what I've seen so far. But do we think that we will be able to just go ahead with the maintenance policy work then, or do we need to keep this environments epic running at a similar pace?
A: Yeah, that's right. Yeah, I do think this is going to be a blocker for enabling the maintenance policy... well, probably not even this packaging stuff, right, because assuming we have all the developer tooling, and there is a way to tag and prepare, and we've got the blog posts and things... what do we think about what Henry originally talked about? Like: is there another permanently-alive environment that the packages go onto to be tested?
B: Do we have anything we could substitute? We don't right now. We could always spin something up by hand, or, you know, copy and paste pre or something like that, if we really wanted to, although it's arguably just as much effort as continuing with the work we've got. Correct me if I'm wrong, but by not having release environments, we're not necessarily in a worse state than we are now, so I guess...
B: That means it's not really a blocker, right? We're not going backwards; we're just not quite as far forward as we would like. So I think it's okay to just continue on. It's not necessarily like: no, we can't, we have to block this on these environments being ready, simply because at the moment we don't have them anyway. We've got nothing; we're aiming to have something better than nothing. So probably okay, but I personally don't know enough about it all, end to end, to make that call.
C: So I was starting with the same thinking as you, Graham, saying: yeah, today, when we do something like this, we don't have the release environment for the old one, even for the security releases, right? So we only just want one. However, now that I was thinking, a couple of key points came to my mind. One is: we often do this for security, and security builds and releases are tested by AppSec individually, so there is a manual QA process happening which will not be here.
C: Point two, which is probably more concerning, is that Dedicated runs on the previous minor.
A: I think we must have something... I feel like I don't really have a good handle. Maybe that would be a good action for us: to quiz Reuben, who I think has the best insight into what we are actually able to test. I know Reuben has a way of running various tests, but at the same time, I do know there are also things where we go to Quality and we basically ask them: is it going to be okay?
A: Okay, let's have a think about that, because I think that one would probably be a good one to understand, if we can get close, at least, to the existing process. And we don't have to decide right now; I think this is a bit of a question for as we get further down with the project: do we reach a point where we do need to wait for all the pieces to be in place, or are there other workarounds?
A: It would probably be good for us to know at that point whether there's actually a manual process, or something we could put in place, to make it safe enough that we just go ahead, or whether actually we do want to keep these two things running quite closely tied together.
A: Cool, okay, let's keep that one in mind. I don't want to generate unnecessary work; I think it's absolutely fine for now. We won't need it this year, I very much doubt. So we can just keep that one in mind, but yeah, Alessio makes a great point about Dedicated.
A: I don't think that's going to change, right? Deliberately, yeah, yeah, exactly, exactly. I think that's fine; I think it does just make the testing, particularly of that version, a bit more interesting than it currently is. Yeah, one other thing I'm thinking about is, as we... I wonder, as we merge in security fixes...
A: So when we launch this sort of maintenance policy extension, there will be two processes essentially running, right? Patch release things will just get merged in by the developers, in relatively low numbers; the security release will run as it exists, and release managers will do sort of bulk merges.
B: Good question, and I've got an open issue there to discuss security of the release environments in general. Because, just in general, in some ways they're good, in that they'll be almost empty: there is not going to be any data, you know, like the data classification levels, red data; there's no cost, they're empty environments, they're worthless. But, interestingly enough, you're right. I did think: they haven't got data, which is interesting; but they will have code.
B: That is interesting, because at the moment, yes, absolutely, we would merge the security fixes in, and they would go to the release environments with test, deploy and what have you. In theory, that should be fine, because those environments should really only be accessible and usable by people at GitLab, and we should make sure they're secure enough to do that, and obviously I want to work with Security on making sure that's the case.
B: If, for whatever reason, we can't get a technical solution where that is in place, and we can't trust them, then I think what we want to do is something like: on the merge requests that are security, we can put something... put a label on the merge request, or, sorry, a tag, like a skip-deploy tag, in the commit. You know, we've got some mechanisms in GitLab CI where we can say: actually, we're not going to trigger the downstream deploy pipeline...
B: ...because of this; we'll just skip that, we'll skip the QA, we'll skip the... you know, we might do something different based off that. So it will really just come down to what we feel about the security of those environments, and 100%, we're going to need to sign off whatever we choose to do. We need Security involved, because these are essentially now new, tracked environments that we will need...
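A hedged sketch of that kind of opt-out: the `[skip deploy]` commit-message convention and the project path are hypothetical, but matching on `$CI_COMMIT_MESSAGE` in `rules` is a standard GitLab CI mechanism:

```yaml
deploy-release-env:
  stage: deploy
  trigger:
    project: gitlab-com/release-environments  # hypothetical downstream project
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip deploy\]/'  # security MRs could opt out
      when: never
    - when: on_success
```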
A: ...to consider around that, yeah, for sure. And it'll certainly be interesting to see how useful they end up being, because I guess one of the other big differences is the time frame. So a security release is like: within this two-hour window, here are 20 MRs, or something like that, which will probably be very different from patch releases. So that will be interesting.
B: Yeah, it's interesting, and I think, too, in the future, if we get to a spot where we can be like "you just merge the security... oh sorry, the security fixes onto the stable branches as needed, and we'll just tag later on and we can deploy them", it does open up a lot of interesting opportunities. Because not only has QA run on them, we're now having them deployed somewhere where people could conceivably (like a developer or a security person) log into those release environments and go: yes...
B: We'll address that in time, possibly. I think for the MVP I'm not against just closing the scope and saying: yeah, we will come up with a mechanism so that anything merged via the security release process... hopefully in that automation that maybe does the merging, or...
B: Later. Cool, yeah. And that means... how would that work, though? Because then the next person who deploys, who just merges a normal patch release, will now get the security release. That's probably okay, because we're merging, tagging, and releasing to the public straight away, or very close together, right? So I don't think it's a big issue, but we do have to be careful that once you do merge on the stable branch, anyone else who comes along after is going to get your code as well. So...
B: I mean, the other option is we do something... yeah, it really depends on the timing. I don't have enough understanding of that timing, but I think if the timing's short enough, then it's probably not a big deal to just do the tag, get the release out, make it public, and then obviously the next merge request that goes onto it... Maybe we have to lock deployments while we do... maybe that's the thing: we already know we have to lock certain things around security. Maybe when we're like...
A: You know, that's interesting, because Myra and I chatted briefly the other week about whether there is a risk there: you could merge in all of the security backports, and then, right before you get a chance to do the tag, somebody could merge in a patch fix and break...
A: Yeah, okay, let me catch up with Myra again later, because I think that would be a good one to get onto the maintenance policy; that feels very much like developer tooling. How do we manage the influx of changes, and how does the developer understand what's happening? So let me chat with Myra and see what she wants to do with that one. But, I mean, it probably isn't a disaster.
A: That's it, yeah. Cool, okay, great stuff. Final couple of minutes (well, probably actually over time, based on how speedy meetings work): is there anything else anyone wants to cover on this demo?
A: Nope? Okay! Well, thank you very much for the chats, and yeah, shout if you do need any help. Otherwise, we'll catch up on issues.