From YouTube: 2023-03-15 - Delivery:Orchestration demo - APAC/EMEA
Description
Demo and discussion about the new release environments - https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/837
A
B
So actually, I might have a little bit to demo, if you just give…
C
B
Cool, so everyone can see basically a copy of the release environments repository, as you can see here; we'll try and make it a little bit bigger. This is just the private fork I have of it, so it's not the actual real release environments. As you can see here, it's under my namespace; it's a fork of the project.
B
So originally what I wanted to happen is, I wanted the stable branch pipeline — so the pipeline in the gitlab-org/gitlab repository — to essentially do a full, I guess, GitOps kind of workflow, where it might push a commit or open a merge request into release environments, set that merge request to merge when it passed, wait for it to merge, and then track the pipeline that was running on main, deploying the actual new deployment.
B
Doing all that automated is not impossible, but it is tricky, and what I discovered when I was doing it is that the experience for the developer, or the person, is not great. This comes back to some very big issues we have with GitLab and how we can — or can't — glue workflows together.
B
If we were to do that kind of workflow, essentially what we'd have to do is have a job that would do all this, and so the developers would see some deploy job spinning away, and they wouldn't get the downstream pipeline at all, because you can only do that through triggers. You can't just attach two unrelated pipelines together downstream — even though I actually think you should somehow be able to join them together — and likewise, GitLab CI has no way of saying this:
B
This
job
is
waiting
on
a
merge
request
over
here
to
be
merged,
or
something
like
that.
Like
we're
very
limited
into
the
workflow
options
we
have
with
gitlab
CI,
it's
not
a
very
good
general
purpose.
Workflow
tool
unfortunate
so
yeah,
so
in
the
interest
of
trying
to
move
things
along
quickly
and
provide
Pro,
possibly
a
little
bit
of
an
easier
experience,
especially
for
people
understanding.
What's
going
on,
I've
gone
back
to
the
traditional
trigger
model,
as
in
you
trigger
a
release,
you
know
trigger
a
pipeline
on
release
environments.
B
You
pass
in
two
variables
the
environment
you
wish
to
deploy
to
in
the
version
you
wish
to
deploy
to,
and
it
will
basically
deploy
that
for
you.
So
that
leaves
the
problem
of
well.
How
do
you
still
kind
of
reconcile
that
with
what's
in
git
and
make
sure
you
don't
have
this
kind
of
drift?
And
so
what
I've
kind
of
done
is
this
process
of
essentially
developing
the
pipeline
for
release
environments
in
two
separate
ways?
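As a rough sketch of those two modes — where the variable names `DEPLOY_ENVIRONMENT` and `DEPLOY_VERSION` are illustrative assumptions, not necessarily the real ones — the `.gitlab-ci.yml` rules might look like:

```yaml
# Hypothetical sketch: one pipeline, two modes, selected by trigger variables.
deploy-triggered:
  stage: deploy
  rules:
    # Mode 1: an upstream trigger passed in an environment and a version.
    - if: '$DEPLOY_ENVIRONMENT && $DEPLOY_VERSION'
  script:
    - echo "Deploying version ${DEPLOY_VERSION} to ${DEPLOY_ENVIRONMENT}"
    # ...actual deployment tooling would run here...

standard-pipeline:
  stage: deploy
  rules:
    # Mode 2: no variables set, so treat this as a normal merge pipeline.
    - if: '$DEPLOY_ENVIRONMENT == null'
  script:
    - echo "Running the standard pipeline against the latest commit on main"
```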
B
It detects whether there is an environment and a version set in the pipeline — so the trigger can pass those environment variables. If they are set, it decides that this pipeline's job is to deploy just that one environment: it deploys the version you've passed in, and then, if it's successful, it commits that back to the git repository.
B
So
if
it
fails,
the
git
state
is
of
the
old
version,
so
another
pipeline
coming
through
won't
kind
of
like
try
and
because
what
the
other
problem
we
have
is
right.
So
if
something
goes
wrong,
we
don't
want
all
the
other
pipelines
coming
through
and
continually
to
try
and
like
do
the
wrong
thing.
Essentially,
if
those
environment
variables
are
not
set,
we
treat
this
like
a
normal
pipeline,
just
like
any
other
merge
to
master
or
merge
process.
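The commit-back-on-success step could be sketched as a job like this, where the file layout, token variable, and repository URL are all made up for illustration. Because the job only runs after a successful deploy, a failed deploy leaves the old version in git, as described above:

```yaml
commit-deploy:
  stage: record
  needs: ["deploy-triggered"]   # only runs if the deploy job succeeded
  rules:
    - if: '$DEPLOY_ENVIRONMENT && $DEPLOY_VERSION'
  script:
    # Record the successfully deployed version back into git state.
    - echo "${DEPLOY_VERSION}" > "environments/${DEPLOY_ENVIRONMENT}/version"
    - git add "environments/${DEPLOY_ENVIRONMENT}/version"
    # "[skip ci]" stops this commit from spawning another pipeline,
    # which avoids a CI loop of repeated deploys.
    - git commit -m "Update ${DEPLOY_ENVIRONMENT} to ${DEPLOY_VERSION} [skip ci]"
    - git push "https://bot:${BOT_TOKEN}@gitlab.example.com/group/release-environments.git" HEAD:main
```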
B
So
if
I
was
upgrading
updating
like
something
in
global
values,
I
do
that
in
an
MR
I
merge
it.
The
pipeline
detects
that
it's
just
there's
no
variables
being
passed.
It's
just
the
standard,
merge
request
pipeline,
that's
being
deployed,
and
it
just
does
the
normal
thing.
If
I
run
a
pipeline
like
I
can
run
a
pipeline
here,
but
you
know
it
will
be
from
a
trigger
in
the
gitlab
org
repo.
B
If,
if
I
pass
those
variables
in
it
goes
okay,
there's
someone's
trying
to
trigger
me
to
do
an
actual
deploy
of
a
new
version.
So
you
can
see
here.
Scarbeck
was
playing
around
with
it
as
well,
so
I'm,
sorry
I'm,
trying
to
actually
so
here
you
can
and
I've
actually
labeled
the
two
types
of
pipelines.
So
if
you
see
a
pipeline
like
this,
ignore
the
deploy
failures,
that
means
that
this
is
the
late.
This
is
a
standard
pipeline.
B
That's
going
off
the
latest
commit
to
Main,
and
then
you
might
see
a
pipeline
like
this,
where
I've
passed
in
two
environment
variables,
you
see
that
deploy
tree
pipeline
trigger
deployed
version,
15.6.6
to
environment,
156,
stable,
and
if
we
look
over
here,
we'll
see
that
it
does
the
deploy
and
that
it
does
the
commit
deploy.
So
it
commits
basically
the
change
back.
So
a
little
bit
of
you
know,
scripting
and
stuff
there
to
do
a
very
simplistic
job
of
saying.
B
Okay,
we've
deployed
successfully
I
want
to
make
that
change
back
into
git
State
and
we
just
put
skips
the
eye.
So
we
don't
get
in
a
CI
Loop
where
we're
kind
of
deploying
multiple
times
it's
rudimentary.
It
works
not
too
bad.
The
biggest
problem
is
the
security
model,
because
essentially,
if
you
allow
anyone
to
trigger
a
pipelines
onto
Main,
essentially
you're,
basically
saying
you
have
to
give
them
permissions
to
more
or
less
be
able
to
work
anything
in
the
repository,
they
can
do
anything
with
the
repository
deploy
anything
change.
B
All our permissions model is branch-based, so you have like a developer branch, or you could do protected branches. But a branching model does not work with infrastructure code, because if you try to have a branch per environment, then if I want to change, say, the global values — which version do I change? The one on main? The ones on all of the branches? It starts to spiral out of control; you can't have two dimensions, unfortunately. Anyway, I'm getting a little bit distracted. So that's what I wanted to demonstrate.
B
You can see here: the deploy pipeline should trigger, deploying version 15.6.20 to environment 15-6-stable. You can ignore the renovate job; this will take a little while to run. So that's basically it. I think this is good enough for the first implementation: it gives us a simplistic model that we all know and trust, which is trigger pipelines.
B
It
gives
us
the
downstream
pipeline.
So
when
a
developer
is
merging
to
a
stable
Branch,
they
will
see
the
downstream
Pipeline
and
we'll
be
able
to
follow
it.
The
pipeline
into
the
release,
environments,
I,
don't
think
it's
the
best
for
security
and
I
do
think
the
model
of
switching
back
to
a
kind
of
an
automated
merge
request,
workflow
or
something
like
that.
I
do
think
it's
something
we
should
look
at
doing,
probably
as
part
of
the
second
second
iteration
epic
for
release,
environments.
D
Thank you, Graham. So this is something we have tried in the past with Helm charts. When we developed the Helm charts integration for releasing — for how to deploy charts — we never ended up using it, but we had this:
D
We basically implemented the same thing, because the level of complexity of upgrading the references inside the charts was so high, and it was already implemented in the chart itself. When we say we're going to trigger a job — charts, I'm talking about the Helm repo, not CNG, right — this will run an internal—
B
D
The thing was, this will run its own internal script that is capable of figuring out the proper versions and everything, upgrading every file, and self-committing — yeah, right. So we were doing exactly the same thing. What happened there is that we were not owning both sides of the thing. We never ended up using the auto-deploy of the charts, because a change in the chart could break the deployment. So these kinds of things break, and we are no longer using it — we never used it.
D
An
action
is
not
even
working,
but
here
with
us
in
being
in
full
control
of
both
projects.
I
see
opportunities
to
make
sure
that
these
things
continue
working
right.
So
that's
not
a
problem.
What
I
wanted
to
suggest
is
trying
to
figure
out
if
we
can
work
around
this,
not
the
right
term,
if
we
can
strengthen
the
security
model
by
using
something
like
an
example.
So,
as
you
said,
you
need
the
maintainers
of
gitlab
to
be
maintainers
of
the
release
environment
as
well.
D
So
that's
the
first
thing
and
that's
okay,
but
then
maybe
what
we
want
to
do
is
using
kind
of
a
bot
token
in
the
I.
Don't
know
how
you
generate
the
credentials
for
self-committing,
let's
say
if
you're,
using
a
project
scope
token.
Basically
what
we
can
try
to
do
is
say
only
this
project
scope
token
is
allowed
to
to
directly
push
on
Master
maintainers.
Cannot
they.
D
So
you
can
work
on
that.
You
can
work
on
I,
think
there's
also
the
get
hook,
so
I
I
never
done
this
with
gitlab,
but
there
should
be
a
way
for
preventing
things
to
get
into
a
branch.
So
something
like
if
this
is
a
direct
commit
to
master.
It
is
changing
things
that
I
should
not
change
in
terms
of
file.
So
then
denies.
A
D
If
it's
emerged,
okay,
okay,
so
I
think
there
are
options.
Maybe
not
all
of
them
are
feasible,
but
there
should
be
as
well
as
if
possible.
This
is
not
sure,
I'm
not
sure
about
this.
There
should
be
this
thing
where
only
owners
can
change
I
think
so
we
don't
have
this
one.
Only
owners
can
change
the
protected
branches
and
rules
around
branches,
because
otherwise
maintainers
can
circumvent
all
the
restrictions.
D
Because of this, we may consider having a fork of release environments — I mean, it doesn't have to be a fork. What I mean is that, as long as release environments is only addressing the release environments, then I have no problem with GitLab maintainers being able to affect release environments, because that's the thing: we want to give them the power to own things end to end. Fine.
D
If
we
end
up
exploring
this
concept
with
other
type
of
environment
that
have
stricter
permission,
then
we
can
afford
and
gonna
have
another
and
so
working
on
the
kubernetes
level
right.
So
the
credential
inside
the
project
will
be
restricted
only
to
the
namespaces
that
can
work
or
on
those
environments.
So
I
think
we
can
try
to
work
on
this
level
of
defining
the
the
permission
model.
B
I
think
yeah.
No,
that
makes
sense
to
me.
I
didn't
think
about
that.
But
you're
right
I
could
at
least
protect
the
commit
part
a
bit
better,
probably
with
rules
of
like
yeah.
Maybe
the
bot
token
is
the
only
one
could
commit
that
that
means
they
could
still
deploy,
but
it
wouldn't
commit
it
back
to
the
repo
yeah
I
think
the
big
key
is
is
that
gitlab
has
and
possibly
not
just
gitlab.
Honestly,
it's
really
good
security
around
branches
right
like
so
you
know.
B
You
need
to
do
a
merge
request
if
you're
going
to
change
like
the
global
values
or
you're
going
to
change
like
the
memory
settings
or
something
like
that.
But
here's
a
file
where
you
specify
the
versions
for
this
environment
right
so
each
and
by
each
environment
might
have
a
versions
file
I'm
happy
for
you
know.
Xyz
people
to
change
that,
but
but
that
doesn't
work
with
the
pipeline
model.
C
B
So that's why I'm thinking the second stage is: we just have to sit down and get a good understanding of how to automate merge requests, in a way where it's like, I trigger a job, it creates a merge request and merges it, using the security of whatever the job is running as. As it's talking to the API, that security can set things to auto-merge, and therefore I can go: okay, GitLab org can talk to these—
B
You
know
environment
or
they
can
change
these
files.
They
can't
change
these
files
and
what
have
you
and
that
might
give
us?
The
question
will
really
come
down
to
talking
with
Amy
about
this
earlier,
as
well
is
like
what,
as
these
release
environments
start
to
become
more
important
to
the
process.
So
at
the
moment
we're
doing
the
first
iteration
is
really
just
getting
things
up
technically
making
sure
we
can
kind
of
see
this
happening
that
works.
It
makes
sense.
B
You
know
to
the
level
where
we
can
say:
yes,
this
can
be
a
trusted
piece
of
the
the
process.
I
mean
technically,
you
know
it's
all
fine
but
like.
If
there's
controls
we
need
to
put
in
place.
If
we
need
harsher
rules,
it
might
change
the
implementation
in
the
future,
depending
on
what
people
come
back
with,
but.
E
D
As there's no way to escape — or, say, the important thing is that you can't escape the namespace in Kubernetes and escalate into production or things like that, right? Otherwise, the gate is at the beginning, so only those allowed to merge on stable branches will be allowed to propagate that, yeah. And so we also need to make sure that we have the right permissions in place.
D
B
Yeah, so I'll just quickly wrap up the demo here. You can see here, this is the one commit that was just created: "Update environment 15-6-stable to version 15.6.20". We have a description that says which pipeline the commit-back came from. If we actually look at the commit — yeah, you can see there, it's updating all the different pieces.
D
B
Okay, cool. Okay, that's also good to know. Yeah, I'll briefly mention: if the deployment model becomes more complicated, I will actually replace the variables being passed in. At the moment it's just environment and version — it's very simple — but if it becomes more complicated, I'll replace it with a variable that's like "deployment JSON" or something, where the upstream pipeline must pass in some kind of well-known JSON object of all the deployment targets and stuff that it wishes to hit, and then I'll pass that downstream and do the appropriate pipeline.
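That idea could be sketched in the upstream trigger job roughly like this, where the variable name `DEPLOYMENT_JSON`, its schema, and the downstream project path are all illustrative guesses rather than actual names:

```yaml
trigger-release-environments:
  stage: deploy
  variables:
    # One well-known JSON object describing every target the upstream
    # pipeline wants to hit, instead of separate environment/version vars.
    DEPLOYMENT_JSON: '{"targets": [{"environment": "15-6-stable", "version": "15.6.20"}]}'
  trigger:
    project: gitlab-com/gl-infra/release-environments
    # "depend" makes the upstream job mirror the downstream pipeline status.
    strategy: depend
```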
B
Yeah
I
saw
that
as
well,
which
is
another
similar
thing
like
if
we
generate
an
artifact
that
says
this
is
a
deployment
Target
like
we
can
define
a
schema,
that's
more
complicated
than
just
environment
version,
I!
Guess
that's
what
I'm
trying
to
say
we
can
if,
if
the
need
arises,
which
might
we'll
just
expand
upon
that.
A
As
the
environment
comes
up
great,
do
you
already
have
some
anything
around
kind
of
checking
the
health
of
the
environment
like
how
do
we
determine
like
it
up
and
healthy.
B
So
beyond
just
how
I'm
doing
its
basic
checks
at
the
moment,
that's
it
I,
like
I,
agree.
That's
the
thing,
I've
kind
of
pushed
that
into
a
second
phase
kind
of
idea,
because
really
so
let
me
kind
of
give
an
overview
of
what
I'm
really
thinking
is
like.
What's
phase
one
and
as
I
said,
the
focus
is
just
trying
to
get
some
pieces
working
technically,
so
there's
three
really
main
pillars
to
this:
getting
images
built
so
the
interface
with
CNG
and
really
getting
the
images
built
and
making
sure.
B
Yes,
those
images
contain
the
code.
We
expect
there's
doing
the
which
I've
already
got
merged
now
to
gitlab
or
gitlab.
There's
the
deployment
of
that
which
now
I've
got
that
trigger
interface.
So
I've
got
a
method
of
doing
that.
It's
not
the
best
method,
but
it
is
a
method
that
should
work
and
then
the
final
piece
is
running.
Qa
I
have
the
pieces
together
for
running
QA.
So
from
what
they've,
given
me,
I
think
I
can
piece
together
something
and
and
not
piece
together.
B
That's
probably
it
I'm
confident
I've
got
enough
now
to
build
a
job
that
will
run
QA
against
that
environment,
and
so
that's
kind
of
the
like
the
next
pieces.
I'll
actually
build
together
that
job
so
where
we
are
at
so
the
way
this
kind
of
all
works
from
a
developer
perspective
and
in
the
stable
Branch
perspective
is
on
on
the
pipelines
that
run
on
stable,
Branch
right.
So
whenever
you
merge
an
MR
to
a
stable
branch,
we
we
have
a
few
different.
You
know.
You've
got
your
aspect
tests.
B
B
B
B
What this means is it's been merged to master, but it hasn't really been 100% tested yet, because the child pipeline only fires on stable branches, and that change is merged to master, not a stable branch — therefore we haven't really got it happening yet. I did an MR, which I think is sitting with Alessio at the moment, to backport that change into the 15-9 stable branch for testing. But the other thing that will happen is when we do the—
B
No
someone
could
correct
me
here
when
we
do.
The
RC
for
1510
is
when
we
create
the
1510
stable
branch,
and
when
that
branch
is
created,
it
will
pick
up
the
copy
of
gitlab
CI
from
Main,
as
all
branches
do
when
they're
created,
and
then
that
will
have
this
change
and
that
will
also
you
know,
start
basically
building
out
the
you
know.
It'll
start
running
this
release
environments
pipeline
this.
B
Hoping
is
and
then
I'll
actually
because
the
tricky
part
is,
is
a
kind
of
want.
I
I
do
need
to
do
the
other
parts,
but
I
kind
of
was
hoping
to
make
sure
I
could
validate
the
child
pipeline
worked
and
the
image
there's
no
point
me
putting
the
trigger
deploy
in
until
I
know.
The
images
are
coming
out
kind
of
like
so
so.
B
If
we,
if
we
merge
the
the
back
Port,
then
we
could
test
it
today
on
15.9
or
we
can
just
wait
until
15,
10,
I
guess
in
what
five
days
or
less
four
days,
three
days
when
the
when
the
15
10
RC
is
created.
C
B
So either way, I'll get some real feedback about the images. I will go in and validate them myself, that they're giving us the code we expect, and then I'll probably do an MR to start adding the deploy trigger, and that will probably then be needed to actually get it working.
B
We'll
have
to
need
to
back
all
these
CI
changes
we'll
have
to
get
back
ported
onto
a
stable
branch
of
it,
obviously,
to
actually
make
them
useful
because
being
merged
onto
Mouse,
CI
changes
merge
to
master,
won't
really
affect
things
until
they
it's
kind
of
like
I
have
to
develop
them
on
master
and
then
I
kind
of
get
them
all
back.
Ported
onto
stable
branches,
I
guess.
A
What
would
this
look
like
on
the
actual
MetroCast,
so
the
on
1510
is
also
the
first
stable
Branch,
where
we'll
be
having
developers
merging
stuff
in
like
what
does
this
look
like
if
it
fails
sure.
B
So
what
they
will
see
is
they'll
see
just
like
any
of
the
other
child
pipelines
we
have
now.
Let
me
see
if
I
can
Show
an
example.
B
So what it's going to look like — and I really wish I could show the real one; actually, I wonder if I can use the — no, no, I can't, we need to wait for 15-10 stable — is you'll have another downstream over here, and it will say "release environments".
C
B
There'll be a box to the side, and it'll have one job, which is "trigger CNG", I think — something like that; sorry, I can't remember — it's "build images". If it breaks, it's allowed to fail, so it'll do the whole exclamation mark, and, you know, we definitely will — obviously we need to start developing documentation along the lines of: okay, this pipeline breaks, what do we do, kind of thing.
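That downstream box would come from a trigger job in the stable-branch pipeline, roughly along these lines — the child pipeline path and stage name are guesses for illustration:

```yaml
release-environments:
  stage: release-environments
  # Non-blocking for now: a failure shows the "!" warning icon
  # instead of failing the whole merge request pipeline.
  allow_failure: true
  trigger:
    include: .gitlab/ci/release-environments/main.gitlab-ci.yml
    strategy: depend
```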
B
A
Yeah, that makes sense, Graham. Are you also working towards running the package-and-test test suite? Is that the one that would be run on these release environments?
A
B
It builds the all-in-one Docker image and runs it in the GitLab runner. It's great — it's actually crazy that it works; I'm very impressed — but it runs just as a Docker container with everything inside it, inside the runner VM that's running over in GitLab CI. And I believe the test suite is much smaller — I could be wrong.
C
B
The suite is smaller because of the limitations of that setup. So, for example, with release environments — because they are full release environments — we get KAS, we get Pages, all those things that live off subdomains. We get, like, pages.15-6-stable or whatever — we get the full thing. My goal is to try and build these environments with as many of the components enabled as possible, so that we can run the largest test suite possible.
A
One thing from the work Mayra's been doing is: we know that the package-and-test test suite isn't going to be set as blocking, because the percentage of flaky tests in there possibly becomes too noisy. So you may want to review that as well.
B
Qa
tests
issue,
so
we
can
have
a
look
at
that
issue.
So
what
I
believe
what
I
believe
we've
landed
on
is.
B
—that's just configuration. But the tricky one, or what I need to do, is: I still need the stable branch pipeline to build me a QA image, right? I think we do that as part of this package-and-test QA — I will validate that — but whatever it means, we're already building some image with the QA tests, and it will just be reused.
D
It's
not
published,
that
was
the
problem
that
they
were
addressing
so
as
far
as
I
understood
it
gets
published
where
it
is
supposed
to
be
only
on
tag.
So
this
is
what
they
were
mentioning
when
they
said
you
can
kick
start
this
process,
because
the
stable
branches
for
us
begins
with
rc42.
D
So
there
is
already
a
QA
image
for
that
and
that's
okay,
but
at
every
single
things
you
built
on
top
of
that,
will
still
run
on
the
previously
tagged
image
until
you
tag
again,
which
means
that
if
you
have
an
issue
in
the
QA
image
itself,
so
not
in
the
code
but
in
the
QA,
you
cannot
get
the
green
QA
until
you
you.
But
this
is
the
technical
details
and
it's
it's
already
in
the
pipeline,
so
it
just
it's
just
designed
to
only
publish
those
images
on
tags
on
stable
branches.
D
So
it's
just
a
matter
of
I'll.
B
This is the thing I'm still not 100% sure about: it says, oh yeah, you can run QA tests, and I assume that's the full test suite. I looked through the settings; there was a smoke setting — you could set smoke-only for a subset — but I believe just by running that gem I'm supposed to be getting a full test suite.
D
So
I
mean
if
my
understanding
is
correct,
there
are
several
toggles
on
how
on
what
you
can
run.
One
is
smoke
which
is
give
me
the
curated
reliable
set
of
tests
and
then
even
in
the
smoke
test.
There
is
another
distinction
about
knowing,
if
you
ever
not
admin
access
to
that
instance,.
A
A
That's totally fine, yeah. I think the main thing we should check in on — and we might have that information already; I'll do a little bit of a cross-read on a definition, just to see if we already have the information — is what percentage of known flakiness is on this test suite.
C
A
C
A
Just keep retrying forever — it's actually less work to allow it to fail and have Quality do some manual work if needed. Definitely not a good long-term thing, though, because it creates more manual work. So I think either we find a test suite that's stable enough, or — this would be, I think, a good one for us to give Quality a heads-up on: hey, this is the direction we're going, and we're going to need a stable test suite with decent coverage that we can set to be blocking.
B
A
Exactly. Well, what would be interesting is — I'm guessing that, probably for 15-10 at least, we will have almost all three processes running, right? We'll have the sort of interim one that Mayra and Sonata figured out, which is for allowing developers to do anything; we'll have the existing one, which is right before — you know, when we tag, we will run things.
B
A
C
A
B
I also suspect there's going to be a level of flakiness at the start for release environments, because I might need to tune them. At the moment I've just left everything default, and it might be: oh look, we're getting timeout errors because I haven't made the pods big enough, or I forgot to configure this. You know, just working through the kinks to make them run fine in a test environment, yeah.
A
C
B
No, as I said, this is all canonical. So when we do the push back and go public, then they will deploy to — security, to release environments. This was another conscious decision, because when we start talking about security releases, in terms of deploying them to real environments as well, the security questions get bigger — the questions I was raising earlier, the implications for security, just start mounting up. So I thought, let's short-circuit that. In the future, if we actually—
D
B
What I'm thinking is: if we get buy-in from security and everyone, and they're like, yep, these environments are secure, and the information you have in that repo is not public, it's not bad — then what I think we do is just change the pipeline. Literally, that whole child pipeline, instead of running on canonical, just runs on security; we just flip it over, and then—
B
D
B
I think, too, with the release environment stuff, we can look at reuse a bit better. I can take all of the stuff out of it into a common repo, and then you just include it: you can create release environments, you can create secure release environments, you can create gitlab.com deployments or whatever, and they all just inherit off the one repo. Not even full forks — just empty repos that only have the environment files, while all of the CI logic and all that stuff is pulled in via includes or whatever. That way, if I want to make a tweak to any of this workflow, I'm not having to worry about forks; it's all in one place, and all of our repos kind of work in this model.
B
Well,
we'll
see
I
mean
you
know
this
is
kind
of
in
the
future.
All
right,
let's
we'll
see
how
that
works,.
A
This
is
not
a
fully
formed,
thought
and
I'm
not
expecting
it
to
become
a
thing
that
we
know
much
about
for
some
time,
but
just
as
an
interesting
one,
Graham
and
I
chatted
this
morning
about
the
fact
that
we
don't
really
have
like
an
entry
point
like
a
land,
a
single
landing
page
but
I
wonder
as
well
as
we
sort
of
think
about
things
on
stable
branches
and
in
different
places
and
things
whether
there's
you
know
something
that
sits
in
front
of
a
lot
of
this,
where
it's
kind
of
almost
you
know,
I
can
see
this
Mr
I
can
see
which
environment
it's
on
right
now
or
I.
D
A
I used to work in a place — it was a different problem, but something that worked really nicely — where we had a lot of difficulty keeping track of dependencies. It was a microservice environment; we had a lot of trouble keeping track of which version of which microservice was installed on which environment, and somebody in the end just created a service with a landing page that listed them all. You basically just got the version numbers for each environment, and it was so, so useful.
A
I
wonder
if
we
I
I
do
think
at
some
point.
We're
going
to
need
to
have
something.
D
When
I
shared
them,
the
my
the
scripts
for
The
X
bar
the
you
want
the
eugrams
recommending
the
day
only
run
they
don't
run
on
Linux,
and
so
what
I
told
them
is?
Yes,
we
need
a
page.
A
panel
we'll
say
just
on
autoplies
is
different.
It's
more
about
from
starting
setting
points,
probably
is
the
packages,
so
these
are
the
packages
and
what
happened
to
those
packages
while
on
release
environment,
the
starting
point
are
the
environments,
and
but
still
this
is
kind
of
the
place
to
go.
A
D
The other point is—
D
Is
that
what
the
script
is
doing
is
just
doing,
API
calls
on
Ops
and
so
all
dot
API
calls
can
be
also
made
on
JavaScript.
So
you
can
just
build
a
single
page
application
that
just
force
you
to
log
in
on
Ops
and
then,
if
you
can
see
things
on
Ops,
it
can
rebuild
that
same
things.
If
only
we
knew
how
to
do
a
single
page
application
in
JavaScript.
D
B
Actually, I was thinking about this as well. To me, a page for some of this information is good, but what I was actually thinking, beyond that, is an actual workflow engine — we need a proper workflow engine. And what I was looking at — actually, talking about Backstage and everything, I was looking at the two—
B
If
a
package
is
built,
it's
so
hard,
whereas
with
these
kind
of
tools
you
can
build
plugins
and
actually
give
the
user
like
an
actual.
What's
going
on,
oh
by
the
way
I'm
waiting
for
packages
on
Ops
I'm
doing
this
I'm
waiting
for
this
Mr
to
be
merged.
You
know
these
kind
of,
like
actual
workflows,
that
just
get
the
FCI
is
a
fantastic
tool
for
CI,
but
we're
trying
to
build
actual
business
workflows
the
self-managed
release
process.
B
You
know
how
much
have
we
got
sitting
in
issue
descriptions
and
it's
very
complicated
and
it'd
be
great
if
it
could
update
dynamically.
If
the
steps
would
like
real
time
change
as
things
are
happening,
so
you
can
really
get
a
grip
on
that,
and
so
it's
interesting
because
I
think
there
there
is
a
whole
market
for
those
tools
and
it's
not
a
big
Market,
because
a
lot
of
times
people's
workflows
are
simple.
B
I
just
commit
a
thing
and
it
runs
a
Helm
install
and
Away
you
go,
but
when
we
start
hitting
the
level
we're
at
where
it's
like,
we
have
complex,
workflows,
I
think
gitlab
CI
is
just
unfortunately
it's
just
not.
It
doesn't
give
us
the
user
experience
for
complicated
things
and
it
shouldn't
just
be
gitlab
CI,
like
we
build
things
like
environments
deployments
like
protected
environments,
deployment
like
valid.
What
is
it
where
you
can
like
control?
Someone
has
to
click
a
button
deployment
approvals.
A
B
A
The end-to-end flow is so difficult for people who are not release managers to really wrap their heads around, because it's so complicated, so I think we can feed that in. One thing that led me to my original thought is a challenge I can see coming up around opening up the stable branches and adding in these extra environments: not a huge shift, but the slight difference between when we're in a security release and when we're not in a security release.
A
At
the
moment,
that's
already
really
hard
for
people
to
see,
but
they
don't
necessarily
need
to
see
too
much
of
it.
So
we
kind
of
get
away
with
it.
I
think
that's
going
to
become
more
painful
I,
just
wonder
if
there
needs
to
probably
be
more
ways
that
people
can
actually
see
the
status
of
of
their
changes.
B
Absolutely
yeah
trying
to
give
that
that
overview
as
I
said,
like
the
a
business
like
a
business
overview,
not
just
at
a
like
some
CI
jobs
are
running
like
this
actual.
This
is
the
phase
of
like
you
know
that
a
security
fixes
in
well
accepting
their
Mars
we're
not
accepting
Mrs
yeah
yeah.
It's
like
how
do
you
capture
that
and
like
demonstrate
that
and
I,
don't
think
I
think
that's
such
a
custom
process
to
gitlab.
How
do
we
write
something?
That's
custom
to
that?
A
Exactly
I
wonder
if
we're
going
to
have
cases
where
teams
like
who
have
set
up
like
Italy
are
going
to
kind
of
accidentally,
merge
stuff
in
and
end
up
blocking
themselves,
without
realizing
they're
going
to
block
themselves
because
of
like
merging
onto
a
stable
branch
and
then
we're
in
different
statuses
like
it
feels
like
it's
probably
some
edge
cases
like
that
that
we
haven't
yet
come
across
thought
about
we'll
find
out.
Yeah.
B
C
B
Yeah, mirroring is a whole other problem — that's a whole other problem, right? So much of our product's functionality, just the concept of mirroring: not only mirroring downstream to repos, but mirroring to a completely different GitLab instance. It's so hard to track and do, because—
B
A
Mm-hmm, yeah, for sure. Okay, awesome. One thing coming off the back of this demo is: it would be good to make sure that in Mayra's documentation and announcements and things, we have the specific release environments name, so we can say "definitely ignore this stuff".
A
A
Awesome, great. Is there any other stuff we should chat about?
A
Do
we
need
to
like?
Do
we
make
a
decision?
Are
you
gonna,
wait
for
the
1510
Branch
gram
or
we're
going
to
backport
the.
E
D
B
A
Sure — awesome, okay, sounds good. Great, thanks so much for the demo, Graham, and thanks for the discussions, both of you. Hope you have a good…