From YouTube: 2020-09-08 - Security Release as part of auto-deploy.
A
All right, so welcome to another security-releases-as-part-of-auto-deploy meeting. I am very happy to have the four of you together again. It's been a while, so let's just get started with the progress update. I completed my investigation into pipelines for merged results and merge trains, and I have some points for the discussion. Robert?
B
Yes, so the merge train, the auto toggle on our merge train, seems to have been working as expected. I think we're just about ready to close that issue out; there was some documentation update that needed to happen, so there's one update from Meyer and one from me. I think we can finally close that out.
C
I've noticed that the toggle is firing quite frequently. Are those actually proper failures, or is it the mirroring problems?
B
No, there's an issue where people are getting a lot of emails about the mirroring failing on master, and then it instantly resolves itself. I think there's maybe a race condition in the mirroring functionality itself, but as far as the toggle goes, it's doing what it's supposed to do.
A
Yep, thank you. So, to try to summarize the status for pipelines for merged results and merge trains: the only thing we need to do to enable both of them is to enable the setting on the security project, and enabling pipelines for merged results will in fact enable the merge train. We can disable the merge train by using a feature flag, which is how the canonical project is currently doing it, but these features are coupled together, and they might be split in upcoming releases.
A
There is an issue about it that was created recently. Investigating a little bit, I think the merge train is not going to fit our security process, mainly for two reasons.
A
When processing merge requests, it is going to trigger pipelines, and there is no way to prevent specific merge requests from being added to the merge train; it applies to all of them. So in the case of the backports, when we merge them it is going to trigger a pipeline for every backport. Taking the example of the security release, it would trigger around 87 pipelines for the backports.
A
Last week I also noted some interesting points. I think the most important one is that there is no API to add a merge request to the merge train, so we would need to implement that API in order to be able to use it. Also, it is still possible to push commits once the merge request has been added to the merge train.
A
It is not completely compatible with, I believe, the retry keyword, and there are more notes on that experiment. But I think pipelines for merged results is actually a good fit, because it is going to execute the specs, and all the CI, based on the last commit on master, and I think it is going to alleviate the problem of approvals being reset.
A
Every time someone tries to update the merge request with the latest master, that happens. So basically the proposal would be: enable pipelines for merged results, and therefore the merge train; disable the merge train with the feature flag, just the same as in canonical; merge the merge requests targeting master by triggering a new pipeline and setting merge-when-pipeline-succeeds; and the backports will be merged immediately, because we can do that.
A
No, yeah, no, I don't think it's going to change anything, because this proposal doesn't actually change how we process the backports. Currently we merge them immediately, in batches, and this proposal is basically doing the same; it is not changing anything about the backports.
D
Point one: what does that actually mean? I can't quite get my head around it. When you say "enable pipelines", whereabouts?
A
Yes, so pipelines for merged results is a GitLab feature that executes the tests, and basically all the CI, based on the last commit on master. So if a merge request is created and it is behind master by 200 or 300 commits, it is not going to execute the tests on the merge request alone; it is going to execute the tests combined with what exists on master.
A
So that gives us some sort of guarantee that when we merge that merge request, it is not going to cause a failure, like a broken-master failure, because right now we don't have that.
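[A toy illustration of the guarantee being described, testing the merged result rather than the stale branch head. This is purely illustrative, not GitLab code; the "CI check" is a made-up stand-in:]

```python
def run_ci(files):
    """Stand-in for CI: 'passes' only if the two halves still agree."""
    return files.get("api_version") == files.get("client_expects")


# master moved on after the MR branched:
master = {"api_version": 2, "client_expects": 2}
mr_branch = {"api_version": 1, "client_expects": 1}  # stale but self-consistent

# A pipeline on the branch alone passes, even though merging would break master.
assert run_ci(mr_branch)

# Pipelines for merged results test master plus the MR's changes combined.
merged = {**master, "client_expects": 1}  # the MR only touches the client side
assert not run_ci(merged)  # the breakage is caught before merge
```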
A
Yes, so the proposal... no, no, not a single pipeline. Each merge request is going to trigger a new pipeline. But in the case, for example, of the security release, when we had like 27 or 29 security issues, merging those merge requests targeting master would trigger 29 pipelines, because we would only trigger pipelines for the ones targeting master. Whereas with the merge train, we would need to trigger pipelines for every merge request, not just the ones targeting master; also the backports.

A
So the merge train actually merges merge requests in sequence, from my understanding. It is going to merge, well, merge request one, and then it is going to execute the pipelines using the latest changes with that merge request, right? And then it is going to start processing another one, and then another one, using the latest changes, which include the recently merged merge requests.
A
Without the merge train, we are only going to execute the pipelines from the last master, which is not going to include the security merge requests. So there might be some scenarios in which the security merge requests cause some failures.
A
...which is using pipelines for merged results and then triggering merge-when-pipeline-succeeds. It is the same approach we use when merging regular fixes on canonical, so we are basically backporting that process into the security project.
A
Well, I don't think we have had the scenario in which security merge requests depend on each other. That would be tricky, because currently our tooling does not allow that.
A
Cool, okay. So the other point that I have is basically an issue that builds on top of this discussion we are having, about modifying the way our tooling processes security merge requests. To summarize, the new process is going to be:
A
We are going to enable pipelines for merged results, and we are going to disable the merge train. Merge requests targeting master are going to be merged during a specific time window, by triggering a new pipeline and setting merge-when-pipeline-succeeds. And backports are going to be merged during the actual security release, without triggering a pipeline, the way we do it now: just merge them immediately. And that's it.
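[The proposed flow can be sketched as pseudocode. Everything here is hypothetical: the function name and the action labels are made up, and the real release tooling would call GitLab APIs instead of returning a plan:]

```python
def plan_security_merges(merge_requests, in_merge_window):
    """Sketch of the proposed process.

    MRs targeting master are accepted via a fresh pipeline plus
    merge-when-pipeline-succeeds, only during the merge window.
    Backports (any other target branch) are merged immediately at
    security-release time, with no pipeline.
    """
    actions = []
    for mr in merge_requests:
        if mr["target_branch"] == "master":
            if in_merge_window:
                actions.append(("pipeline+MWPS", mr["iid"]))
        else:
            actions.append(("merge_now", mr["iid"]))
    return actions
```

[For example, with one master MR and one backport, the plan during the window is one pipeline-then-auto-merge action plus one immediate merge.]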
A
I do have some questions from when I was analyzing this. So far we have been talking only about merging merge requests targeting master that are associated with the GitLab security project, but we have more merge requests from different projects, like Omnibus, Gitaly, Pages, and Workhorse. Well, actually just Omnibus and Pages so far; for Workhorse and Gitaly we haven't had any merge requests. So what about those? Should we also merge them when we merge the ones targeting master, or should those be merged during the security release?
A
Right, right. I didn't remember the one from Gitaly. So the only problem is that if we merge any other merge request that does not belong to the GitLab project, the mirror is going to be broken. And so far I think we only merge automatically for the GitLab project, not for any other subproject, right?
C
But we can't just not consider it. I mean, please interrupt me if you have something to add, but...
C
We are running into a very, very dangerous situation if we don't consider those additional repositories, because they're the building blocks of GitLab as well; GitLab Rails is one part of the bigger puzzle. Imagine you merge a merge request bumping the Gitaly version, for example, without merging the fix itself, like a fix somewhere in master.
C
Okay, that's a terrible example, because we have additional logic around it, but the point I'm trying to make is: what if we end up in a situation where we are expecting one of those supporting projects to have a certain feature, but it doesn't, and GitLab Rails depends on that feature that we don't have, and we try to deploy this?
C
And we're not talking about insignificant or small projects, like, I don't know, the Auto DevOps templates. If that breaks, sure, it's not great, but it's also not super impactful.
C
Right, yeah, and as someone who had to sync Gitaly during the past security release, I was not amused. It wasn't a big deal, right? It's just one command. But it was super easy to forget when you have this nice list: you click this, you click this, or, sorry, you type this into Slack and you move on. It's really, really easy to forget.
I guess what I'm aiming to say is: we have the merge train, we have the code that is auto-toggling things, theoretically. Well, I know that the merge train definitely can take source and target projects, and source and target branches.
A
So perhaps we can give it a go and merge the merge requests from all the different projects, and we'll see what happens. I think for that we will need you, Robert, to modify the toggling to be project-agnostic.
A
So
I
am
not
sure
about
the
security
release
process
for
workhorse,
but
I
guess
it
would
require
a
bunch
of
merch
requests
right,
one
for
fixing
or
or
four
for
fixing
workhorse
and
another
four
for
updating
the
workhorse
version
on
gitlab
right.
So,
yes,
yeah
yeah.
I
think
I
think
we
will
need
to
also
merge
all
the
merch
requests
related
to
the
satellite,
because
if
we
are
going
to
merge
the
the
one
that
bombs
the
version,
we
will
need
to
also
merge
the
one
that
actually
introduces
like
fixes
that
vulnerability.
So
yeah.
C
One case only, for now. Myra, you were mentioning merge requests for backports and for master. Let's try to figure out what actually happens in master for GitLab.com, and then it becomes relatively straightforward.
A
Okay, got it. And another question that I have, moving on: when we are merging these merge requests targeting master, we cherry-pick them into the auto-deploy branch, for those projects that have auto-deploy branches. But now that we are cutting auto-deploy branches every six hours, it might not be necessary to pick them into the auto-deploy branch, because they are going to be included in the next one.
D
On one hand, it sort of makes sense. I worry a little bit about the fact that GitLab is a monolith, and it feels like treating parts of it differently. I wonder what sort of dependencies might be surprising there.
D
I
wonder
also,
if
it
sort
of
makes
an
assumption
that
there's
definitely
like
if
we
need
to
keep
track
of
things
around
the
security
release,
to
make
sure
that
you
don't
end
up
with
something
big
fixed
in
get
lab
on
one
auto,
deploy
and
then,
for
whatever
reason
we
have
incidents
whatever
and
we
don't
get
the
next
one
out.
Do
we
end
up
risk
like
losing
some
of
those.
A
Yeah, I mean, I think if we don't pick them anymore, the only thing that will change is that we won't deploy it as fast as we are deploying it now. If we merge one right now and pick it, it will be deployed in, like, two hours, whereas if we don't pick them, it will be deployed in four hours.
B
I think one confusing thing Martin and I saw in the last release was: a security merge request gets merged and then gets picked into the current auto-deploy branch, and then we kind of immediately cut a new auto-deploy branch. But because the pipeline for that merge hasn't yet run, the previous auto-deploy branch has more security fixes than the new one, and that's just confusing. Eventually it'll catch up, just from pipelines passing.
C
I'm gonna drop a bomb and then leave. I am challenging the decision to cut auto-deploy branches every six hours, because I think it's too frequent. If we calculate how much time it takes for a pipeline to run in the GitLab project, how much time it takes for a new auto-deploy branch to finish its pipeline, for QA to run, all of that adds up to six hours.
C
So
by
the
time
you
actually
consider
like
you,
are
ready
to
deploy
to
production,
maybe
you're
already
creating
a
new
auto
deploy
branch
and
from
what
I've
seen
in
the
past
10
days
or,
however
time
I've
been
back,
we
can
turn
around
to
fix
that
quickly,
so
every
six
hours
is
actually
really
confusing
right
now,
and
I
think
that
also
adds
to
this
thing
that
you
just
mentioned.
Robert.
C
Eight hours would follow the shift offset. It's set up so that whoever comes online in European time has an auto-deploy branch ready and theoretically already passing through staging, or maybe even out to canary. Same thing: if you offset it correctly for European time, you're automatically going to offset it okay in...
C
...the Americas as well; that's the coverage we have. You and Myra are both in not-PT time, just before that. And theoretically, within eight hours you can actually push things, and then on demand you can create auto-deploy branches if you want to. Like, if you see that you, as a release manager, don't have anything to deploy, click it and let it run.
C
And then, I mean, it does complicate our picking, obviously, because we have to do it then, but at the same time I think the confusing territory we were in last week, Robert, which this offset addresses, is way more dangerous, in my view. Consider it; I need to leave. That might be one solution.
The other solution is we just follow what Meyer is suggesting and not pick, and count on that. But then I would want to have some safeguards.
C
Notifications, alerts, a stop-the-train button; something to actually make sure that the release manager doesn't deploy an older version without the security fix. We don't want to go into a situation where we deployed something, announced it, and then the next deploy comes in and overruns it with an older version, because a pipeline failure caused us to go to a lower version than what we deployed.
A
Thanks. So, have we ever considered creating auto-deploy branches anchored on the green commits? Because, with that...
D
Basically, on this: yeah, Alessia and I were chatting about something similar but unrelated, about tagging and where we branch from and things like that, so I think it's definitely a good one to revisit. I certainly am confused about why we have both tagging and branches; it gets super confusing. Like today, I've got one where a new branch means staging kicks off, and a new tag on our older branch means staging kicks off. It's kind of confusing.
A
Yeah, okay. So since we have one minute, I'm going to open an issue and move the discussion there. Actually, we don't have any time for your discussion, Robert; sorry, I took all the time here.
A
Okay, so just to try to summarize what we are going to do: there are so many things pending. I'm going to work on implementing pipelines for merged results in our tooling, and then try to change the way the release tools process this. And I think, Robert...
B
Yeah, unfortunately I think that stable branch issue is going to be more complex than just changing where it gets created. Because if we're creating the stable branch from the last thing we deployed, it's usually going to be a commit that only exists on security. So we need to do the same thing we did for the deployment trackers, where we find the intersection: the first deployed commit that's actually on canonical, so we can create the stable branch from that, because stable branches have to exist on canonical for source installs.
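[The intersection step described here might look roughly like this. A sketch only: the real release tooling presumably walks commits via git, and the function name and sample SHAs are made up:]

```python
def stable_branch_base(deployed_commits, canonical_commits):
    """Find the most recent deployed commit that also exists on canonical.

    deployed_commits: newest-first list of SHAs from the deployed
    (security) ref. canonical_commits: SHAs reachable on the canonical
    repository. Stable branches must exist on canonical for source
    installs, so the branch point has to be a commit both sides know.
    """
    canonical = set(canonical_commits)
    for sha in deployed_commits:
        if sha in canonical:
            return sha
    return None  # no shared history; shouldn't happen in practice
```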
A
Oh jesus, why is everything so complicated? Okay, okay, then, so...
A
Okay, awesome. Well, thank you, everyone, for joining, and I will see you around. Bye.