From YouTube: 2020 05 26 GitLab L2H Demonstration
Description
A presentation on continuous work-migration capabilities in GitLab, for environments where the Development and Production environments sit on secure, disconnected networks.
Hello everyone, my name is Samir Kimani, and today I'm going to present some interesting features of GitLab that are valuable for various government agencies. The one that is most interesting is the development of code in a less-than-classified environment, which could be an unclass or other lower-sensitivity environment, and the movement of that application up to a higher-classification environment. We tongue-in-cheek call it "low to high" (the L2H in the title), and essentially that is what it stands for.
With regards to why this is a problem: today, you might have a set of developers building an application in an unclassified or lower-classified environment, and moving the artifacts for those projects from the lower-classification environment to the higher-classification environment requires some amount of effort. In particular, the way this has been optimized is by compiling the entire project, or set of projects, into one big executable or one big package.
Extensive notes are written on how to deploy and manage that package. All of that information is captured in the lower-classification environment and somehow pushed up to the higher-classification environment, either through the traditional mechanism known as "sneakernet" or with more modern considerations around how a diode might operate. Essentially, minimizing what is transferred is how the industry has approached the problem, and how we have tried to solve it so far. Now, what does that really do?
What it does is this: you have teams on the lower end of the classification spectrum whom you might be interested in hearing from, or who are contributing to the project. That collaboration happens, but the information remains in that area. The only thing that moves up to the higher-classified environment is a fully blown working application. So there are two considerations. One is agility: how quickly you are able to take in information from all of these groups.
That can cause some obvious concerns, especially with software that updates more frequently, or if you have libraries that need to be packaged: you have to go grab every artifact, package it properly, and be able to move it up. That then leads to a much grander problem of what the receiving end actually gets in terms of information.
This is where GitLab comes in. GitLab has a unique differentiator in that this pain point, this specific use case, can be solved in a much, much simpler way. As you may already know, GitLab is a single application for the entire DevOps lifecycle. It has everything from planning, to source code management, to package management, to actual deployment, to CI/CD pipelines, to containerization. All of that is packaged within one application to support the entire process.
The export could be a cron job, or, more interestingly, a GitLab pipeline can be set up to run the export, generate the tar file, and put it in the appropriate location. On the flip side, on the higher-classification end, the import can also be scheduled. What that means is that you can encompass all of this functionality within a GitLab instance.
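As a sketch of what that scheduled export could look like, here is a minimal, hypothetical `.gitlab-ci.yml` job. It assumes a pipeline schedule triggers it and that `API_TOKEN` (a personal access token) and `EXPORT_DIR` (the drop directory watched by the cross-domain transfer) are CI/CD variables you define yourself; the fixed sleep is a deliberately crude stand-in for polling the export status.

```yaml
# Hypothetical scheduled job: ask GitLab to package this project,
# then drop the tarball where the cross-domain transfer can pick it up.
export_project:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Schedule the export; GitLab builds the tar.gz asynchronously.
    - 'curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/export"'
    # Crude wait for the export to finish, then download it.
    - sleep 60
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" --output "$EXPORT_DIR/project_export.tar.gz" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/export/download"'
```

The mirror-image job on the high side would watch the same drop directory and POST the tarball to the import endpoint on its own schedule.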
You do not need extra tools to do this, except for the cross-domain functionality, which is where we've marked out very clearly that something would need to happen. Typically, in the more traditional sense, that is where the quote-unquote sneakernetting might happen, which is the "cross-domain magic" we've noted on the slide here. In the more modern sense, a diode of some sort is implemented so that packages can be left on one side and then picked up on the other.
So this is an example of a low-to-high use case that has been implemented at one of our customer sites. Basically, there is an unclassified environment where the coding goes on: development of the application, management of the application, and some security testing done at that level. This is, by the way, one potential way it can be done; the flexibility exists, and whatever jobs you see on the unclassified side could be running on the classified side as well. It really depends on where and how you want to implement it.
That is something to consider: if you have a security analyzer that charges you by the project, it might actually be more interesting to migrate those jobs to the higher end and run them in that environment, so that when the artifacts come out, they have been properly vetted in the environment you are actually running in. That is one alternative suggestion from our side. But essentially, the way it works is that you have your unclassified side, where things can be happening.
There is a fail-fast, fix-fast mentality over there. You could be fixing things and deploying them into the unclassified environment for people to view, and then at some point you package that up, whether as containers, some sort of library, or some sort of executable. It doesn't matter; the point is that you can send the code up through this process and have the entire codebase, as well as its history, available on the other side.
If you have a tool like a security analyzer, or some continuous-integration component, that you want to keep and preserve because you have expended quite a bit of energy building it up, that's fine as a starter, as a first iteration. But as you will see, even in that situation, as soon as you fall out of the one-application model, you now have to worry about how to knit those pieces together on both sides of the classification boundary. So that is something to consider.
Whereas if everything is in GitLab, you have a much lower barrier of worrying about what is going up to the higher classification, what is not, and how it is being managed; all of those considerations are simplified. So, in conclusion with regards to the slide deck (and I'm going to go into a demo very shortly), the point is to maintain a single source of truth.
You can then really get to that aspect. In today's world, where you have multiple applications, you might be storing the same information in multiple places. You might have a ticketing system storing information about why you are doing something, where the requirements are, and where the user stories are. But then, when you modify your code and check it in, there is probably some sort of commit message that you put in as part of the change, which has some valuable information.
And then, arguably, your continuous integration pipeline is probably kicking off at some point, in yet another separate system. All of those things are knit together to your exact specifications so that they perform in concert. Well, now you have to replicate that same architecture in a separate environment and make sure that the versions of each of those components are the same and that the plugins are the same. All of that has to be managed. Versus with GitLab:
You have the ability to get away from that and have one single application. It is the same thing with the ATO process: with multiple applications, you are worrying about providing an Authorization to Operate for each of those components, and that adds to your bottom line, your timeline, and the energy and time expended by all the other personnel who have to be involved. Instead, you have a single package to move across networks.
The tar file generated out of GitLab is a single file containing the vast majority of the artifacts that need to go from one side to the other. You can have a single user profile across both networks, which gives you the governance aspect: you don't suddenly have to break that model, import everything as an import user, and do different things around that. And then, of course, you can create pipelines that are specific to each network.
So, as you migrate the project, you can build out separate projects that mirror the code, do different things with it, and add to that pipeline. You can also override the pipeline aspects of the project you are importing, so there are a lot of tips and tricks around implementing and facilitating that sort of process. And last but not least, GitLab integrates with PIV or CAC smartcard PKI systems.
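One way to override the imported project's pipeline is to point the high-side project at a network-specific CI file through the project-settings API (`ci_config_path`). A hedged sketch of that request; the hostname, project ID, token, and file name are all placeholders, not values from the demo:

```python
import urllib.parse
import urllib.request

def set_ci_config_path(base_url, project_id, token, path):
    """Build PUT /projects/:id with ci_config_path, so the imported
    project runs a network-specific pipeline file instead of the
    .gitlab-ci.yml it shipped with."""
    data = urllib.parse.urlencode({"ci_config_path": path}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v4/projects/{project_id}",
        data=data,
        method="PUT",
        headers={"PRIVATE-TOKEN": token},  # personal access token
    )

# Hypothetical high-side instance and file name.
req = set_ci_config_path("https://gitlab-high.example.com", 7, "glpat-...",
                         ".gitlab-ci.high.yml")
```

Sending `req` via `urllib.request.urlopen` would apply the setting; from then on the project's pipelines are defined by the high-side file, independent of what the low side exports.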
That provides you the ability to control who has access to what, and that is also integrated within GitLab, so it is something to consider as well: a single point of entry and a single point of provisioning with regards to that construct. So, now to the demo.
What I have here is a lower-security system that I have put together; the assumption is that the developer would be logging in over here. Similarly, I have a higher-security system where the operations folks would be working. Notice that the address bar is displaying different things, and that is because of the high and low sides. Also note that both of these machines actually reside on separate networks and do not have visibility to each other, so this example is designed to imitate what happens in the real world. Now I'm going to log in as a developer. I see a multitude of projects going on here, but for the sake of this demonstration I will go to this particular project, "hello happy," which is a Node.js application.
It has a bunch of files in it. It has a Dockerfile; it has a .gitlab-ci.yml file; and there are instructions on how to actually manage it. Think about how, in your current environment, those instructions go into some sort of Word document, or a Confluence area, or something similar. Well, they can actually be encapsulated as part of your code repository and stored right here, so developers have easy access to them; they can manage things and work on them.
Another aspect of this project is a wiki, which is part of what GitLab projects already provide. Somebody could go in and create a wiki and also have that migrated over from one environment to the other, and therefore keep things moving along. So if that is the mode of operation you currently have, we support that as well. Now, what are the different artifacts within this particular project? It isn't just a code repository; a GitLab project consists of a code repository and more.
It also consists of issues. Think of issues as your user stories, or your requirements, or whatever you want to call them. Basically, you can capture information and details here with regards to what you are working on. You can also see who an issue is assigned to; in my case, I'm going to assign it to myself. I can put it in some sort of epic.
I can put some sort of due date on it and put some labels on it, so all the natural features of any planning or work-management system are embedded in here. Now, let's assume that I'm ready to work on it. At this point, I would go and create a merge request. That is another artifact that we capture within GitLab: the concept of a merge request.
The merge request also has a pipeline that ran as part of this change, and I can go and modify the code right within GitLab by clicking Open Web IDE. I will be able to see all the files; here is my Dockerfile, and wherever it can, GitLab actually provides syntax highlighting. So here is a JavaScript file with syntax highlighting, showing me what it used to be and what it is now. In my case, this file hasn't changed, so it doesn't really show any changes, but if I did change something, it would show those changes too.
Going back to the previous page, where I had the merge request, there are other artifacts here as well. As part of my developer process, I may have run some sort of security scan or dependency scan, which GitLab, by the way, already packages as part of its pipeline; I will go into more detail on what the pipeline looks like. And at the end of it, if I merge this, I have an exact way of merging it back into master.
So the development lifecycle on this lower end is very, very simple, and it is all available. I also have the ability to have approvers within this area, so I can have a set of approvers who might be interested in moving this forward and who are able to see what is going on. With regards to the pipeline itself, there happens to be a pipeline that was run over here. I can click into it, and it shows me exactly what the pipeline results were; there was a build process that ran.
There were testing processes that ran, and these were all run in parallel, by the way. A test was conducted, along with static analysis, license management, dependency scanning, container scanning, and code quality; all of those checks were run on this particular piece of code. In this case, some flags were raised because there were things that probably didn't look right, and I can go look at my security dashboard to see what those vulnerabilities turned out to be. In this case I didn't have any vulnerabilities, but I do have some failed jobs.
So I need to look at what happened. Assuming that was all fixed, I probably have some set of pipelines that have been run. I happen to have one that passed, and it shows that there was a build, there was a test, and a release was created at the end. So that is the other artifact that is available.
With regards to the artifacts available on my low side, there is much more: there is, obviously, Operations; there is a package registry, which has a container registry within it; there are some analytics pieces; and so on and so forth. There are also snippets. In any case, you can have a script that runs, you can have a pipeline that runs, and you can have many different ways of automating this.
You could even have a cron job that runs. But with regards to what is actually happening during those steps, it is all API-driven. I have here an application that I use to interact with the GitLab API. I may want to first export the group, and that is functionality that is available. What a group export gives me are the epics and certain other group-level artifacts that I may want to move with me, like subgroups, namespaces, and things like that.
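That group export is a single POST to the REST API. A minimal Python sketch of the request being sent; the hostname, group ID, and token here are placeholders, not values from the demo:

```python
import urllib.request

def build_group_export_request(base_url, group_id, token):
    """Build POST /groups/:id/export, which schedules a group export.

    GitLab packages the group (epics, subgroups, and other group-level
    artifacts) asynchronously; the tarball is fetched later from
    /groups/:id/export/download.
    """
    url = f"{base_url}/api/v4/groups/{group_id}/export"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"PRIVATE-TOKEN": token},  # personal access token
    )

# Hypothetical values; in the demo this would point at the low-side instance.
req = build_group_export_request("https://gitlab-low.example.com", 42, "glpat-...")
print(req.full_url)  # https://gitlab-low.example.com/api/v4/groups/42/export
```

Calling `urllib.request.urlopen(req)` would actually send it; the demo uses an API client application for the same call.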
Then I have the ability to migrate a project. I have a project over here, and in my case I actually know its ID, so I'm going to send the request over. This gives me the details of the project that I have. I don't really need this information, but I'm validating that this is actually a valid project. There is another API where I can search for a project by its name, so that is another thing I could have done instead of using an ID.
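The two lookups described here, by ID and by name, map to two endpoints. A small sketch of the URLs involved; the hostname and values are illustrative, not from the demo:

```python
import urllib.parse

def project_search_url(base_url, name):
    # GET /projects?search=<name> returns matching projects with their IDs.
    return f"{base_url}/api/v4/projects?search={urllib.parse.quote(name)}"

def project_url(base_url, project_id):
    # GET /projects/:id returns the project details used above to
    # validate that the project exists before exporting it.
    return f"{base_url}/api/v4/projects/{project_id}"

base = "https://gitlab-low.example.com"  # hypothetical low-side instance
print(project_search_url(base, "hello happy"))
print(project_url(base, 17))
```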
I could have searched by name, received the ID, and then queried by the ID; again, that is just one other step I could add into my process. From there, I have the ability to export a project, and I happen to have sent an export request. When I do that, it basically comes back and says, "Yep, thank you very much."
It allows me to save it to a specific location, and in this tool that I have, I've actually put it into a specific directory over here. But if you had a cron job or a script, you could easily take this tar.gz file and place it wherever you need to. The next step would then be to actually sneakernet it, or move it through the diode process, over to your higher environment.
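The export-then-download flow being driven through the tool can be sketched as two API requests; a cron job or script would then write the downloaded tar.gz into the diode drop directory. The hostname, project ID, and token are placeholders:

```python
import urllib.request

BASE = "https://gitlab-low.example.com/api/v4"  # hypothetical low-side instance

def export_request(project_id, token):
    """Build POST /projects/:id/export, asking GitLab to package the
    project (repo, issues, merge requests, wiki, etc.) into a tar.gz."""
    return urllib.request.Request(
        f"{BASE}/projects/{project_id}/export",
        method="POST",
        headers={"PRIVATE-TOKEN": token},
    )

def download_request(project_id, token):
    """Build GET /projects/:id/export/download, which fetches the
    finished tarball once the asynchronous export has completed."""
    return urllib.request.Request(
        f"{BASE}/projects/{project_id}/export/download",
        headers={"PRIVATE-TOKEN": token},
    )
```

A script would send the first request, wait until the export status is `finished`, send the second, and save the response body as the single file to move across the boundary.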
On the high environment, what I will see is a bunch of projects that I have, and there it is: there is my project right there, "import test 2020." It was created about five minutes ago, which means I just created it. As I click into it, what you will notice is all the different pieces that I had on the lower-classification environment. I have the releases right over there.
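On the high side, the matching import is a multipart POST of that tarball to `/projects/import`. Because a multipart upload is verbose in pure standard-library Python, this sketch just assembles the equivalent `curl` command; the hostname, file name, and project path are illustrative:

```python
def import_command(base_url, tarball, path):
    """Return the shell command (as a string) that imports an exported
    tarball. POST /projects/import takes the tar.gz as multipart form
    data plus the path the new project should get on this instance."""
    return (
        f'curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" '
        f'--form "path={path}" --form "file=@{tarball}" '
        f'"{base_url}/api/v4/projects/import"'
    )

# Hypothetical high-side instance and names.
print(import_command("https://gitlab-high.example.com",
                     "project_export.tar.gz", "import-test-2020"))
```

Like the export, this can be wrapped in a cron job or a scheduled pipeline that watches the drop directory on the high side.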
It is unpackaged and available for me. All the particular issues that I have are also available to me, as well as the pipeline runs from the low side, which are available for me to view. Now, why is this important and interesting? It is very important and interesting because at some point you can construct a separate .gitlab-ci.yml file to deploy this code, and so you can rerun all of those jobs in the environment where you are, to construct the output file in the process. The benefit of doing that is this:
You are compiling, or running, the tools that you need to run within the environment where you need to run them. If it is on the high side, there are certain security restrictions that may cause early warning signs when you are compiling. If you had compiled it outside of this process, you would not know about those early warnings until you were actually trying to implement it, and then you would realize that there is some barrier that is not allowing you to implement it.
So, once again, you would have that back and forth. But with GitLab, because you are actually constructing it right here, you are presumably able to get over those issues right here. And in this construct, you can actually create an issue right over here, you can create a merge request, and you can have a code change that you implement right here. So let's assume that there was an error over here; I will go and create a new issue, so this becomes a living, breathing project on the high side, and I can say "error occurred."
Once the merge request actually gets constructed, you will notice that it is tied to the issue. Here is my new branch that I've created in this particular environment; I can click Open Web IDE, and I can actually make some modifications to the code. So let's take an example where I actually do modify the code here.
I have the ability to see the changes that I make within this particular merge request, and if I am ready to actually implement them, I can implement them, test them, and validate that they actually work in my environment. But here is the big, important piece: the loop back to the development team, to inform them that I have made these particular changes in this environment, changes that were needed to get it working here and that they need to be aware of.
For the code that needs to be changed, I could email it to the individual, or somehow get it to them, so that they can look at the specific lines of code, make the same modifications in the originating repository on the low side, and then come back up through the process into the high side, with that becoming part of the normal process. So there are lots of benefits over here; as you can see, I can manage my issues.
If I were an operations individual, and I am very interested in exactly what the development team is still working on in terms of the DevOps model, I have visibility into that. Similarly, I can go create issues that are relevant to me on the low side and then have those pulled up into the high side as reminders that I need to work on something, as well as have that same project overview detail come in as well, which is documentation.