From YouTube: Arm + AWS + GitLab demo
Description
With AWS's launch of its new Graviton2 instances, many companies are looking to test how their current applications will run on them. This video walks a user through the deployment process to AWS Graviton2 instances and provides techniques on how to create the necessary supporting infrastructure.
Playlists:
- GitLab and AWS (https://youtube.com/playlist?list=PL05JrBw4t0Ko30Bkf8bAvR-8E441Fy2G9)
- GitLab Alliances Tech (https://youtube.com/playlist?list=PL05JrBw4t0KqaWCrU4avIY6TQveKTDMso)
Hi, I'm Kelly. I'm a Solutions Architect here at GitLab, and I focus on working with the alliances team on a number of different initiatives. Most recently, over the last couple of weeks in this sprint, I was working on how we can take advantage of the new Arm architecture in the new instances coming up on AWS, specifically the M6g instances, which use the Graviton2 Arm-based chips. They're better price-performant, and there's just a lot of goodness coming into AWS here very shortly in general availability. All right.
So let me go ahead and share my screen here, and we can just get into what we'll cover in this demo. First and foremost, we have to have a runner that runs on arm64. By default on GitLab.com, we have runners that are running the Intel architecture; it's also called AMD64 or x86_64, however you want to put it. What we're doing here is actually bringing up a runner with Terraform and then registering it to the project. It's on an m6g.medium instance.
I'm only using two instances for this; AWS did request that we not go crazy with the preview that we had, so I thought two instances was good enough. So again, the first Terraform run is going to bring up that runner and register it. The second run is going to actually deploy the code: it's going to bring up an M6g instance, do a git clone, and off we go.
It's a simple Node application, as you'll see, for this demonstration today, and then one of the things I want to cover is some of the challenges that I found in the two-week sprint. All right, so there are three projects that I want to point you to. The first one is a Packer project.
I lean on this one heavily. After joining GitLab it drives people a bit nuts, because we're doing a lot of stuff cloud-native, but in reality there are still a lot of customers on EC2, and it works out very well for this project. So within that Packer project, I'm using version 1.1; you can see down below the actual link to that tree. And then there's the second project I'm using here; we wrapped the work into two projects.
This is the brand-new one I put up on GitLab a couple of weeks back, and basically it does what we talked about earlier: it brings up the runner and deploys. And then the third project I want to point you to is the MinIO container build. Unfortunately, a lot of Docker containers are not built for arm64, also known as Arm version 8. One of the things I've found is that the GitLab runners use MinIO locally for S3 caching, basically emulating AWS S3.
One of the things that I discovered very quickly is that when you attempt to run it, it errors out, because the MinIO community has not published, today at least, an Arm-compliant container to Docker Hub. All right, so let's go ahead and just go into the different projects. The first one I really want to focus on, and I guess the focus predominantly through this recording, is the Graviton2 project itself.
So in the Graviton2 project, let me just bring up the pipeline to show you exactly what's happening. I'm bringing up the most recently run pipeline. The first thing that happens is that a GitLab runner is deployed. That's where the Packer project comes into play: the Packer project builds a Docker runner image based on Docker CE (Community Edition) for arm64, and then the Terraform run itself adds additional goodness to it.
It basically adds the URL associated with the GitLab web server, and also the registration token for the project; I'll show you all the settings there. All right, so just very quickly on this Packer project: what it's doing is building those AMIs and publishing them in different locations. For the Docker runner arm64 image, I should say, it is only publishing to us-east-1, because that is where the preview installations are.
So again, if you want to look more at that code, by all means please take some time to do that. It's a fairly mature project; I've been building on it for the past year and change. All right, let me close out of the Packer project and move back to the Graviton project itself. All right, so the runner comes up and registers at that point.
What we're going to do then is have a dummy build. The reason we have a dummy build is that a lot of the containers, actually pretty much all of the containers that we have, are pinned to amd64, so they do have to be compiled to run on Arm. The next bit that happens is deployment of the code; basically, this deploys to an M6g instance on AWS. The next stage I have is confirm deploy; confirm deploy is really just going to check for an HTTP 200.
So it's going to connect, and it's going to say: OK, the site is running, great, let me go ahead and inform the user that the site's available. And then there are a couple of manual tasks here within our pipeline: one is to destroy the Arm runner; the other one is to destroy the preview instance itself. All right, so again, this is a very compartmentalized demo to show how this could be built; this would not necessarily be a production system.
This is more of just a demo itself, right. OK, so let me show you exactly what happens here within the runner job itself, the one that registers the runner. If I scroll through to the Terraform run, you can see that this is happening on an x86_64 architecture, so this is actually coming from one of the runners for GitLab.com.
So this is the shared runner it's coming from, and it's basically executing Terraform on that; you can see within it that the hashicorp/terraform:light container is being used. So what does this job do? It basically goes through the Terraform plan itself and looks at the Terraform files, at all the stuff that needs to happen, which is basically opening up firewall rules, or security groups, bringing up the instance with the AMI, deploying it, and whatever else needs to happen there.
So it's not completely a dummy project; there are some dummy stages, though. All right, the next thing: this is TF_VAR_runner_token. Within Terraform, you pass environment variables as TF_VAR, in caps, then an underscore, then whatever variable you're setting; that is something Terraform is going to pick up. So I'm going to pass Terraform the runner token, and I'm going to pass it the runner URL, based on where I'm getting this information.
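As a concrete sketch of that convention: the token value below is a placeholder, and `runner_token`/`runner_url` are assumed variable names matching declarations in the project's Terraform files.

```shell
# Terraform maps any environment variable named TF_VAR_<name> onto the
# input variable <name>; the TF_VAR prefix is uppercase, and <name> must
# match the variable block declared in the .tf files.
export TF_VAR_runner_token="PLACEHOLDER-TOKEN"   # hypothetical value
export TF_VAR_runner_url="https://gitlab.com/"

# Show which variables Terraform would pick up from the environment.
env | grep '^TF_VAR_' | sort
```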
So you can see there's gitlab.com and this runner token, which, if you'd like to register against it, I don't really care, because I'll reset it; but that's the value for it. Where you pick these up: if I expand the runner section right above that, you can see it has the URL that you should be using, which is what I'm passing, and the token that should be passed, and that's what I'm passing, right.
So, if I wanted to, I guess I could reset this and then set the variables again. All right, you can see that I do have a runner that's started and registered here. It has a couple of different tags, or a few different tags: I've said, OK, this is an AWS runner, it's a Docker runner, it's also an arm64 runner, and it's also a graviton2 runner. The reason I'm putting the graviton2 tag on it is that that's the tag I'm actually looking for and using within my CI YAML.
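A non-interactive registration along those lines might look like the sketch below; the URL, token, and description are placeholders rather than the demo's real values, and `register_cmd` only prints the command rather than running it.

```shell
# Sketch: register an arm64 Docker-executor runner with the tags used
# in the demo. register_cmd prints the command; run it on the instance
# with real values to actually register.
register_cmd() {
  printf '%s ' gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.com/" \
    --registration-token "PLACEHOLDER" \
    --executor docker \
    --docker-image "alpine:latest" \
    --description "graviton2 arm64 runner" \
    --tag-list "aws,docker,arm64,graviton2"
  printf '\n'
}
register_cmd
```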
The reason I'm doing that is that I did discover some issues with the arm64 tag, and I think it's because some of the shared runners were doing work across GitLab to enable Arm support, so I think maybe a preview runner picked the job up. So that's why I'm using the graviton2 tag as opposed to the arm64 tag. All right.
So, to see what this looks like, let me go ahead and pull up this YAML file I was referring to earlier, just to level-set: this .gitlab-ci.yml file is in the root directory of the project, and it basically outlines all the different stages and jobs that will run on GitLab CI. All right, so, we saw these stages a little bit earlier in the graphical representation, but I want to dive into this.
To give you an idea: stage one was to deploy the runner; two is to basically do a dummy build, and that's basically going to run a command on that arm64 runner to show that the arm64 runner is up and working; next is basically deploy code, then confirm deploy. So we can just walk through them. On deploy-the-runner, what I'm doing there is a `terraform init`, and then I'm writing out the plan, using a built-in variable, the pipeline ID, keyed with 'runner'.
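Those runner-job steps can be sketched roughly as follows. `CI_PIPELINE_ID` is a real GitLab CI built-in variable, but the plan-file naming is my guess at the scheme described above, and the `run` helper only prints the commands unless `DRY_RUN=0`, so the sketch is safe to read through without Terraform installed.

```shell
# Sketch of the deploy-runner job: init, plan to a file named after the
# pipeline ID, then apply that exact plan without prompting for input.
DRY_RUN="${DRY_RUN:-1}"          # default to printing, not executing

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

deploy_runner() {
  plan_file="runner-${CI_PIPELINE_ID:-0}.plan"
  run terraform init
  run terraform plan -out="$plan_file"
  run terraform apply -input=false "$plan_file"
}

deploy_runner
```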
The reason being, it's not a problem to just call them the same thing, but I just wanted to distinguish between the runner plan and the deployment plan. I'm then applying that plan file with no input, so the machine just does it, right. So that's what's happening. Once it's done, I want to keep those artifacts, so I'm going to set them to expire in six months. Now, you can see what I showed earlier with the variables: RUNNER_DEPLOY. If that variable exists, then the job runs; if I removed that RUNNER_DEPLOY variable, this job would not run. So this is some of the logic that's built into the CI.
Within the build-code job, it's a dummy build using the alpine:latest image from Docker Hub, and all I'm doing there is checking what the architecture is, to ensure that I'm in the right place, and then just echoing 'test'. So nothing too amazing there.
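A minimal sketch of what that dummy build's script section could look like, just proving the job landed on an Arm machine:

```shell
# Print the machine architecture; on the Graviton2 runner this reports
# aarch64, while the GitLab.com shared runners report x86_64.
arch="$(uname -m)"
echo "running on: $arch"
case "$arch" in
  aarch64|arm64) echo "arm64 runner confirmed" ;;
  *)             echo "note: not an arm64 machine ($arch)" ;;
esac
echo test
```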
And again, there are multiple dependencies here that we could pull in and pin down to really make this more production-ready. I'm only going to run this on runners that have the tag graviton2, and I know that I'm the only person with a Graviton2 runner here, so I know that I have control over that. I can also disable shared runners on the project and only use those dedicated runners. All right, on to the deploy-code job.
What I'm doing there: since I'm only doing a git clone, there's actually a script that runs in the user data, basically what happens when the instance comes up. And since I'm doing a git clone, and I'm making changes, I want to make sure that if there's already an instance there, I taint it. Taint is a keyword within Terraform that says this resource should be recreated no matter what, and I'm always doing that, so it recreates that resource and does the git clone every time.
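That forced recreation can be sketched as below; `aws_instance.preview` is a hypothetical resource address standing in for whatever the project actually names the instance, and newer Terraform releases deprecate `taint` in favor of `terraform apply -replace=ADDRESS`. As before, the `run` helper only prints the commands unless `DRY_RUN=0`.

```shell
# Sketch: mark the preview instance for recreation, then apply so the
# next run destroys and rebuilds it (re-running the user-data clone).
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run terraform taint aws_instance.preview
run terraform apply -input=false -auto-approve
```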
The next stage, and actually the next job, though I should stay with stages, is confirming the deploy. What that's going to do: there's a little while loop, where it basically says, OK, for up to 300 seconds, check every second to see if the site is up; if it doesn't come up in three minutes, sorry, five minutes, it's probably dead. So I'm just looking for a 200 HTTP code, and once I get that, everything's great. And again, I'm going to run this on the graviton2 runner. You'll see that some of these jobs do not have tags, so they'll run on whatever is available, which is going to be x86_64, right. All right.
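That polling loop can be sketched like this; the function name and the example address are mine, not the project's:

```shell
# Poll a URL once a second for up to $2 seconds (default 300); succeed
# as soon as curl sees an HTTP 200, fail once the timeout is exhausted.
wait_for_200() {
  url="$1"; timeout="${2:-300}"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    code="$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)"
    [ "$code" = "200" ] && return 0
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "no HTTP 200 from $url within ${timeout}s" >&2
  return 1
}

# Usage (hypothetical preview-instance address):
# wait_for_200 "http://203.0.113.10" && echo "site is up"
```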
The next bit is that I have some manual jobs: one is to destroy the Arm runner, and the other one is to destroy the preview site itself.
So what does this application look like? Very simple: it's a notes board. You can see that there are some bits here; I can update these, and again, this is just four different notes. I can also delete those, and then you can see that there are only so many notes there, right. OK, so say, for example, I publish a note and I say 'test': you can see that here I have this UTC timestamp.
What happens if I wanted to change the date, for example? So let me go ahead and open up my handy-dandy project itself, and I'm going to go into the app. You can see that I have this UTC timestamp; I have already been running this back and forth, so I'm going to go and uncomment one bit, and I'm going to go in and comment out another. So, instead of having an hour-minute-second UTC timestamp on that, I'm now just going to have the date, right. This is how the project was by default.
Hopefully everyone's following along with me. OK, so let me go ahead and push that up, and what's going to happen is that I push that up and now GitLab is going to see there's new code. All right, so what I'm going to do now, here we go, is go ahead and kick off another pipeline, so we can show the pipeline running; there it is. All right, so if we look at this, if we look in the pipeline, you can kind of follow along with what's happening here.
Basically, there's the deploy-runner job; since the runner's already there, it's going to go, OK, not much to do, and move on to the next job, the next stage. But while this is running, this is kind of like the baking-the-cake type thing, so while this is running, I want to go back and bring up a bit that I thought was both fun and useful, since I do have that runner.
Actually, let me just wait for this runner to come up with the IP address, because I'm going to get into it. You see, I've got my... yeah, all right, good. OK, so this is the IP address of the runner, and I've got my private key on there, so let me go ahead and take a look at it. All right, so I'm logged in. Let me just take a look at what's happening right now in Docker: you can see that there's this MinIO image that it's pulling from, and there's a big difference there. All right!
So let me do another one: let me do a `docker run -it hashicorp/terraform:light`. OK, so I'm going to pull that Docker image, and again you can see, just like we saw earlier, we get this exact 'exec format error'. What this means is that the Docker image has not been published for Arm, which is why I was actually building this myself.
All right, OK, so that was basically just showing how the runner itself is using an image that has been created as part of this project. So this is how you can build your images automatically with GitLab: the first thing you're going to do is use docker buildx.
So this is in the MinIO arm64 project, and buildx basically builds the image, pops it up as an artifact to GitLab, and then it's used in the second step, where it's actually deploying. I found this through Stack Overflow, so I want to give attribution to the author; I gave a good thanks to them on Twitter, and it's great to see the power of the community overall.
So this is a solution I found on Stack Overflow. All right, so buildx here is actually building for amd64, arm64, and Arm version 7; arm64 is basically Arm version 8. This is where I'm building for three different architectures, so if I were to pull this Docker image from the registry, which is here, I could pull it on this Arm-based machine.
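A multi-arch build along those lines can be sketched with docker buildx as below; the registry path is a hypothetical stand-in for the project's real one, and the `run` helper prints the commands unless `DRY_RUN=0`.

```shell
# Sketch: create a buildx builder, then build and push one image
# manifest covering amd64, arm64 (Armv8), and Armv7 in a single step.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run docker buildx create --use
run docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag registry.gitlab.com/example/minio-arm64:latest \
  --push .
```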
You can see that it will run here with no problem; this is on Intel, x86_64. OK, let's go back to our project and see where we are. So, our pipelines: it looks like, OK, we're waiting; the confirm-deploy is doing its thing, so let's go ahead and catch up with that. Almost perfect timing.
This is using the curl image, and again, every second it's going to go out and test to see that the site's available; once it gets a 200, it will complete. All right, so I've found in general it takes about 45 to 50 seconds for this to come up. Again, this is the old site; remember, you can see the UTC timestamp. So what was highlighted is basically what's going to change, what should change. It looks like it's ready, so let me go ahead and take a look at that cake.
All right, you can see there's no data, because I did not do any kind of data migration, all right. So this is what it looked like with the UTC timestamp, and this is what it looks like now, so we actually went through and made a change on that notes app, and there we go. That's about all I've got to share there. If there are any questions, please do come back to us. OK, I'm going to stop recording now.