Description
In this presentation, I will walk you through the process of creating an enterprise-level AWS infrastructure. Throughout it, we will create an infrastructure comprising a VPC with four subnets in two different availability zones, with a client application, a backend server, and a database deployed inside. Our architecture will be able to provide the scalability and availability required by modern cloud systems. Along the way, I will explain the basic concepts and components of the Amazon Web Services platform.
Hello everyone, my name is Mikhail and I'm from a Polish company called Greypap, where we specialize in cloud and AI technologies. I'm an AWS Certified Solutions Architect, and today we're going to talk about AWS infrastructure.
Our infrastructure will comprise four subnets, two public and two private, distributed across two different availability zones for high availability. In the public subnets we'll host our client application and the NAT gateway, while in the private subnets we'll host the backend server and the RDS database.
The first part is architecture scaffolding, where we'll deal with the setup of the VPC, subnets, and gateways, and with the configuration of routing tables. Then we'll work on the virtual machine and database setup: we'll create EC2 instances, use AMI images, and configure our RDS database. In the end, we'll work on load balancing, and we'll deploy and run our applications.
So first, let's start with what a Virtual Private Cloud, or VPC for short, is. It's an isolated virtual network, part of the AWS network, in which we can launch our AWS resources. Within the VPC, our resources have private IP addresses through which they can communicate with one another. We can control access to all the resources inside the VPC and route outgoing traffic however we like. However, we cannot launch EC2 instances directly into the VPC; for that we need a subnet.
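In the demo we'll create the VPC through the console wizard, but the same thing can be sketched with the AWS CLI. The CIDR block, the Name tag, and the VPC ID below are assumptions and placeholders, not the wizard's exact values:

```shell
# Rough CLI equivalent of what the VPC wizard does:
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=user-manager-vpc}]'

# Enable DNS hostnames so instances in the VPC get resolvable names
# (the vpc-id below is a placeholder):
aws ec2 modify-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --enable-dns-hostnames '{"Value":true}'
```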
Subnets are additional isolated areas that have their own CIDR block and routing policies, and we can launch instances inside them. Subnets allow us to create different behaviors in the same VPC. For instance, we can create a public subnet that can be accessed from, and has direct access to, the internet, the outside world; and we can have a private subnet which is not accessible from the internet and has access to it only through the NAT gateway, the Network Address Translation gateway.
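The public/private split described above comes purely from CIDR blocks plus routing; creating the subnets themselves looks the same either way. A CLI sketch, where all IDs, CIDRs, and AZ names are placeholders (the wizard computes them for us):

```shell
# Placeholders: substitute your own VPC ID and your region's AZs.
VPC_ID=vpc-0123456789abcdef0
AZ_A=eu-west-1a
AZ_B=eu-west-1b

aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone "$AZ_A" \
  --cidr-block 10.0.0.0/24 \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=public-subnet-a}]'

aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone "$AZ_A" \
  --cidr-block 10.0.1.0/24 \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=private-subnet-a}]'

# ...and the same pattern for public-subnet-b / private-subnet-b in $AZ_B.
```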
Now, let's move to the fun part: the AWS Management Console. In the top right corner we can select an AWS region. AWS regions are geographical areas in which AWS has its data centers, and regions are further divided into availability zones, which are independent data centers relatively close to each other, used for redundancy and data replication.
So in my case, I'm using the region closest to me. It doesn't matter for this presentation, but it might matter for your organization, for instance due to privacy policies. As I said before, we're going to start with the scaffolding of our application. First we'll set up the VPC: we go to the VPC service dashboard, and we're going to click Launch VPC Wizard.
We'll start with availability zone A and name the subnets: public-subnet-a and private-subnet-a. As I mentioned before, in order to have access to the internet from our private subnets, we need a NAT gateway. AWS provides us with a managed NAT gateway service; we just need to allocate an elastic IP address for it. So in another tab we're going to go once again to the VPC service dashboard, and we'll go to the Elastic IPs tab.
Here we can allocate an elastic IP. An elastic IP is just a static IP address which is managed by AWS, and we can attach it to and detach it from our resources freely. That's where its elasticity comes from: for instance, if one of our resources fails, we can simply attach it to a healthy one. So I'm going to allocate one elastic IP right now, and I'm going to change its name to nat-a-ip.
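Allocating the address and wiring it to a NAT gateway can also be sketched from the CLI. The subnet and allocation IDs below are placeholders:

```shell
# Allocate a static address owned by our account (returns an AllocationId):
aws ec2 allocate-address --domain vpc \
  --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=nat-a-ip}]'

# Create the managed NAT gateway in a *public* subnet, bound to that address:
aws ec2 create-nat-gateway \
  --subnet-id subnet-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0 \
  --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=nat-a}]'
```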
Now, if we go back to the first step, we should see our elastic IP selected. We have some other options here, such as, for instance, dedicated hardware, but that's very costly and we don't need it for this presentation, so I'm just going to click Create VPC, and right now AWS is going to configure our VPC.
If we go to the VPC dashboard, we'll already see it. The part which takes the most time is the configuration of the NAT gateway. So if we go to NAT Gateways, we'll see that the NAT gateway is still in the pending state. We can wait for it, but in the meantime we can configure availability zone B, and in order to do that, we need to manually create subnets in that availability zone. So I'm going to click Create subnet and select our VPC.
Okay, so maybe it's still creating; in the meantime we can talk about security in the VPC. Security inside the VPC can be managed with the use of three key structures: security groups, network access control lists (NACLs), and route tables. Security groups work like mini firewalls: they define the allowed incoming and outgoing traffic, they work on the instance level, can be shared among many instances, and can allow access from other security groups.
At the same time, we can use NACLs, which are IP filtering tables for incoming and outgoing traffic, and they work as an additional security layer on top of the security groups. The difference is that they work on the VPC or subnet level instead of the instance level, and they support deny rules, so we can use them, for instance, for blacklisting specific IPs.
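A blacklisting rule like the one mentioned might look like this from the CLI; the ACL ID is a placeholder, and 203.0.113.5 is a documentation IP used only as an example:

```shell
# Deny all traffic from one address. NACL rules are evaluated in
# rule-number order, so the deny must sit below (be numbered lower than)
# the allow rules:
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 90 \
  --protocol=-1 \
  --cidr-block 203.0.113.5/32 \
  --rule-action deny
```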
We set the CIDR block, 10.0.3.0/24, and click Create subnet. So now we have all the subnets we need, and we can move to the creation of a NAT gateway for the second availability zone; we also need to do that manually. So we're going to click Create NAT gateway, name it nat-b, place it in public-subnet-b, and allocate another elastic IP address for it.
That's it. The last step in our infrastructure scaffolding part is the setup of routing rules, so we're going to move to Route Tables, and we can see that AWS set us up with two route tables by default: we have the main route table and an additional one. We start with the main route table: we're going to name it main-rt, and we're going to use our main route table for our public subnets.
Now we can look at the routing rules: Edit routes. We have two entries here. The first one says that each IP address which is part of the VPC CIDR block should resolve locally, and that's fine for us. The second one redirects the outgoing traffic through the NAT gateway, but in the case of the public subnet we want it to go through the internet gateway.
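The two default routes described above boil down to one `create-route` call per table; the route table, gateway, and NAT IDs below are placeholders:

```shell
# Placeholder IDs.
PUBLIC_RT=rtb-0aaaaaaaaaaaaaaaa
PRIVATE_RT=rtb-0bbbbbbbbbbbbbbbb

# Public subnets: default route straight to the internet gateway
aws ec2 create-route --route-table-id "$PUBLIC_RT" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0

# Private subnets: default route through the NAT gateway
aws ec2 create-route --route-table-id "$PRIVATE_RT" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0

# The "local" route for the VPC CIDR exists in every table automatically.
```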
And now I need to change the association here to private-subnet-b, and again in the routing rules we need to add a rule for outgoing traffic to go to the nat-b gateway this time.
So we finished our infrastructure scaffolding part. We have four subnets in two different availability zones, together with NAT gateways in both availability zones. So the next step is going to be the setup of our virtual machines and the database.
Here we can choose an Amazon Machine Image. AMIs, Amazon Machine Images, are basically image templates that contain software, the operating system, runtime environments, and our actual applications, and they are used to launch EC2 instances. We can use some pre-configured AWS AMIs, such as the ones here, or we can have our own AMIs. For now, I'm going to use just the basic Amazon Linux 2 AMI, but later on we'll also use our own.
I pre-configured two IAM roles, a server EC2 role and a client EC2 role. They basically provide our instances with access to some other AWS services, such as Systems Manager Session Manager, Parameter Store, et cetera.
One more thing we'd like to do is use the user data script. The user data script is a script which is executed during the boot-up of our EC2 instance, and I prepared a script here which will basically install Java 11 and download the JAR file of our back-end application from the AWS S3 service. AWS S3 is a file storage service, which we use to store our JAR file right now. So I'm going to click Next.
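A minimal sketch of such a user data script, assuming an Amazon Linux 2 instance; the bucket and JAR names are assumptions, not the ones used in the demo:

```shell
#!/bin/bash
# Runs once, as root, at first boot of the instance.
amazon-linux-extras install -y java-openjdk11
# The instance's IAM role must allow s3:GetObject on this bucket.
aws s3 cp s3://user-manager-artifacts/user-manager.jar /home/ssm-user/
```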
Next. Here we can configure the storage of our EC2 instance, but we can go with the default settings. And I like to add a Name tag to my instances, so: server-a-ec2.
Security groups allow access on the SSH port, but we don't need that, because we'll use AWS Session Manager to access our instances rather than SSHing into them directly. We can go further, review our instance details, and click Launch. Here AWS asks us for a key pair, because it uses public-private key cryptography in order to SSH into the instance; but, as I said, we're not going to SSH into it, so we don't need the key pair. Launch instance, and it's being launched; it's going to take a moment.
So in the meantime, we can create an EC2 instance in availability zone A for the client application. I'm going to click Launch instance again, select the same AMI, Amazon Linux 2, and the same instance type. This time we want our EC2 instance to be deployed in a public subnet, so that's fine. In a public subnet we need a public IP, so I'm going to enable it. I'm going to select the client EC2 role, and we'll also need a user data script, but this time a bit different.
So now both of our instances for availability zone A are being created.
I usually go to the system log, and if the system log appears here, that means that the instance was fully configured; as we can see, it hasn't yet. After we configure the instance manually here, we could do exactly the same process for availability zone B, but we can also create an AMI image, which we would do through Instance settings.
A
No
imagine
template
create
image,
but
that
takes
a
while.
So
I
already
did
that
and
we're
just
going
to
use
our
pre-created
mics
to
launch
instances
and
in
evolution
b,
so
I'm
going
to
first
use
server
mi
to
large
server
instance
enable
psnp.
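Baking the AMI shown above can also be done from the CLI; the instance ID is a placeholder:

```shell
# Snapshot the configured instance into a reusable image. By default the
# instance is rebooted briefly so the filesystem is consistent.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name server-ami \
  --description "Server instance with runtime and application baked in"
```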
A
This
one
already
contains
the
jar
and
java.
We
just
need
to
place
it
in
the
proper
subnet.
So
this
time
it's
going
to
be
private
subnet
b
again
we
don't
need
public
ip.
We
need
irm
servers
to
immoral
and
this
time
we
don't
need
the
user
data
script,
because
we
already
have
the
runtime
environment
prepared
in
our
amazon
machine
image,
again
default
storage,
name
server,
b,
ec2.
So here we can see that our JAR file is present. I'm going to move it to the home directory of the ssm-user.
I need the URL, that's here, and we're just going to run npm install in advance, so as not to have to do it later on.
Let's go back to our instances and check whether the instances in availability zones A and B were also configured. Not yet, okay. In the meantime, we can set up our database, so we'll go back to the AWS Management Console and find RDS.
RDS is the Relational Database Service, so it's an AWS service for creating relational databases managed by AWS; we don't have to care, for instance, about the underlying infrastructure. I'll just click Create database, and here we have different engine options. Amazon Aurora is supposed to be the fastest and the cheapest one, but in this presentation I'm going to use MySQL, because it provides a free tier option.
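The console form roughly corresponds to one CLI call; the identifier, instance class, and credentials below are placeholders (db.t3.micro is a free-tier-eligible class):

```shell
# --no-publicly-accessible keeps the database reachable only from
# inside the VPC, matching the private-subnet design.
aws rds create-db-instance \
  --db-instance-identifier user-manager-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me' \
  --no-publicly-accessible
```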
For storage, we are fine with the default settings here. If we didn't use the free tier, we could enable multi-AZ deployment.
Okay, that's going to take a while, so let's go back to our EC2 instances and see if they are ready: server-b-ec2, Actions, Get system log. Oh, it's almost ready. Okay, same for client-b-ec2.
So if this time we go to /home/ssm-user, we should have the application here. Yes, we do, and we should have Node installed.
Checking the version: yes, we have it. Okay, so the EC2 instance for the client application in availability zone B is ready. Let's hope that this one also got set up. Yeah. So now we can also connect to client-b-ec2.
We'll move to /home/ssm-user and just verify that our JAR is there. It is. So our EC2 instances are prepared. Our database is probably still being prepared, but we can verify that. I don't need this tab anymore.
We can talk a bit about EC2 storage. All EC2 instances come with instance store volumes for temporary data, which is deleted whenever the instance is stopped or terminated, as well as with Elastic Block Store, which is a persistent storage volume working independently of the EC2 instance itself; that's the part where we were always going with the default settings, but we could change it. There's also a third option, Elastic File System, which is a file storage service, and this one can be shared among many instances.
So here we can look at the summary of what we've done in this stage: we configured our EC2 instances, we used AMIs to configure the instances in the second availability zone, and we configured our relational database.
So let's go back to the AWS console. This one is still creating in the meantime; that's not a problem. We're going to go to the EC2 service dashboard again and scroll a bit. Here we have the Load Balancing tab. Load balancing in AWS works with the use of target groups.
Target groups allow us to define a set of targets that are supposed to handle traffic of the same type. Then each load balancer has a set of listeners that consist of request conditions, such as, for instance: all the requests at this port, at this specific path, we'd like to forward to a specific target group. So first we're going to set up our target groups; we'll have two of them, one for the server instances and one for the client instances.
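The listener/target-group wiring described above can be sketched with the CLI; the VPC ID, instance IDs, and the `$SERVER_TG_ARN` variable are placeholders:

```shell
# Create the server target group with its health check on /users.
aws elbv2 create-target-group \
  --name server-tg \
  --protocol HTTP --port 8080 \
  --target-type instance \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /users

# Register both server instances (the ARN returned above goes here):
aws elbv2 register-targets \
  --target-group-arn "$SERVER_TG_ARN" \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb
```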
We can see that target groups can have as targets instances, IP addresses, or Lambda functions; in our case it's going to be instances. So: server-tg. Our application is going to be running on port 8080, so we'll change that. It's part of our user-manager VPC. And the health check path: our application doesn't serve the base path, so we're going to use the /users path.
Next. And here we can specify the instances which should be part of our target group, so I'm going to select the server A and server B EC2 instances, Include as pending, and I'm going to create the target group. Now we're going to do the same for the client target group: client-tg; our client application runs on port 5000.
It's supposed to be internet-facing, and it's supposed to have two listeners, one on port 8080 and another one on port 5000, for both our applications. We want it in both public-subnet-a and public-subnet-b.
We don't need any specific configuration here. We'll create a new security group for our load balancer, user-manager-lb-sg, a security group for the load balancer, and this one allows incoming traffic on ports 8080 and 5000 from everywhere; that's fine. Configure routing: here we can select an existing target group, so we're going to select server-tg, and we'll have to configure the client target group manually later on. So: register the targets of our target group, and Create.
So our backend application is a Spring Boot application, and during startup it's going to try to connect to AWS Systems Manager, which we're going to use as a config server. During startup, our application is going to try to fetch its configuration from the Parameter Store. I pre-configured some configuration properties here, such as the username, in our case admin, the password, and the URL, which I need to edit to point to our database.
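The kind of Parameter Store entries the application might read at startup, as a sketch; the parameter names follow Spring's common `/config/<application>/` convention, and both the keys and the values are assumptions, not the demo's real ones:

```shell
aws ssm put-parameter --type String \
  --name /config/user-manager/spring.datasource.username --value admin
# SecureString encrypts the value at rest with a KMS key.
aws ssm put-parameter --type SecureString \
  --name /config/user-manager/spring.datasource.password --value 'change-me'
aws ssm put-parameter --type String \
  --name /config/user-manager/spring.datasource.url \
  --value 'jdbc:mysql://<rds-endpoint>:3306/usermanager'
```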
The last part before we run our applications is that we need to make some changes to our security groups. First, in our server security group we need to edit the inbound rules and add a rule to allow traffic on port 8080 from the security group of our load balancer,
so that our load balancer can forward traffic to the server instances. The same for the client security group, but instead of port 8080 we're going to use port 5000, again from user-manager-lb-sg; save. The last thing is the user-manager-db security group: we need to edit the inbound rules and allow traffic on the MySQL port, 3306, from the server security group, so that our server instances can connect to the database.
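The three rules above can be sketched with `authorize-security-group-ingress`; the group IDs are placeholders. The key idea is `--source-group`, which scopes each rule to traffic originating from another security group rather than from a CIDR range:

```shell
# Placeholder group IDs.
LB_SG=sg-0aaaaaaaaaaaaaaaa; SERVER_SG=sg-0bbbbbbbbbbbbbbbb
CLIENT_SG=sg-0cccccccccccccccc; DB_SG=sg-0dddddddddddddddd

# Load balancer -> servers on 8080
aws ec2 authorize-security-group-ingress --group-id "$SERVER_SG" \
  --protocol tcp --port 8080 --source-group "$LB_SG"
# Load balancer -> clients on 5000
aws ec2 authorize-security-group-ingress --group-id "$CLIENT_SG" \
  --protocol tcp --port 5000 --source-group "$LB_SG"
# Servers -> database on 3306
aws ec2 authorize-security-group-ingress --group-id "$DB_SG" \
  --protocol tcp --port 3306 --source-group "$SERVER_SG"
```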
Now, the script: we'll go to /home/ssm-user, and here we have our JAR file, so I'm going to use the command. First, I just kill any process that is running and using port 8080; then I export the Spring profile as the production profile, and I run our application by executing the JAR file; and in the end I just disown the process, so that whenever we kill this terminal, the process doesn't die.
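The start sequence just described might look like this as a script; the JAR and log file names are assumptions:

```shell
#!/bin/bash
# 1. Kill whatever is already bound to port 8080 (ignore "nothing found").
fuser -k 8080/tcp || true
# 2. Select the production Spring profile.
export SPRING_PROFILES_ACTIVE=production
# 3. Run the JAR in the background, with output captured to a log.
nohup java -jar user-manager.jar > app.log 2>&1 &
# 4. Detach it from this shell so it survives the terminal closing.
disown
```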
So now we can try to start it, and we can see that there is already some traffic; this is the traffic from the load balancer, so that's expected. Now we're going to move to the client instances: client-a-ec2, Connect.
And client-b-ec2, Connect, and we'll start our front-end applications. So again: cd /home/ssm-user, we'll go inside our application directory, and here we'll need the URL of our backend server, which is equivalent to the URL of our load balancer, which we can take from the load balancer details.
So now let's go here. I prepared a simple script, deploy.sh, which basically installs all the Node dependencies and runs the React application. As a parameter we pass the address of the backend server, which is the load balancer URL with the port 8080, and we again disown the process. And the same here.
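A sketch of what that deploy.sh might contain; the environment variable name is an assumption about how the React app reads its backend address, not the demo's actual script:

```shell
#!/bin/bash
# Usage: ./deploy.sh http://<load-balancer-dns>:8080
export REACT_APP_API_URL="$1"   # backend address handed to the React app
npm install                     # install Node dependencies
nohup npm start > client.log 2>&1 &
disown                          # keep it running after the terminal closes
```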
But actually, if we just go here and try to access the URL of our backend server, we get an error response; that means that the back-end server is running. And now, if we try to access the application on port 5000, we get a 502 Bad Gateway. So that means that our application, okay, this one didn't start yet.
We need to wait a moment. In general, the load balancer should forward traffic only to healthy instances, but that only happens if at least one of the instances is already marked as healthy, and because of the delay, both of the instances are unhealthy. So right now the load balancer is in a pass-through mode, just forwarding the requests to all the targets.
Great, so we managed to save our user, as we can see if we go back to the application.
So that's it. We created an AWS infrastructure comprising four subnets in two different availability zones. We deployed our client application and a backend server. We configured a relational database with the use of the RDS AWS service. We used NAT gateways, we configured an application load balancer, and along the way we used AWS Systems Manager to access our instances.
So thank you very much. Here are the URLs of the back-end application code, the front-end application code, and the CloudFormation templates for the entire infrastructure. The CloudFormation templates were not created by me but by my colleague, but feel free to check them out if you need.

Question from the audience: What are your top tips on debugging VPC connectivity issues? Where do you find that things are usually misconfigured?
So usually it's in security groups or routing rules, I would say. And for debugging, it depends on what stage we're at: if we already have running applications and we have logging in them, we can use AWS X-Ray or CloudWatch. But in most cases, when it comes to connectivity, it's security groups or routing, or eventually network access control lists; so all the security measures we have.