From YouTube: Creating web apps as containerized services
Our goal here is to be able to work on this application with our front end, back end, and database all running in containers, so we can build out our app and then smoothly deploy to Kubernetes or Docker Swarm. Big picture, we want to make cloud native development easier, so you can focus on your app itself.
This walkthrough is intended as a hands-on lesson for beginners to cloud native development. We'll be working with the Docker Engine, and we'll be pausing frequently to talk through the commands we're using. By the end, you should feel confident getting started with your own containerized app using these components. Along the way, you should get a solid primer or refresher on container concepts like volumes, port forwarding, and networking.
If you'd like to follow along, you'll need the Docker Engine installed and running on your machine. On Windows or Mac, that will probably mean you're using Docker Desktop. It'll be helpful to have a basic understanding of the Linux command line as well, but you should be able to follow along regardless. Before we jump in, a few details about me: my name is Eric Gregory, and I'm a senior technical writer at Mirantis.
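The command demonstrated at this point isn't captured in the transcript; based on the explanation that follows, it was likely something along these lines (the network name testnet comes from the narration, and the bridge driver is what the -d flag is described as selecting):

```shell
# Create a user-defined network named testnet using the bridge driver
docker network create -d bridge testnet
```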
So what exactly is happening here? We're creating a new network named testnet, and we're using the -d (driver) argument to specify that it should use a particular model for the new network. The options here are bridge, overlay, or a custom driver option added by the user. Bridge networks allow containers within the network (all of which must be on the same Docker daemon host) to communicate with one another while isolating them from other networks.
Overlay networks allow containers within the network (which may be spread across multiple Docker daemon hosts) to communicate with one another while isolating them from other networks; this driver is used by Docker Swarm for container orchestration. Custom drivers allow for custom network rules. If we didn't create and specify a new network for our containers, they would live on the default bridge network, and when containers are on the default bridge, they can't communicate by DNS. Instead, they need to know one another's specific IP addresses to transmit data back and forth.
When we have a group of containers that need to communicate, instead of using the default bridge we can place them in their own user-defined network. While this isn't the only way to let containers communicate, it is the Docker-preferred way of doing things, since it creates a precisely scoped layer of isolation.
So now we have our user-defined network. Since we'll be running a MySQL database, we'll also need a volume for persistent storage. Our containers themselves will be super ephemeral, but we want our database configuration and app data to last beyond the lifespan of any particular container; volumes give us a way to do that. We'll see another approach in a few minutes, but in the meantime, let's create our volume.
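The commands demonstrated here aren't in the transcript; a plausible reconstruction from the narration follows (the volume name testdata and container name test-mysql are assumptions inferred from later references, while /var/lib/mysql is the standard data directory for the official MySQL image):

```shell
# Create a named volume for persistent MySQL data
docker volume create testdata

# Run MySQL detached on the testnet network, storing data in the testdata volume
docker run --name test-mysql \
  --network testnet \
  -v testdata:/var/lib/mysql \
  -d \
  -e MYSQL_ROOT_PASSWORD=oktoberfest \
  mysql
```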
All right, let's break down this command. We're using docker run to start a new container, and we're naming it test-mysql. The --network argument specifies that the container is going to use our user-defined testnet network. The -v (volume) argument says that the container will use our new volume, and associates the volume with the directory in the MySQL container where it expects to be able to save persistent data.
The -d argument means we're going to run the container in detached mode, which means the container process won't be bound to our current terminal session; we'll be able to keep working rather than just watch it run. The -e argument specifies an environment variable, in this case a root user password for the database. I'm using the password oktoberfest here. Finally, we're building from the official Docker Hub image for MySQL.
Alright, now we have our containerized database running. We have two more services to go, and both are ultimately built on Node.js. Our backend is going to be a simple Express server, while our frontend is going to use the React library. For this setup, we're going to assume that we want to be able to use code editors like VS Code on our local machine, but we want to run Node from containers. So let's set up a simple project directory on our host machine. I'm going to create an overall project directory called testdemo.
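The setup step itself isn't shown in the transcript; it presumably amounts to something like this:

```shell
# Create the overall project directory and move into it
mkdir testdemo
cd testdemo
```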
Inside the overall project directory, we'll create a new directory for the backend app. Next, we're going to initialize our projects with Node, which means we're going to use the npm package manager to create some core configurations and download packages we'll need for our app. We could do this with a version of Node running on our local machine, but we're not going to. Instead, we're going to keep things really simple, clean, and consistent by running our setup from a Node container.
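The command being described isn't captured in the transcript; a sketch consistent with the narration follows (the backend directory name api and the express and mysql2 packages are assumptions based on the services described; the mount source and target come from the explanation below):

```shell
# Start an interactive node container with the project directory bind-mounted
docker run -it \
  --mount type=bind,src="$(pwd)",dst=/usr/src/app \
  -w /usr/src/app \
  node bash

# Inside the container: initialize the backend project and install its packages
cd api
npm init -y
npm install express mysql2
```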
There are a few details we should point out here. The -it argument means we're running this container interactively, so we can start a bash session inside. We're using the --mount argument to connect the container directly to our hard drive, and we're telling it to start the mount at our present working directory (this is the overall project directory). We're also telling Docker to map that directory to /usr/src/app inside the container file system. The -w argument defines a working directory inside the container, so that's where we'll land when we actually run this.
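The index.js written at this point isn't shown in the transcript; here is a minimal sketch of an Express server that serves a JSON message (port 3001 comes from the proxy mentioned later, while the route path and message text are assumptions):

```javascript
// index.js — minimal Express server sketch
const express = require("express");
const app = express();

// Hypothetical route returning a JSON message for the frontend to fetch
app.get("/api", (req, res) => {
  res.json({ message: "Hello from the containerized backend" });
});

app.listen(3001, () => {
  console.log("Backend listening on port 3001");
});
```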
With this command, we're running a container based on the node image. Again, we're running on the testnet network and mounting the hard drive as before. We've added the --rm flag, which will automatically delete the container once it's stopped, and we're going to use Node to run index.js. Let's give it a shot.
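The exact invocation isn't in the transcript; reconstructed from the narration, it was likely something like this (the container name test-app is an assumption based on the proxy hostname mentioned later):

```shell
# Run the backend from a node container; --rm removes the container when it stops
docker run -it --rm \
  --name test-app \
  --network testnet \
  --mount type=bind,src="$(pwd)",dst=/usr/src/app \
  -w /usr/src/app/api \
  node node index.js
```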
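The change made to the React client isn't shown in the transcript; a sketch of the kind of fetch-and-display addition being described might look like this (the component structure and the /api endpoint path are assumptions):

```javascript
// App.js — fetch a JSON message from the API and render it on the front page
import { useEffect, useState } from "react";

function App() {
  const [message, setMessage] = useState("");

  useEffect(() => {
    // Requests to /api are forwarded to the backend via the proxy setting
    fetch("/api")
      .then((res) => res.json())
      .then((data) => setMessage(data.message));
  }, []);

  return <p>{message}</p>;
}

export default App;
```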
Now, we haven't changed too much here. Really, what we've added is a way for this front end to fetch a JSON message from the API and then pass it on to the front page. But there's one wrinkle: there's no API running here. This is a dedicated front-end service. To deal with that, we'll open the package.json file for the client and add a line establishing a proxy at test-app, port 3001.
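The added line would look something like this inside the client's package.json (the hostname test-app is the assumed name of the backend container):

```json
{
  "name": "client",
  "proxy": "http://test-app:3001"
}
```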
We have three containerized services, all linked up and ready to serve as a foundation for whatever you create. This is obviously just the skeleton of an app, and there are all kinds of quality-of-life improvements that we'd probably want to add, but there's one major efficiency we should definitely talk about, and that's Docker Compose. You're not going to want to have to launch all of these services independently with a bunch of unwieldy arguments every time you work on your app.
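The Compose file written here isn't captured in the transcript; a sketch consistent with the narration follows, reusing the existing network and volume as external resources (the service names, directory layout, ports, and commands are assumptions):

```yaml
version: "3"

services:
  test-mysql:
    image: mysql
    networks:
      - testnet
    volumes:
      - testdata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: oktoberfest

  test-app:
    image: node
    working_dir: /usr/src/app
    volumes:
      - ./api:/usr/src/app
    command: node index.js
    networks:
      - testnet
    ports:
      - "3001:3001"

  client:
    image: node
    working_dir: /usr/src/app
    volumes:
      - ./client:/usr/src/app
    command: npm start
    networks:
      - testnet
    ports:
      - "3000:3000"

# Reuse the network and volume we created earlier instead of generating new ones
networks:
  testnet:
    external: true

volumes:
  testdata:
    external: true
```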
Typically, the Docker Compose file is going to assume that it defines its own scope, and wherever you've defined a volume or a network, it's going to create a new one. But here, let's just use what we already have: we're bringing in our testnet network and testdata volume and then using them across our different containers. As we've been doing, we are defining all the different configuration details (our ports, the network that we're going to use, our working directories); we can define all of this in the Compose file.
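Bringing everything up then comes down to a single command (a sketch of the final check described below):

```shell
# Launch all of the services defined in the Compose file
docker compose up -d

# Confirm that all the containers are running
docker ps
```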
We see that everything is running perfectly, and that brings us to a close for today. I hope this has been a useful introduction or refresher on the concepts underlying containerized services deployment. If you'd like to play with any of the code from today, you can find it on GitHub at github.com/ericgregory/testdemo.