From YouTube: Hands-on Intro to Kubernetes (and OpenShift) for JS Developers - Jan Kleinert & Ryan Jarvinen
Description
Hands-on Intro to Kubernetes (and OpenShift) for JS Developers - Jan Kleinert & Ryan Jarvinen, Red Hat
Learn to build and deploy cloud-native Node.js applications on Kubernetes and OpenShift through a series of hands-on lab examples.
This interactive session involves using kubectl, oc, curl, and common command-line tools to interact with Kubernetes APIs. By the end of this lab, you’ll be deploying, scaling, and automating JS-based distributed solutions using containers, Kubernetes, and other popular open source tools for distributed computing.
These examples are designed to show JS developers how to maintain speed and productivity with a container-based development workflow.
Okay, I'll put another copy of the bitly URL in the slides, but if you plan on following along, get your laptop out and join us at this bitly address. My slides are at the other bitly address, k8s-interact, and this is where we'll be picking usernames. I have claimed user1 as my user ID. You are all welcome to grab your own user ID out of this list. You can mark your name if you like, so folks don't claim your particular user ID for the rest of the workshop.
You have probably heard of Red Hat before, as far as Red Hat Linux or many of the other Linux distros. We maintain CentOS Linux and Fedora Linux; CoreOS Linux is one of them. All of these distributions are attempts to help you all be productive with open source, and particularly with Linux, right? And when we give you a distro, we don't just give you the Linux kernel and say good luck, you're on your own. We give you a lot more support than that, to help ensure your productivity.
So first I have a quick survey for the folks in the room here, to get a little bit more information about who you are and your background. How many folks here have experience using containers, Docker or some other runtime? That's cool; almost everybody here looks like they're using containers or have used containers. How many folks here have experience using Kubernetes? Looks like a little over half, or more. Cool, that's encouraging. I've noticed a lot more hands going up at JavaScript events than I've seen in years past, so that's really cool.
How many of those folks... I'm kind of expecting decreasing numbers of hands with each of these. How many of you consider yourselves to be basically proficient with either the oc or kubectl command-line tools? Anyone care to raise a hand on that? Cool, cool, some brave folks; it looks like four or five folks. And how many folks feel like they can name five Kubernetes resource types or primitives?
I'm not going to call on you for it, but yeah, two or three people. Okay, not a whole lot of folks, but a couple of folks feel like they can name a couple of these things. That's really cool, all right. So, out of you folks that are remaining, how many feel like you can confidently say you have a plan for iterative web development that involves Kubernetes? Or are you still doing, like... hey, awesome, cool, nice, good to see. All right, well, I would be very curious to chat with you afterwards to see.
What's working well for you, and what's not working. Usually I hear from folks that sometimes what they really need in their local development is to be able to make a small change, reload their browser, and see that change instantly; and using Docker, or using Kubernetes, they don't always have a clear path for achieving that kind of real-time development speed on a container-based platform. So hopefully we have time to show you a little bit of that as well.
Yeah, we have a small enough room here; feel free to raise a hand at any point if you need clarification on anything that hasn't been said, or if you get stuck on any piece, definitely feel free to raise a hand. But I am going to try to keep the pace moving along during this first hour, because Kubernetes is a really deep concept to try to absorb, and it's a lot of info. I'm going to have to work expeditiously to get us through the first hour on time.
So at a high level, if you're not already familiar (it seems like half of you folks know this already, but for the other half): Kubernetes is designed to be an ops tool. Primarily, you folks, being JavaScript developers, are going to recognize it as kind of a collection of APIs for managing container-based workloads.
Some folks have kind of migrated from the OpenStack community, where they faced certain organizational challenges, and some of the Kubernetes organization is kind of an attempt to overcome the difficulties that past open efforts have had; so this focus and scope is really intentional. Kubernetes is designed not to be an all-inclusive platform-as-a-service, like you may have seen from Heroku, where you can give it a repo address and tell it what language you're working in, and get a hostname as the response, right?
On the other hand, OpenShift, which is a CNCF-certified distribution of Kubernetes, does try to include platform-as-a-service-style workflows, multi-tenant security, the container registry, metrics, logs: other things you'd kind of need to have if you were going to run Kubernetes on a bare-metal environment. We try to give you everything you need to run the whole cluster on your own hardware, without having to sign up for additional cloud services.
OpenShift has its own upstream source publicly available, and some pretty decent documentation as well, so feel free to take a look at those later. For today's workshop, all you need is a browser laptop and you should be ready to go. Hopefully you've already picked your username in the sheet; remember that, for use in this link right here. So if you haven't already clicked through to the workshop, go ahead and open that up in a second browser tab. I'm going to put that one side by side.
How many folks are stuck? Anyone need help? No? All right, cool. So everyone sees, generally, what I'm showing here: you've got two terminals at your disposal. You can also choose to use one of your own terminals from your laptop, but you would need a couple of command-line tools, oc and kubectl.
You can get those later and try to repeat all of these slides' examples using minikube; or we also have a downloadable OpenShift called CodeReady Containers. Either of those can give you an environment where you can run all of this later from your own laptop. So let's get started. I'm going to paste in a couple of variables to initialize my shell, and just make sure that everyone here is familiar with how to copy and paste with this virtual terminal.
Since we logged in via the web prompt, I'm going to run oc whoami in order to verify my user ID. I could bump this font a little bit for you. And don't run this next one, but there's an example: if you were not logged in for some reason, you can run oc login in order to generate some login credentials. This is a pretty basic feature, and it's something that's not included in Kubernetes.
By default, generally, your administrator will have a kubeconfig file that holds kind of the root credentials, and hopefully they don't give out that file. Hopefully they lock the system down; worst case, they're giving out admin credentials to everyone in the cluster. So OpenShift includes a nice login command that'll help you initialize your access to the cluster, with an appropriate level of resource controls and permissions.
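The login flow just described can be sketched as follows. The cluster URL below is a placeholder, not the workshop's actual endpoint, and the oc commands need a reachable cluster, so they stay commented:

```shell
# Check who you are, or log in if you have no session (illustrative only):
#   oc whoami                                 # prints your user ID, e.g. user1
#   oc login https://api.example.com:6443     # hypothetical API endpoint
# Either way, the resulting credentials land in a kubeconfig file:
echo "${KUBECONFIG:-$HOME/.kube/config}"
```

The same kubeconfig file is what kubectl reads, which is why oc login is enough to set up both tools.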
I'll stall a little bit and give you all some background while we have one extra person logging in. One thing that you'll not be touching today: there is a database within every Kubernetes cluster called etcd. It was developed at CoreOS, and it's been donated to the CNCF, the Cloud Native Computing Foundation.
Let's see... or I could stop it, do something like that... and the client got rate limited. Node two is already started anyway. This ought to give me a way... maybe the demos... yeah, still rate limited here, it looks like. Okay: node two is down, the cluster elected a new leader, and it is now doing replication across, and is able to keep a consistent data store across these five nodes. So this type of high availability, for all the statefulness of the whole platform, is stored
within this etcd database. If you want to know a lot more about etcd, take a look at these links here. But that's kind of sitting behind the scenes. In front of etcd, we have the Kubernetes API. That's going to kind of check all of our access control: make sure that, for writes into that data store, the correct people have write control.
If we allowed anyone to read from etcd, or anyone to write to it, then anyone could modify the state of our cluster; they've essentially got root access to our cluster if they have access to that data store. So the Kubernetes API is going to be kind of an enforcement layer that protects that etcd database. Every time we have an interaction with the Kubernetes API, I want you to keep an eye out for these five attributes; they're going to be available on almost every piece of data that we fetch from the API.
The two I want to point out, the ones I want to emphasize most critically, are spec and status. I think, if you don't remember anything else, remember that Kubernetes provides an API that's asynchronous, and the two attributes you're going to be focused on most closely are going to be setting the spec and then reading from the status. And so, as I said, Kubernetes always gives you two responses. It'll tell you: well, here's what you asked for; you told me you wanted five containers, you said...
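The spec-versus-status split described here can be illustrated with a made-up API response; the replica numbers are invented for the example:

```shell
# Hypothetical Kubernetes API response: spec is what you asked for,
# status is what the cluster has actually achieved so far.
cat > response.json <<'EOF'
{
  "kind": "Deployment",
  "spec": { "replicas": 5 },
  "status": { "replicas": 2 }
}
EOF
python3 -c "import json; d = json.load(open('response.json')); \
print('requested:', d['spec']['replicas'], 'actual:', d['status']['replicas'])"
```

Because the API is asynchronous, the two numbers can legitimately differ until the cluster converges on the spec.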
Then it'll give you a realistic answer about the state of the platform, both in terms of what you requested and what the actual state is; and so that's going to be the spec and status fields. For a full reference, check out this big link at the bottom to the Kubernetes 1.17 APIs. For today, we're going to focus a little more tightly down on these five basic API resources. So the first one that we're going to look into is called a node.
So everyone here at the Node+JS Interactive event knows exactly what I mean when I am talking about nodes, right? This is kind of one of the difficulties I find with talking to folks, especially JavaScript folks, about Kubernetes: there's a lot of terminology overlap, and this is a prime example right here. In Kubernetes terminology, a node is a host machine, physical or virtual, where your containerized processes are run. So just keep in mind, when you're talking to Kubernetes folks, they may be talking about nodes in a slightly different way.
Node activity is managed by one or more master instances. I'm going to try running this command right here and see what I get. Let's all try this out and see: oh, forbidden. That's exactly what we should see. I'm going to run an oc login really quickly and log in as an administrator here.
And run the same command, and now I can see the list of nodes; it looks like for this particular cluster we've got 19 nodes. So, since I'm logged in as an administrator, I can run the query: I can list nodes on the API using this command-line tool, kubectl get nodes. And apparently average users do not have access to retrieve that data from the API.
So, hopefully, you've learned there's a data store, not everyone gets access to it, and kubectl get nodes is a way to list resources by type. So here are my observations; basically, does everyone agree with this list of observations from this initial section? I know we've only run one command, but any questions about this first part? No? Perfect, that's what I hoped. All right: your JS runs on nodes.
Kubernetes is going to actively manage processes (we'll see that in the next section), and we're trying to run on a large, cluster-scale system where, if individual nodes fail, or individual processes across it fail, we always have sufficient capacity to route around these problems and have a highly available solution exposed to our users.
So, next section: pods. Here is a quote from one of my team members, Steve Pousty; he used to say pods scale together and they fail together. This is one thing I like to kind of think through in my mind when I'm trying to architect my solutions in Kubernetes. I like to think of Kubernetes, in a way, as kind of a modeling language for my solutions, and one of the most fundamental units, other than a node (which I kind of gave you a brief look at), is a pod. A pod is the first resource
we're really going to look deeply into. So a pod, in Kubernetes terms, is a group of one or more co-located containers. The folks at Google, when they were scheduling containers across their cluster, often found that sometimes they would need to schedule not just one container; they'd need a sidecar of some sort attached to a container, and if the sidecar ever failed, they'd want to make sure to reboot both processes as a group, right? So this is kind of multi-process, but all co-located.
So, one example. I try to get folks to volunteer: hey, where would you want to have two things run together? And I usually try to trick someone into offering WordPress as an example of: here's where you would have a front end and a database. You know, WordPress has got PHP and it's got MySQL, and you want to run them together, right?
That's actually not a good example for tying two containers together in a pod, and the reason why is just this quote right here: pods will scale together and they'll fail together. So if I wanted to scale up my front end, my PHP instances, I don't want to add a database with every web instance that I add, right? I want to be able to scale those two tiers independently, and since they need to be scaled independently, they cannot be grouped together in a pod.
So, let's try to run a basic query. This one, I swear, you will be able to execute successfully. Unfortunately, it'll return an empty result, because you have not provisioned any pods yet. So let's take a look at what a basic pod spec would look like. I have up here on the screen, hopefully where you can see it, the result of this curl statement, and inside I have the five attributes that I told you would be there. There's a kind of data; all this data is internally typed and versioned; there's an API version.
There's a metadata section. In this section particularly, we can see it hasn't been created yet, so the timestamp isn't there at all. It has an ID, or a name, that will need to be unique within this namespace, and then there are some labels; we'll learn more about labels in the next section. And then, like I said, there's a spec, and currently we don't have a status. That's because we haven't created this yet, and Kubernetes will start filling in the status as it makes progress towards achieving the spec that we've requested.
Does that make sense? So I'm going to run a command to basically provision this container of Jan's (thank you, Jan). We're going to provision the nodejs int-workshop image from Docker Hub, so feel free to follow along and copy and paste this: kubectl, or "kube cuddle," depending on how you like to pronounce it. Run kubectl create -f and paste that file in, and that should essentially tell the API that you want to load that JSON and you would like to provision a new pod. Any questions about that piece? Kubernetes is an API.
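A minimal pod manifest along the lines walked through above would look something like this. The image name is a placeholder (the workshop uses Jan's image from Docker Hub), and the kubectl call needs a cluster, so it stays commented; the JSON can still be validated locally:

```shell
# Minimal pod manifest: kind, apiVersion, metadata (name + labels), spec.
# The status field is absent because the pod has not been created yet.
cat > pod.json <<'EOF'
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "hello-k8s",
    "labels": { "run": "hello-k8s" }
  },
  "spec": {
    "containers": [
      { "name": "hello-k8s", "image": "docker.io/example/nodejs-workshop" }
    ]
  }
}
EOF
python3 -m json.tool pod.json > /dev/null && echo "pod.json is valid JSON"
# against a cluster: kubectl create -f pod.json
```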
You can manipulate these API endpoints to do work on the cluster. So congratulations: if you hadn't before, you have now provisioned your first pod. If you wanted to access the API using curl (not super advisable, but here's just an example, feel free to copy and paste if you're interested), this shows how you would do that same listing of data by type, just using a raw request. And if you look into the path here, you can see api/v1; v1 was in our spec.
Let's see, find it in here: apiVersion v1. So that's also encoded here in the API path, and this path is actually going to be almost identical to the path we would see if we were able to access etcd; the etcd storage path looks almost identical to this. The Kubernetes API is really just doing kind of enforcement and access control on top of the etcd API.
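The REST path pattern being described can be sketched like this. The API server address and namespace are placeholders, and the curl call needs a cluster plus a bearer token, so it stays commented:

```shell
# Build the list-pods URL for a namespace; this path shape mirrors the
# way the same objects are keyed inside etcd.
APISERVER="https://api.example.com:6443"
NAMESPACE="user1"
echo "$APISERVER/api/v1/namespaces/$NAMESPACE/pods"
# curl -sk -H "Authorization: Bearer $TOKEN" \
#   "$APISERVER/api/v1/namespaces/$NAMESPACE/pods"
```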
So let's go a little bit deeper. Instead of fetching all pods, or all resources by type, let's try to fetch an individual resource by type and ID. You can either do type, space, ID, or type/ID; either format works fine, and we can output the result as JSON. Here's how I could do that with curl, and the same thing with the command line: I did get pod to fetch the resource of type pod with the following name, hello-k8s.
It still has a spec field as well. The spec field has also grown quite a bit: we've added in some resource limits, some default resource limits here. There's now a creation timestamp that's been populated; quite a bit more data in there. So Kubernetes will do a lot of work for you automatically, but it's also really nice to have a clear starting point that you can hand off to other users on your team.
As the folks attending this section, I would expect you will need to do a lot of work to serve up these JSON or YAML files to your team members, so they don't need to learn what a pod is, or what a deployment is. A lot of this you almost want to hide as much as possible, and OpenShift gives you some nice ways of providing that.
All right, let's see. One other thing you can try is the kubectl describe command. This is meant to be kind of a more human-readable output, assuming humans like tab-separated responses, but yeah: kubectl describe is another kind of verb you can use in addition to get. So, getting by type, getting by type and ID; you can also describe instead of get, in order to get a slightly differently formatted output that's probably a little bit more human-readable. Observations from this section:
API resources provide a declarative specification and asynchronous fulfillment; we learned about spec and status. Since there's only one process per container, it's very easy for Kubernetes to judge whether that single process has failed or not, and then restart the container as a result. Pods are scheduled to be run on nodes; we can actually see that if we look in the JSON. I think there is a... where does it get set?
Welcome to pod town; you now know what pods are. All right: services. Services, abbreviated SVC, give you a single endpoint for a collection of replicated pods. I think this is a confusing term coming from the web world: I think of a service as a web service, like that's my Apache server or something, usually. But this is more a service from a network-endpoint perspective. It's a single identifier for a group of web services, and we can generate one using the kubectl expose command.
So I'm going to run that off real quickly and then take a look at the result. So I've generated a new one; we can see there's an apiVersion, a kind of data, a metadata field. We have a spec and a status, all the things that I said we would find. The spec's selector field happens to have something that says run: hello-k8s. This will come in a little bit later, but this is actually going to be running a query selector against the API, searching for these two labels:
a key of run and a value of hello-k8s, and this load balancer will forward traffic to anything that matches that query: any pods that match that query. So we'll see a little bit more about that in a second, but first I want to show you another nice feature of these services. Any time you create a service in Kubernetes,
kube-dns will automatically start providing name-server resolution for this value. So we can now do curl hello-k8s within our individual namespace, and hopefully you'll see a response from the container that you provisioned. Everyone able to see that? Raise your hand if you don't... all right. Aha, we caught him. Everyone saw it, hopefully. All right, cool, congratulations! Hopefully that worked for you. Another nice tip:
if you wanted to slice specific values out of the JSON response, you can use this get with a resource type and ID, and then, instead of -o json, use -o jsonpath to select out a particular field. This particular field is the nodePort value. If I wanted to try to access this container from outside the cluster, I could try hitting an address like this. Unfortunately, this is still an internal IP for Amazon, but if I had an external IP, I ought to be able to curl
this high-numbered port on any node in the system, and it'll get forwarded to the right service internally. It still doesn't give you full domain-name servicing; you'd still probably need load balancers in front of that. But that's your shortest route to getting traffic into a cluster from the outside: this NodePort service gives you an easy way to access these services on a high-numbered port from outside the cluster. Communication inside the cluster is super easy, as we have just proven.
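The jsonpath extraction mentioned above can be sketched locally against a saved service manifest; the port numbers here are invented for the example:

```shell
# Pull the nodePort out of a service manifest, the way -o jsonpath would.
cat > svc.json <<'EOF'
{ "kind": "Service", "apiVersion": "v1",
  "spec": { "ports": [ { "port": 8080, "nodePort": 30123 } ] } }
EOF
python3 -c "import json; \
print(json.load(open('svc.json'))['spec']['ports'][0]['nodePort'])"
# cluster equivalent:
#   kubectl get svc hello-k8s -o jsonpath='{.spec.ports[0].nodePort}'
```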
We can see I currently have one pod running, serving those requests, and you can see in this command I'm running get pods -l. This is a new type of query: we're querying for resources by type, doing get pods, but we don't want all pods. We want only the pods that match this particular label selector; that's what -l is, label selector. So we want to find all resources by type, assuming they match this key and value in their labels.
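The label-selector match being described can be simulated locally against a saved pod list; the pod names below are made up:

```shell
# Filter a pod list by label, the way `kubectl get pods -l run=hello-k8s`
# does on the server side.
cat > pods.json <<'EOF'
{ "items": [
  { "metadata": { "name": "hello-k8s-1", "labels": { "run": "hello-k8s" } } },
  { "metadata": { "name": "other-app",   "labels": { "run": "other" } } }
] }
EOF
python3 - <<'PY'
import json
for item in json.load(open('pods.json'))['items']:
    if item['metadata']['labels'].get('run') == 'hello-k8s':
        print(item['metadata']['name'])
PY
```

Only the pod carrying run=hello-k8s is printed; the other is filtered out, which is exactly how the service finds its backends.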
Our service and our pods happen to have that match, and that's how it does the mapping from the service to those pods. So if we delete all of the pods that the service is routing traffic to, that should cause this to fail: even though the service still exists, the service is no longer able to pass the traffic on to the pods. The only thing I'm trying to prove here is that the services and the pods can exist independently. You can have a service that doesn't have any pods associated with it at all.
You can also have... there's a type of service called a headless service. I don't know if I agree with the name, but a headless service is something that shows up within the cluster, with local kube-dns resolution, but the service is actually pointing back outside the cluster to a legacy data store, right: a big Oracle database or something. So your microservices within the cluster still have discoverability, as long as you're creating this service abstraction for them to have something to resolve against.
So with this, hopefully, we have deleted our pods and deleted our services and gotten back to a clean state. Any questions from this section? No, nothing? You are a quiet group; I should have brought coffee for you all. All right: service basically means load balancer, hopefully that's clear, and label selectors can be used to organize workloads.
Okay, well, I'm going to try to power through, and we'll swap to the OpenShift piece then. Okay, so we still have a lot to cover, and no one's asking any questions yet, so I'm going to try to pick up the speed; we'll see if I lose any of you in this next section. All right: a deployment. Now that you have all created pods... never, ever do that again. This is like math class, where it's like:
oh, now I've introduced algebra, and now you don't have to do long division; here's a calculator. Deployments just generally solve a lot of the stuff that we just did with pods. Pods were an earlier abstraction, and good to learn because they're your fundamental unit of scale, but deployments are how you scale up a collection of pods. So this is a much more useful abstraction; let's dig into deployments and learn how to really get work done.
So this is going to help you specify container runtime requirements in terms of pods, now that we know what a pod is. We could have a shorter command here; we could just run the top half of this in order to deploy Jan's image that we previously had deployed in that pod specification. But I'm going to add an extra line, these extra flags: --dry-run and -o json. What do those two flags allow me to do? Dry run says: instead of immediately provisioning...
The reason why I like showing this extra step is that it gives you a clear way of generating your own deployment spec, and then you can hand that off to other developers, or you can put it in a Helm chart; you have a way of reproducing this and modifying it, changing the labels, changing the resource allocation.
You have, hopefully, a starting point that you can continue iterating on, and something you can give to junior developers where you don't have to really explain what a deployment is, or how to use kubectl in an advanced way. Hopefully they can kubectl create and then get back to developing. So that's why I have the second half here. So let's all create a deployment JSON file, so we have something we can share with other users.
This is actually showing me some deprecation warnings; good to know that there are changes upcoming in the API that I might want to know about. I tried using this generator, run-pod/v1, and this is actually, if we wanted to make that pod.json that we had earlier: adding in that generator flag here essentially gives us exactly the pod JSON that we started with earlier, in case you wanted to generate that.
It looks like the deprecation warning we were just shown is actually giving us advice on this new feature that's newly been added, and now we have a clean way of generating pod specs as well. The last time I did this workshop, that was not available, so new stuff coming down the pipe as we work. Let's also take a look at our deployment JSON: does it have the five attributes that I mentioned?
It has a kind of data, a version of data (and this I may need to update, if it's being deprecated soon). It has some label selectors; remember how we did that, a selector that was set to run=hello-k8s? This label selector is going to say our deployment should match that label: any time someone does a label-based query with resource type equals deployment, this is going to match on those key-value labels. There's a spec that says what our current replication level is, and there's a template that is basically an embedded pod spec.
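A skeleton of the deployment JSON being walked through, with the selector, the replica count, and the embedded pod template, might look like this. The image name is a placeholder, and the kubectl call needs a cluster, so it stays commented:

```shell
# Trimmed deployment manifest: spec.selector picks pods by label, and
# spec.template is the embedded pod spec those replicas are built from.
cat > deployment.json <<'EOF'
{
  "kind": "Deployment",
  "apiVersion": "apps/v1",
  "metadata": { "name": "hello-k8s", "labels": { "run": "hello-k8s" } },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "run": "hello-k8s" } },
    "template": {
      "metadata": { "labels": { "run": "hello-k8s" } },
      "spec": {
        "containers": [
          { "name": "hello-k8s", "image": "docker.io/example/nodejs-workshop" }
        ]
      }
    }
  }
}
EOF
python3 -m json.tool deployment.json > /dev/null && echo "deployment.json ok"
# against a cluster: kubectl create -f deployment.json
```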
We're going to run this kubectl expose command in order to make a service, just like we did before, except this time we're exposing a deployment instead of a pod, and we're adding on this dry-run flag in order to create a service.json. This is just to show you that Kubernetes is like a modeling language, and you use these JSON files in order to kind of model the topology of your microservices solution.
So you may end up having collections of these in a repo, in which case you could do something like kubectl create -f and then a directory folder: everything from staging/*, you know, all the YAML files in that folder, and it lets you launch them all. You can give it a path as well as a file name. Any questions about that piece?
No? I'm going to create the service, and we're going to see what we get as a result of this query. I'm running kubectl get po,svc,deploy: po, which is short for pod; svc, short for service; deploy, short for deployment. This is listing multiple resources by type using the command line; nice to know that you can easily do that as well. And now that we have a pod, a deployment, and a service, I should be able to run curl and verify that we have access to the container.
Cool. Next step: let's scale up that container and see if we can demo some of the high-availability features of this cluster. I can use the kubectl scale command on the resource of type deploy whose name is hello-k8s, and I want to update the spec with a new replica value and set the replication to three.
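The scale step just described edits one field, .spec.replicas, on the deployment. On a cluster that's `kubectl scale deploy hello-k8s --replicas=3`; here is a local sketch of the same spec change against a saved, made-up manifest:

```shell
# Minimal stand-in for the deployment object stored in etcd.
cat > deploy.json <<'EOF'
{ "kind": "Deployment", "spec": { "replicas": 1 } }
EOF
python3 - <<'PY'
import json
d = json.load(open('deploy.json'))
d['spec']['replicas'] = 3            # the same field kubectl scale updates
json.dump(d, open('deploy.json', 'w'))
print('replicas now', d['spec']['replicas'])
PY
```

The controller then works asynchronously to make status.replicas catch up with the new spec.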
Let's list all pods, that is, list all resources by type where type equals pod, and it looks like I now have three containers up and running. Hopefully you have three up and running; if not, it may still be working towards that goal, and if it's not there, hopefully it'll give you the truth about how much progress has been made. Here's another nice trick that will use whatever your default editor is. So, we saw kubectl get: getting resources all by type, getting by type and ID.
What about kubectl edit? What do you imagine that does? It looks like I've opened this file in an editor. I'm going to find the replicas line and I'm going to edit this to five; feel free to follow along if you dare. It looks like vi is the default editor, so I went over to this line and hit s for substitute, and then five, and now I'm going to hit Escape, then :wq, for folks that aren't used to vi.
And what do you imagine this will do? Am I going to write this file locally? Where is this going to go? This actually will send the file back across the network and save it back to our etcd database, through the Kubernetes API, and now, if I get pods, I have five pods up and running. So, not that I would recommend live-editing these resources in the API, but if you're learning, this may be a great way to tweak certain values.
If you're just scaling up a web service, you probably want to use the scale command instead of kubectl edit, right? But this command's kind of medium-smart: if I open this up and write it out without any changes, it'll notice you're not actually shipping any changes to the API, and it'll give me feedback to that effect. So it does a decent job of allowing you to quickly edit things while staying out of your way, and I think if you customize the EDITOR variable, you can use something other than vi as your default.
So, cool, we have all scaled up. I'm going to run this get pods watch in this lower shell down here, and I am not going to background it; I had this ampersand on the end, but I'm just going to leave it running in the foreground. So that's going to continually keep an eye on my number of pods and leave the connection open. This is like a streaming connection, JavaScript folks.
As a JavaScript person, I am always really excited when I see fully asynchronous APIs, but then, in addition to that, streaming APIs, where I can continually get a streaming response as the updates come in. So it's huge, in my opinion, that the API supports this type of watch functionality, and that there's a nice way of accessing it from the command line as well. So, this embedded query here:
this is going to do basically a fetch from the API to get a series of pod names, three random pod names that are space-separated, so that I can run kubectl delete pod and delete three resources by ID. That's basically what this complicated command is doing, so feel free to copy and paste, and what this ought to do is kind of like a shotgun blast of damage across your cluster, taking out three random containers out of your group of five.
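The shape of that "shotgun blast" command is roughly the following. The exact slide command isn't reproduced in the transcript, so the inner pod-name query is hedged as a comment and a stand-in list is used; the names are made up:

```shell
# Stand-in for: kubectl get pods -o jsonpath='{.items[*].metadata.name}' \
#               | cut -d' ' -f1-3
PODS="hello-k8s-aaa hello-k8s-bbb hello-k8s-ccc"
for p in $PODS; do
  echo "deleting pod/$p"        # on a cluster: kubectl delete pod "$p"
done
```

The deployment's controller immediately notices the three missing pods and schedules replacements, which is what the watch window shows next.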
So, let's see what happens when we sustain some damage. It looks like, right away in our watch down below, the Kubernetes API has recognized that these containers went missing. It was our fault, of course, here, but this could easily have been one node out of our cluster suddenly going offline, with all of these processes suddenly unaccounted for. The API is going to recognize that a node has gone offline.
It's
going
to
flag
these
containers
as
being
down,
and
it's
going
to
provision
new
containers
on
other
available
nodes
in
order
to
get
us
back
up
to
our
expected
allocation,
which
was
five
running
pods,
so
hopefully
you're
back
up
to
full
health.
At
this
point,
another
thing
you
could
do
is
get
deploy,
oops
hold
it
wrong,
get
deploy,
and
that
ought
to
show
you
how
many
are
ready
up
to
date
available.
Even if you artificially knock it out of alignment, it'll keep at it. It's like your thermostat: you set it to 72 degrees, and even if you leave the refrigerator door open and the window open, it's going to keep trying to heat the house, or cool it, depending on what the temperature is outside. So yeah, it'll keep working to achieve your goal and give you an honest answer in that status field. So, observations from this section: the dry-run flag will help you generate a new resource specification.
kubectl run will create the file and then POST the file to the API; it'll do two steps in one, right? Create it, post it to the API, and not even write it to disk. If you want an intermediary step, where you create it, write it to disk, and don't touch the API, then you have a file that you can share. That's why I added the extra flags. But you can skip a step with just run, instead of running create.
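As a sketch, the two workflows look like this. The image name is a placeholder, not the lab's actual image, and on current kubectl the flag is spelled --dry-run=client:

```shell
# One step: create the resource and POST it straight to the API.
kubectl run hello-k8s --image=<your-image>

# Two steps: write the spec to disk first (no API call), so you can
# edit and share the file, then POST it yourself.
kubectl run hello-k8s --image=<your-image> \
  --dry-run=client -o yaml > deployment.yaml
kubectl create -f deployment.yaml
```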
This is also just one step, but based on the file you have, so it gives you an opportunity to edit the file: change the labels, change the default replica count, so that when they do the create, the default is five replicas, right? So I like having that extra step: when I share things, I have something more customized that I share. So it depends on what you need.
What I do on the command line is this; but if I want to share it, dry run. Yeah, good question, thank you. So, last section, and then I will hand it over to Jan for the OpenShift pieces: ReplicaSets. A ReplicaSet provides replication and lifecycle management for a specific image release. Does anyone remember what my title was on that last section? Deployment helps you... it's almost the exact same thing: replication and lifecycle management for a specific image release. Let's see how it's different from Deployments, because this sounds very similar on the surface.
So let's take a look at the current state of our deployment. Hopefully you were all able to fetch data: you were able to fetch it when you had a single replica, and you scaled up to five. We had some damage, but we recovered, and it still looks healthy. I was watching the pods in this lower terminal; I'm going to hit Ctrl+C and break out of that foreground process.
This command is basically going to edit... it's going to pull down the Deployment resource from the API, open up the file, find the spec and the pod within the deployment spec, and then look at the identifier of the image within the container, within the pod, within the deployment; it's all nested in that JSON. And it's going to update the image value and set a new tag: now we're adding :v1 on our container. So this is going to roll us forward to a new deployment.
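A hedged sketch of that roll-forward, using kubectl set image to update the nested image field; the deployment, container, and image names are assumptions based on the demo:

```shell
# Update .spec.template.spec.containers[].image on the deployment,
# which kicks off a new rolling deployment to the :v1 tag.
kubectl set image deployment/hello-k8s hello-k8s=ryanj/hello-k8s:v1

# Follow the rollout until it completes.
kubectl rollout status deployment/hello-k8s
```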
We can kubectl get rs to look at the ReplicaSets, and it looks like there is currently some action going on. I'm going to run this; we can already see the new value, and if I get replicasets, it looks like I'm fully rolled forward from this one to this one, whatever that means. Let's see if we can find out some more info: kubectl get pods.
If we look at the names of the pods, you can see hello-k8s: this is named after the name of the Deployment. Then there's this middle identifier, which is an identifier for the replication controller, and then this is a random ID for the individual pod. So all of these from the old replication controller are terminating (oops, scrolled up too far), and the new replication controller's pods are running, and we have our new response, "good morning". For the classroom: was everyone able to roll forward? No problems anywhere? Perfect, excellent. Let's try.
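That three-part pod name can be picked apart with plain shell parameter expansion; the hash and suffix below are made up for illustration:

```shell
# <deployment>-<replicaset hash>-<random pod id>
POD=hello-k8s-7d4b9c8f6d-x2x9q

# Strip the last dash-separated field: the owning ReplicaSet's name.
echo "${POD%-*}"      # hello-k8s-7d4b9c8f6d

# Strip the last two fields: the Deployment's name.
echo "${POD%-*-*}"    # hello-k8s
```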
Let's watch as this changes. Oh, that should be 8080... there we go, yeah. Hopefully that was a typo on my part. As long as these applications are stateless web apps, and if you're storing your session information in a distributed cache like memcached or Redis, then you ought to be able to do zero-downtime rolling deployments, if you are reasonably stateless in your architecture.
So Kubernetes is great for high availability of your web resources, and zero-downtime rollouts and rollbacks, which sometimes is a whole lot of stuff that developers aren't always concerned with. So I'm going to do a cleanup. Let's do kubectl delete service, comma, deployment: we're going to delete two resources by type, as long as they have the same ID. Does that make sense?
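The cleanup, as a one-line sketch; the shared resource name is an assumption from the demo:

```shell
# Delete two resource types that share the same ID in a single call.
kubectl delete service,deployment hello-k8s
```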
Basically, ReplicaSets are: if I have that initial image that's :latest, and I have five replicas, and I want to roll forward to a v1 tag, the Deployment will create a new ReplicaSet, and it will start scaling up the pods on this new ReplicaSet with the v1 image. And since we requested a spec of five, the Deployment is going to try to keep us at a spec of five, even though it's doing this rolling deployment from ReplicaSet 1 to ReplicaSet 2.
A
So
as
it
scales,
replica
v1
up
it'll
scale,
this
one
down
and
try
to
roll
us
across
and
keep
us
at
even
five
containers
as
it
does
the
rolling
deployment,
and
so
the
deployment
resource
under
the
hood
is
actually
using
a
replica
resource
to
manage
the
pods
right.
So
the
kubernetes
api
has
higher
order
resources
that
leverage
lower-level
resources
in
order
to
do
automated
and
a
deployment
is
a
higher
order
resource
that
takes
advantage
of
replica
sets
primarily,
which
then
in
turn
take
advantage
of
pods.
A
So
it's
all
kind
of
stacked,
like
a
Russian
doll
and
best
thing
I
can
recommend,
is
use
deployments
when
possible,
because
that
already
takes
advantage
of
all
the
lower-level
pieces
and
then
that'll
keep
things
nice
and
simple
for
you.
But
hopefully
you
understand
that
this
is
like
a
modeling
language
with
building
blocks
and
the
more
you
learn
about
it.
A
The
more
you
learn
how
to
architect
your
solutions,
and
then
you
have
a
giant
pile
of
Yama,
land
or
JSON
that,
hopefully
you
can
share
with
junior
developers
to
give
them
a
clear
starting
point
and
to
make
things
easier
for
them.
So
I'm
gonna
do
a
check
in
on
folks
now
that
we're
through
the
first
half,
how
many
folks
have
experience
using
containers,
I
already
asked
this
one,
and
it
was
a
hundred
percent
right.
How
many
folks
can
say
they
have
experience
using
kubernetes,
a
hundred
percent?
How
many
feel
like
you're?
A
Maybe
basically
proficient
with
coop
cuttle
I
think,
hopefully,
you've
done
enough
command
line
interactions.
You
can
list
resources
by
type
and
grab
them
by
ID
edit
them.
If
you
need
to
how
many
people
feel
like
they
can
name
five
basic
kubernetes
primitives,
anyone
feel
like
they
can't
I'll
single
you
out.
I
saw
it
now
have
to
make
threats
at
folks.
Alright.
A
So,
hopefully
you
are
all
ready
to
see
what
OpenShift
adds
on
top
of
kubernetes,
we
saw
a
lot
of
low-level
ops,
focused
use
cases
which
are
great
to
know
if
you're
trying
to
replicate
something
as
production
quality
as
possible
nailing.
These
JSON
templates
allows
you
to
reproduce
things
really
easily,
but
allowing
for
that
real
time.
Iterative
web
development
is
super
important
and
being
able
to
not
overwhelm
junior
developers
with
terminology,
especially
when
you're
trying
to
tell
them
that
a
node
is
something
different
or
a
service
is
something
they're
not
used
to.
A
A
B
B
B
All right, okay. So for the rest of the workshop, we're going to be focusing in just this one window. You have this panel here on the left-hand side where it says Workshop Overview; that's going to be your instructions from now on. Go ahead and click that blue Continue button, and I'll explain what we're doing here. We're still going to be working in this web terminal (let me hit Ctrl+C down here), but this has some click-and-run commands. So, this oc help:
oc does everything kubectl can do, but additionally has some of the features of OpenShift. It's OpenShift's command-line tool: it does everything that kubectl can do, but also some other things that we'll see in a moment, and you can get the help for that command right there. So hopefully that worked for you. We're going to be using a project.
You've already been working in a project, this user1 project that you're in. Projects are somewhat analogous to namespaces in Kubernetes, but a project is an OpenShift construct that also ties role-based access control to your namespace. So you, as user-whatever-number, have access to the user-whatever project, but you don't have access to my project, and you don't have access to all the projects like the admin of the cluster does.
So you're working in a single project right now. If you run this oc project command (you can type these, or you can just click the button to execute them), it should tell you what project you're in. I think I've done it... here we go. I have to do this at arm's length; I do this on treadmills all the time, where you lean forward and then it turns off. So if you click that, you should see your own project come back. Now, we haven't looked at the OpenShift web console
B
Yet
we're
going
to
do
that
now.
So
there's
a
link
here
you
can
click.
You
also
can
just
simply
click
on
the
word
console
up
here
and
that's
gonna
drop.
You
in
you
might
have
to
login
you
probably
already
logged
in,
but
so
what
this
is
and
we've
got
tiny
resolution
here.
So
there
we
go
well
pull
it
over,
so
you
can
see
the
menu.
So
this
is
the
OpenShift
web
console.
B
If
you
don't
want
to
use
the
command
line
ever
or
sometimes
you
can
use
the
web
console
to
get
a
lot
of
the
same
things
done
by
default.
That's
gonna
drop
you
into
this
administrator
view
and
you
can
tell
that
by
this
toggle
up
here.
So
this
is
kind
of
like
the
default
view.
If
you
need
to
do
more
ops
related
things
in
the
cluster,
there
is
also
a
developer
view
and
I
didn't
click.
B
B
We're going to use some of the features of the platform to do that for us: Source-to-Image, as I mentioned. This is an open source project; it's included with OpenShift, but you can also use it outside of OpenShift; it's available for use. What it does, essentially, is: you give it source code, like a git URL, a GitHub URL, and you can either tell it what kind of code it is...
B
So
that's
what
we're
going
to
do
now,
so,
if
you're
willing
to
and
if
you
have
a
github
account
the
best
way
to
do
this
would
be
to
fork
this
repository
of
Ryan's,
because
that
way
you
can
set
up
actually
what's
our
timing
or
see,
if
we
even
have
time
to
do
the
web
hooks
my
phone's
over
there
all
right,
you
can
try
it.
So
if
you,
if
you
want
to
go
ahead
and
fork
this.
B
B
B
There we go; that's a little better like that. So it's going to give you some options here of different ways that you can deploy things. We're going to use From Git, but just to walk you through what else there is: you can deploy an image, which is what we were doing on the command line before. You can deploy from a catalog; this is going to give you a catalog of things on the cluster that are available, that you can use to build off of. I'll
B
Just
show
you
really
quickly
so
in
the
developer,
catalog
you'll
see
things
like
languages
and
runtimes.
So
if
you've
got
something,
that's
PHP
or
whatever,
this
gives
you
a
starting
point
to
build
from
there's,
also
databases
you
can
deploy
CI
CD
solutions,
Jenkins,
whatever
you
want,
you
can
deploy
all
of
those
from
that
catalog.
Let's
go
back
over
here,
though,
you
can
deploy
from
a
docker
file.
So
if
you
just
got
a
docker
file
out
there
somewhere,
you
can
deploy
from
that.
You
can
drop
in
yeah,
Malheur
JSON.
B
So,
like
the
deployment
dot
JSON
file,
that
Ryan
was
creating.
When
you
did
the
dry
run
before
you
could
just
drop
it
in
inches
and
just
say:
click
it
paste
it
you're
done
or
databases
which
again
maps
back
to
the
databases
we
were
looking
at
in
the
catalog
before,
but
it's
just
a
easier
view
into
that.
So
all
of
that
to
say
click
from
get
you're
going
to
put
your
your
fork
here.
I
think
I
still
have
mindless.
Let's
find
out.
I'll
use
my
own.
B
B
B
B
Yeah, so hopefully we'll get to that in the next step; I think we'll have time. So drop your git repo URL there, scroll down, and click Node.js. When we're doing it in the web interface here, it allows you to explicitly select which builder image you're using. You can do this from the command line, too, with the oc new-app command, and in that case you can just give it that git URL, and you don't even need to tell it it's Node.js; it'll figure it out.
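The command-line equivalent, sketched with a placeholder URL for your fork; oc new-app auto-detects the Node.js builder from the repo contents:

```shell
# Build and deploy straight from a git repo; S2I picks a builder image.
oc new-app https://github.com/<your-user>/<your-fork>

# The web console creates a route for you automatically; on the CLI,
# exposing the service is an extra step.
oc expose service/<your-fork>
```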
You can select what version of Node you want to use; I'm just going to leave it at 10 by default. And then it's giving you these options here to create an application name. This is really just creating some labels on your deployment, and you'll see what that means in a minute, but it's allowing you to have, like, an application grouping.
It's just a logical grouping of components in an application, to make it kind of easy to see and manage, but it's using standard Kubernetes naming labels to do that. And then there's the name for your deployment, which we'll just call http-base; that's fine by default. Hopefully you can see this in the Advanced Options: by default, when you create something this way in the web console, it's going to create a route for you.
We didn't really get into routes before. Routes are an OpenShift construct; they're like an additional benefit and feature that OpenShift adds. We talked before about how the services that you create were accessible with kube-dns inside the cluster, but, unless you use the crazy NodePort thing, not accessible from outside the cluster. Not saying it was crazy, but, you know, it's not how you normally would do it, yeah.
That way I can keep an eye on the time. All right. So what you see now is that Topology view we talked about before. This light gray circle around it: that's our application grouping; that application name was http-base-app. If we had more than one app or component in this application grouping, they'd all show up in this little bubble here. We only have one, because this is pretty simple. You've got these decorators here: this one that looks like a little circle thing, that's the status of your build.
B
That
that
helps
alright.
There
we
go
so
our
build
is
running
right
now
and
that's
running
that
that
build
to
image,
I'm,
sorry
source
to
image
process
that
we
talked
about
I'll
come
back
over
here,
so
we
can
see
it
as
it
completes
it.
Looked
like
it
was
almost
done.
As
that
completes
you.
Whilst
you'll
see
this
turn
to
a
green
check
mark
and
then
so,
once
the
build
completes,
then
the
deployment
will
start.
B
B
So if something were to go wrong, you've got the logs here for that build, and you can see what happened. Sometimes, if there's a dependency issue, and the npm install process or something fails, you can go here and say: okay, I need to go fix something in my code, and then come back. Okay, so now we see "push successful" here; come back over to the Topology view, and that's a green check.
Now, soon, we'll start to see this ring around it change as the deployment (there we go) starts rolling out. I clicked on that center circle there to get this little panel to show up; this is information about our deployment. You can see the pod here; the container is creating right now. You can view the logs for your pod from here as well, by clicking into that. It's still coming up, so there are no logs just yet. Here we go... okay! So now it's listening; that shows us
the web application right there. So hopefully you all got to that same point, if you were following along. So now it has deployed, we can get to it from this URL, and it is running. Again, you could have done all of that on the command line with oc new-app as well. You would have had to expose the service to create a route; you'd have to do that extra step if you did it through the command line. And we talked (whoo, it's scrolling the wrong way)...
We talked a little bit about how you would do that here, if you wanted to do it from the command line instead. Any questions on that? No? Okay, we'll move on. So, what Ryan was talking about before: if we want to set up webhooks, so that any time we make a change to the code and actually push it out, it'll do a new build and deployment, that's what we can set up now. So if you created a fork, go ahead and go back to your terminal.
How many folks are currently doing some type of git push to deploy? Is anyone using that currently? Not too many, okay. That was revolutionary five years ago, but I'm curious how many people are actually using that to kick off deployments today. I think a lot of folks have kind of decoupled how that works, but it is definitely nice to know that you can wire up automation, whether it's just deploying to your QA stage or earlier stages, or kicking off various types of automation based on changes in a repo.
(A question comes in about CodeReady Containers.) Usually, for CodeReady Containers, if I have a local installation, rather than relying on a webhook, I can just click on the Build button in the dashboard to trigger a new build whenever I need to. Or, what I like doing instead of doing builds based on whatever's coming in (that might be useful) is testing my code before I make the commit, and we can do that using some rsync features that we have queued up next.
Feel free, the rest of the day, to try the webhooks on your own, or stop by the Red Hat booth afterwards, and we can give you a demo of the git push to deploy and show the automation from GitHub back in. It's nice, but if you're not currently using it, just make a note that it does exist. And I think this live development is really where I see a huge opportunity for front-end developers to get some traction with Kubernetes, because, I think, for me...
So hopefully this is a big takeaway, and a way to show you how to enable your junior developers with a containerized workflow, and more visibility than they've had in the past for these more complicated problems, without putting barriers in their workflow where they have to run a build as a prerequisite in order to get some feedback, right? We want to give you feedback during your real-time dev loop, and that's what this is all about. Yeah, so.
I just feel like I have to say this: don't use oc rsync in production. It's pushing a file into your running container, so this would definitely be for doing your local development, that inner-loop stuff. It's also only pushing it to one pod, so if you've got, you know, five pods... just only use this for local development. Yes, you can use Jenkins with OpenShift, for sure, yeah.
But there will be a collection of JSON or YAML files that are Deployments and Services and other low-level things that, altogether, give you a Jenkins environment. So you can package up that full Jenkins pipeline as part of your dev stage, and when junior developers check out a development stage, they have their own Jenkins and their own CI tests, as part of their own kind of decentralized dev stage that they can run independently, perhaps, right? And then you can have another Jenkins in the staging area
A
That
does
a
second
round
of
checks,
but
they
could
hopefully
run
get
as
much
feedback
as
they
need
as
part
of
their
local
dev
loop.
If
that's
what
you,
if
that's
the
way,
you're
doing
CI,
then
then
yeah,
but
you
can
also
have
a
lot
of
other
testing
and
feedback
from
you
know.
Other
nodejs
based
build
processes
as
well
all
right
so.
B
This export here: what it's going to do is get us the name of our pod, basically, the one that has the label app=http-base, and then it's going to run the oc rsync. We get an error here, because we're trying to upload, I think, too many things; there's some permission thing. However, it did actually do it: if we go back to wherever it's running here and refresh, it says "hello openshift". So it did send that file up there, right, as this was telling you about.
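A sketch of that export plus rsync; the label, local path, and container path are assumptions from the demo app:

```shell
# Find one pod carrying the app label, then sync the local working
# tree into its running container (development only, one pod only).
POD=$(oc get pods -l app=http-base -o jsonpath='{.items[0].metadata.name}')
oc rsync . "$POD":/opt/app-root/src --no-perms
```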
Cool. So, we don't have time to get into this in this particular workshop, but I want to at least introduce you to another tool. It's another command-line tool that you can use with OpenShift, called odo, or "OpenShift Do", but I call it odo. What it is also intended for is to help with this inner loop, this iterative development. It's not just for Node; it supports Java, PHP, Python, whatever you're using, and it's meant to help kind of
simplify the command syntax. So if that's something that's interesting to you, you can check that out here. To create something in your development loop with odo, you would do odo create nodejs, and you'd be deploying it from your local directory. So here, you saw, there is actually a public git remote URL that we're deploying from; odo lets you do that local development from your actual laptop, so that's kind of a difference there that can be really convenient. It also can do this kind of watching loop.
Instead of relying on that webhook workflow to somehow call back into my local system, and also coupling all the builds to a commit, I can decouple those two: use odo, and do an odo push. What that'll do is push whatever the current contents of my repo are, whether they've been committed or not, into a build pipeline, run a build, and stream the build results back into my console while it's building.
A
So
it
gives
me
a
quick
kind
of
evaluation
of
whether
it
will
pass
a
build
or
not,
and
it
actually
triggers
a
build
in
my
local
cluster
or
whatever
cluster
I'm
pointed
at
and
while
decoupling
the
process,
so
I
could
kind
of
test
the
code
before
I
make
my
commit,
and
then,
if
it
looks
good
great,
then
I
make
my
commit
and
my
get
push
and
maybe
that'll
trigger
a
build
in
some
other
pipeline.
For
my
CI
team,
you
know,
but
I
can
do
just
photo
push
while
I'm
iterating
or
odo
rsync.
A
B
So there's also something called Nodeshift, which, if you went to Luke Holmquist's lab yesterday, I think he may have talked about there. You can run it with npx nodeshift and so on; it's just another way of helping you deploy Node.js applications on OpenShift easily. Again, we're not having time to get into it. There are CodeReady Containers; it sounds like at least one of you is using that already.
If you want to run a very minimal OpenShift cluster locally on your laptop to do local development, that's what CodeReady Containers can do for you. It takes a bit of memory, so you need to have a fair amount of memory available on your laptop to run it, but it's pretty easy to get set up, and it's nice for doing local work with OpenShift.
So that's CodeReady Containers, if you want to check that out. If you haven't seen learn.openshift.com, I'm going to open this up really quickly here, just to give you a quick view. There's a bunch of tutorials here, but there's also, if you just want to kick the tires, or have access to a cluster for a little while to try something out, these OpenShift playgrounds. There's one for OpenShift 4.2, which is the version we were just using.
If you go in here, there's no login or anything; you're going to get a 4.2 cluster for 60 minutes, do whatever, and then it goes away. So if you just need to try something out, and you don't want to install CodeReady Containers, you don't need to, or you're not ready to actually do more than just try something out: here you go. It's a similar environment to what we were using in the workshop, but you can log in as admin and have full admin access here, to this time-limited cluster. So that's another option,
if you want to try things out on your own. Yeah, so we do have some time for questions, and I also just want to mention, as Ryan said, we'll be at the booth the rest of the day, the OpenShift booth out there in the sponsor showcase area. So we'd be happy to talk to you if you have any questions that we can't answer right now.
I've got two last things to shout out for you. So Jan just mentioned this learn.openshift.com; I have some of these cards, if anyone wants a reminder about Learn OpenShift, for one-hour sessions without any signup or other expectations. The other thing that I wanted to point out: we have a link in the slides to this O'Reilly book. If you're interested in a free O'Reilly book on OpenShift, click on that link and you'll get a PDF download.
Both are, you know, still Kubernetes under the hood, trying to achieve a PaaS-style solution on top. So we covered some of this Kubernetes terminology; we didn't really dig too deeply into all of these, but we learned about routes; ask me about the other details if you like. The last thing I wanted to give you a link to was try.openshift.com. This is a good way to get started with new clusters. If you are interested in trying OpenShift on any cloud you like, you can deploy it; you need a developer account.
A
So,
even
if
you're
running
on
bare
metal,
we
want
you
to
have
really
solid
access
to
products
like
Retta,
backed
by
you
know.
The
actual
maintainer
zat
Redis
laps
and
if
you're
using
MongoDB
that
we've
got
actual
provided
by
MongoDB
incorporated,
so
we
try
to
work
with
all
the
maintainer
z--
in
the
industry.
So, each of these... if you want to give this OperatorHub a try, we have OperatorHub embedded in the dashboard. Logged in as a standard user, you don't have access; that's an admin-only feature, but you can try it out on your own with OpenShift 4. So administrators can go in, let's say to CouchDB: if they wanted to install this, you can run this kubectl create on any cluster, even a GKE cluster, or Amazon's Kubernetes, anybody's Kubernetes.
Hopefully, through this workshop, you've seen that the API is going to be asynchronous, and it's going to have a spec and a status, right? If you don't remember anything else: it's asynchronous, it's JSON (you can do YAML), but spec and status. You set the spec, and you read from the status, and that's how all of these data stores work on Kubernetes: they create a new resource type.
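The spec-versus-status split can be seen straight from the CLI; a sketch, assuming the demo's deployment name:

```shell
# Write side: declare what you want in the spec.
kubectl scale deployment/hello-k8s --replicas=5

# Read side: compare what you asked for with what actually exists.
kubectl get deployment/hello-k8s \
  -o jsonpath='{.spec.replicas} {.status.readyReplicas}'
```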
A
You
set
the
spec,
you
say
what
I
need
and
kubernetes
goes
to
work,
fulfilling
your
dependencies
and
so
we're
encouraging
all
the
major
data
service
and
kind
of
soft
infrastructure
providers
to
jump
in
and
develop
their
own
extensions
for
kubernetes.
So
hopefully,
you
folks
find
a
lot
of
success
with
the
information
we
put
out
here.
Definitely
give
us
feedback
if
you
have
any
thoughts
on
any
of
this
and
find
us
in
the
booth.
Oh, these are my old slides; I tried to add Jan.