From YouTube: Windows Containers in Red Hat OpenShift
Description
This video demonstrates how to run hybrid workloads - Windows and Red Hat Enterprise Linux - on Red Hat OpenShift.
To learn more, visit openshift.com
So here is a demo of running Windows containers on top of OpenShift. In this demo we want to start simple and just hit the basics, so this is going to involve the following steps: first step, we're going to create a Windows instance on Azure; second step, we're going to join that Windows instance to an OpenShift cluster, also running on Azure; and thirdly, we will go ahead and deploy some real-life workload on top of that Windows node.
So here is the architecture we came up with. It consists of an Ansible playbook that initializes the prep of the Windows VM. The core engine that's really driving the whole setup is the Windows Machine Config Bootstrapper, or what we call WMCB. That sets up the kubelet to run as a Windows service, preps the CNI, lays out the kube-proxy, and sets up the hybrid overlay and any other networking plumbing that's needed so that this Windows instance can join the OpenShift cluster.
The cluster also runs the OVN hybrid overlay instead of the default OpenShift SDN. This is because we want to be able to carve out a portion of the network per Windows node.
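As a rough sketch of what enabling hybrid networking can look like (the field names follow the OpenShift cluster network operator API; the CIDR and host prefix below are placeholder values, not the ones used in this demo):

    # Carve out a hybrid overlay network for Windows nodes (placeholder values)
    oc patch networks.operator.openshift.io cluster --type=merge \
      -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{
            "hybridOverlayConfig":{
              "hybridClusterNetwork":[{"cidr":"10.132.0.0/14","hostPrefix":23}]
            }}}}}'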
So let's start with the cluster that's in place. We already have an OpenShift cluster up and running, which we can see now; there isn't really any application deployed on it, as you can see.
So if we say oc get nodes, we will see a three-master, three-worker node cluster, and if you want to see what's powering the worker nodes and the master nodes, it's essentially Red Hat Enterprise Linux CoreOS with a 4.x kernel. So it's a completely Linux-based cluster, and we are now going to join the Windows node to it.
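A minimal sketch of the commands being run here; nothing cluster-specific is assumed:

    # List the nodes in the cluster
    oc get nodes
    # The wide output also shows the OS image and kernel version backing each node
    oc get nodes -o wide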
So first step: let's actually go ahead and create the Windows instance on Azure. So we go to Azure and create a Windows VM.
We can turn off monitoring and diagnostics, because that's really not the objective of this demo, so we can go ahead and turn that off. In the tags section, make sure you specify the right tags that are needed for this VM. You then always have the option of downloading the Azure Resource Manager template for future automation, or you can just go ahead and create the VM. In this case, I already have the Windows VM created.
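For reference, a hedged sketch of doing the same thing from the Azure CLI instead of the portal; the resource group, VM name, image alias, credentials, and tag are placeholders, not values from this demo:

    # Hypothetical resource group, VM name, and tag, for illustration only
    az vm create \
      --resource-group my-openshift-rg \
      --name winworker-1 \
      --image Win2019Datacenter \
      --admin-username azureuser \
      --admin-password '<password>' \
      --tags purpose=winc-demo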
You can go and examine all the resources created, in particular the network security group for the node; make sure that you have all the right inbound security rules in place to allow traffic into the appropriate nodes. So right now the Windows node has been created in Azure. The next step is to bootstrap it, and what we can do is, with the help of an Ansible playbook, join this particular Windows instance to the OpenShift cluster. Before we do that, there is one minor thing we have to do inside the Windows node: we actually have to run a couple of scripts to enable the Ansible connection. This script will basically allow remote Ansible connections to be accepted, and next we need to open up TCP port 10250, so that when we say oc logs, we can get the logs from containers on this Windows node.
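A hedged sketch of what kicking off that bootstrap might look like from the Ansible side; the inventory file and playbook name here are hypothetical stand-ins, not the actual names used in the demo:

    # Hypothetical inventory and playbook names, for illustration only.
    # The Windows host must already accept remote WinRM connections from Ansible
    # and have TCP port 10250 open so the kubelet can serve logs.
    ansible-playbook -i hosts.ini windows-node-bootstrap.yml -v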
So up till now we have created the Windows instance and we have successfully joined it to the OpenShift cluster. The next step is to actually deploy some application, some real-life workload, on the Windows node. So we're going to take a Windows container and we're going to schedule it on this Windows node. Now before we do that, we need to know the concept of taints.
Node affinity is a property of pods that attracts them to a set of nodes, and taints are exactly the opposite: they allow a node to repel a set of pods. So if you issue this command, we can actually get all the nodes and all their taints. As you can see, the three master nodes have taints, because you obviously don't want to schedule any workloads on them.
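The exact command used in the demo isn't spelled out in the transcript, but a command of this shape lists each node along with its taints:

    # Show every node and any taints set on it
    oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints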
The three Linux-based worker nodes don't have any taints, but the Windows node has a taint of os=Windows, and unless a corresponding pod has the toleration for os=Windows, it will not be scheduled on this particular node. So let's look at this particular web container, which is a Windows-based application that we're going to schedule on this Windows node. You can see it has a toleration for os=Windows, which means it can be scheduled on that Windows node. So let's go ahead and deploy this Windows-based container.
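A minimal sketch of what such a deployment manifest can look like; the names, image, and exact taint key/effect are assumptions for illustration, not the manifest used in the video:

    # Hypothetical manifest, applied inline for illustration only
    cat <<EOF | oc create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: win-webserver
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: win-webserver
      template:
        metadata:
          labels:
            app: win-webserver
        spec:
          nodeSelector:
            kubernetes.io/os: windows      # keep the pod off the Linux nodes
          tolerations:
          - key: "os"                      # matches the os=Windows taint described above
            value: "Windows"
            effect: "NoSchedule"
          containers:
          - name: win-webserver
            image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019  # image chosen for illustration
            ports:
            - containerPort: 80
    EOF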
A
Sikri
f,
that
should
go
ahead
and
deploy
this
windows
container
on
the
windows.
Note.
So
if
I
say
OC
get
nodes,
you
see
that
I
have
my
windows
known
and,
if
I
say
poster
yet
pods
I
should
have
a
newly
created
part
for
the
windows
web
server.
If
I
say
OC
describe
pod
and
I
give
the
name
of
the
part,
you
would
clearly
see
that
it's
running
on
the
videos
node.
So
in
this
case
the
windows
webserver
container
is
scheduled
to
run
on
the
windows
node
right.
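A short sketch of that verification sequence; the pod name below is a placeholder for whatever oc get pods reports:

    oc get nodes                               # confirm the Windows node has joined
    oc get pods -o wide                        # the NODE column shows where each pod landed
    oc describe pod <win-webserver-pod-name>   # placeholder pod name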
This deployment also exposed the pod as a service, which means, if we say oc get services, we should be able to get the external IP address for this Windows-based application. Copy that and let's open it in the browser, and there we go: we are able to access the application, which is a Windows-based web container running inside the Windows node. That shows an example of north-south traffic.
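A hedged sketch of checking the service and hitting it from outside the cluster; the address is a placeholder:

    oc get services              # look up the external IP of the service
    curl http://<external-ip>/   # placeholder address; or open it in a browser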
We also deployed a standard Linux-based nginx container on the same cluster. If we say oc describe pod with the name of the nginx pod, we will clearly see that the nginx pod is scheduled to run on the Windows - sorry, on the Linux side: it's been scheduled to run on one of the Linux worker nodes. We can go ahead and expose this nginx-based pod as a service, so we can hit that as well, this time with kubectl.
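Again a minimal sketch, assuming the nginx pod came from a deployment named nginx; the name and port are placeholders:

    # Expose the Linux-based nginx deployment with an external IP (names assumed)
    kubectl expose deployment nginx --type=LoadBalancer --port=80
    kubectl get services nginx    # wait for the EXTERNAL-IP to be assigned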
As you can see, this is now available with an external IP address, and if I hit it from a browser, I should be able to see my nginx container come up.

So wrapping up: we deployed both a Windows container and a Linux container on the same OCP cluster. The Windows container got scheduled on the Windows node, the Linux container got scheduled on the Linux worker nodes, and all of it is managed by the same OCP control plane.