From YouTube: OCB: Windows Containers on OpenShift
Description
Windows Container Support for Red Hat OpenShift is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is now possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With Windows nodes available, you can run Windows container workloads in OpenShift Container Platform. In this briefing, Red Hat's Anand Chandramohan will discuss the development of the WMCO, which provides all Windows container workload capabilities in OpenShift Container Platform and members of the technical staff will demonstrate how it all works.
B: Sure. My name is Anand, product manager for the Windows Containers on OpenShift offering, and with me is Aravind. Aravind, would you like to say a few words?
B: Sure. So the agenda for today is obviously Windows containers. As Karina and Chris mentioned, this has been a very highly requested feature, and we're glad to let you know that it went GA a couple of weeks ago in December. You can now try Windows containers on OpenShift on AWS and Azure, with other platforms soon to follow. For the next 45 minutes or so, here's the agenda.
B: We wanted to start with a brief introduction to Windows containers, then a technical overview of the Windows Machine Config Operator, then how you can schedule different types of workloads on OpenShift, including .NET workloads (both .NET Framework and .NET Core), then how we differentiate against other Windows offerings out there in the market, and then we will wrap the presentation with a peek into what's coming on the roadmap. We went GA a couple of weeks ago, and we will continuously refresh the operator every couple of months or so.
B: So we want to give you a peek into what's coming in the next three, six, nine months, and then hopefully leave plenty of time for Q&A at the end. We'll also point you to all kinds of resources that are available. Feel free to pop a question into the chat; Karina, Chris, and Aravind will help me look at those questions, and feel free to interrupt me as well.
B: Having said that, why Windows containers? Windows Server still enjoys a significant presence in the server operating system market: about 50% of the enterprise OS market share is still Windows, and on Windows, .NET is a widely used programming language. If you look at RedMonk or a lot of the popular programming language surveys, .NET and VB.NET still rank highly in terms of the choice for application development. And traditionally, Windows has remained largely independent of Linux, on its own island, and that did not enable a lot of Windows-native developers to embrace cloud-native technologies like microservices and containers. When I say "cloud-native workloads on Windows," it almost seems like a paradox because of that. But we wanted to get Windows into a cloud-native world; we wanted our older Windows customers to make the big leap of faith to public cloud and do cloud-native development.
B: Obviously that addresses some of the pain points mentioned on the previous slide, but the first and biggest benefit is that this serves as a bridge for legacy Windows customers to adopt public cloud, or hybrid cloud for that matter. We'll talk about the different options you have for going from, let's say, running older Windows like 2012, or newer Windows like 2019, to a more modern cloud-native strategy.
B: The next benefit is essentially the general benefit of containerization. You get application portability, which means that once containerized, you can run the workload on the platform of your choice and the infrastructure provider of your choice. You get more agility, because you can release faster and more often. You get more control of the container infrastructure. And last but not least, moving from a VM to a container-based world, you're obviously going to be reducing infrastructure and management costs.

B: So why do you need Red Hat for running Windows containers? We'll talk about some of the differentiation for Red Hat Windows containers toward the later part of the deck, but one thing I want to point out is that with Red Hat OpenShift for Windows containers you can co-locate Windows and Linux containers in the same cluster, which means both Windows and Linux worker nodes can be happy citizens of the same cluster, communicating with each other. We're also trying to build a first-class management experience for Windows containers: for instance, just as you go to the OCP console today to manage your pods, applications, and services, we're trying to get a similar experience for Windows as well. And obviously Red Hat OpenShift is supported on a wide variety of platforms: public clouds, private clouds, OpenStack, bare metal.
B: Here is the lay of the land and how the offering is placed. Like I mentioned, Red Hat OpenShift is a truly hybrid cloud offering: it can run on physical servers (bare metal), on virtual servers (say, vSphere clusters), on private clouds, on public clouds (AWS, Azure, and so on), and on public clouds as a managed offering; for instance, on Azure we have Azure Red Hat OpenShift, and on Amazon we have ROSA. But the predominant operating system of choice for running OpenShift used to be, and still is, Red Hat Enterprise Linux and RHEL CoreOS.
B: What we're now adding as part of this offering is the ability for cluster admins to run Windows Server machines as worker nodes. So now you can have Windows worker nodes alongside RHEL worker nodes on the same cluster, scheduled, managed, and controlled by the same control plane, communicating with each other and with the outside world, all being happy citizens. The rest of the stack is pretty consistent; all the other cluster services and layered services don't change.
B: One caveat: a lot of the other cluster services and platform services that work seamlessly with RHEL and RHEL CoreOS today have not yet been tested on Windows. That's a story in progress, something we are working to harden. So a quick question could be, hey...
B: Here is the schematic of how you can co-locate Windows and Linux worker nodes under the same OpenShift control plane. Alongside them, you can also co-locate Windows virtual machines, enabled by another offering called Red Hat OpenShift Virtualization, which lets you wrap a Windows virtual machine in a pod and run it inside OpenShift. And all three of these entities, like I said, can be happy citizens, intermingling, happily talking to each other and to the outside world, managed by the same control plane.
B: The next question becomes: what do you use for what? How do you position, say, OpenShift Virtualization versus Windows containers? The quick answer is that if you have legacy workloads that cannot be containerized (say, for instance, a Microsoft SQL Server that needs to be a long-running instance), those might not be a good fit for a container.
B: Some of the newer Windows applications, on the other hand (for instance, an IIS web server, or even a .NET Core application, which is more microservices-enabled), might be a good fit for containerization. If it is .NET Framework, that's obviously supported only on Windows; but if you've migrated from .NET Framework to .NET Core, you have the choice of running it on Windows or on Linux, because .NET Core is supported and certified on Linux.
B: Here is a positioning table. This is really to tell you that Red Hat OpenShift gives you a spectrum of choices for modernizing your older Windows workloads. You could start with virtualization, which is just forklifting VMs to OpenShift; that's easy and low friction, but offers few of the benefits of containerization. As a next step, if you wanted some benefits of containerization, you can strangle the monolith, decompose it into microservices, containerize some of your applications, like your .NET Framework apps, and bring them to OpenShift. You get some benefits of containerization and some benefits of OpenShift, but at the same time the Windows container ecosystem is still evolving: the Linux container ecosystem has been out there for six, seven, maybe more years, whereas the Windows ecosystem is still catching up. A lot of the features have not been hardened yet, and that's still work in progress.
B: As the next step, if you wanted the full benefits of the Linux container ecosystem, you could re-architect your application. That means you could, say, take your older .NET Framework apps, migrate them to .NET Core, and then run those .NET Core applications on RHEL or RHEL CoreOS, thereby getting the full benefits of Linux containerization, the full benefits of OpenShift, and the full leverage of a highly evolved Linux community. The trade-off, obviously, is that there's migration effort involved in moving from .NET Framework to .NET Core; you might need a development team, so that's going to consume time, cost, and effort. And the last step: if you're starting greenfield, if you want to go completely cloud native and you don't care about Windows or .NET, you could just start with RHEL or RHEL CoreOS, which is the most dominant operating system of choice on the cloud and on OpenShift, and start building open source applications: your Java applications, your Ruby, your Node.js, your Python.
B: So you would be working with a highly modern stack with this approach, but again the trade-off is that you need a development team, and if you're, let's say, an older customer who's running in maintenance mode, you may not have the luxury of staffing a development team to build new cloud-native applications.
B: So again, the takeaway from this slide is that OpenShift gives you a spectrum of choices, and we are here to hand-hold you from one step to the next. You don't need to go from step one to step four in one shot; you can make incremental progress. At the same time, you can start at any step and end at any step: you can start with virtualization and end with containers, you can start with Windows containers and end with Linux containers, or you can just start with Linux containers.
B: The Windows Machine Config Operator is the entry point for OpenShift customers who want to run and schedule Windows workloads on an OpenShift cluster. It is a day-2 operation, which means that on day 1 you set up your OpenShift cluster (your default cluster with, say, three masters and three RHEL CoreOS worker nodes), and then on day 2 you set up Windows workers with the help of this operator.
B: The only prerequisite is that your cluster must have been built with the OVN-Kubernetes networking provider with hybrid networking enabled. If you built it with a different networking provider, like OpenShift SDN, you'll have to create a new cluster.
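As a rough sketch, that networking provider is chosen at install time in install-config.yaml; the values below are illustrative placeholders rather than a complete configuration, and hybrid networking itself is enabled through an additional install-time network customization:

```yaml
# install-config.yaml (excerpt) -- illustrative values only
networking:
  networkType: OVNKubernetes   # required for Windows nodes; OpenShift SDN will not work
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
```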
B: Here is the architecture of the Windows Machine Config Operator. Adding a Windows worker node involves three steps. The first step is a cluster admin installing the operator via OperatorHub: you launch your cluster, go to the console, navigate to the in-cluster OperatorHub, look for the Windows operator, and install it. It's literally a click, and it literally takes a couple of minutes.
B: The next step is that the cluster admin defines a machine set: a bunch of machines that carry specific labels, for instance os=Windows. The third step is that the operator is watching for those labels on those machines, and if it finds machines that match, it takes each of those machines one by one, and on each of them it sets up all the plumbing needed for that machine to be bootstrapped into the OCP cluster as a worker node: all the infrastructure components like kube-proxy, the hybrid overlay, the kubelet, and the CNI. Once the plumbing work is done, it joins that node to the cluster, and you can start scheduling workloads on that node. Then it takes the second machine and does the same thing.
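The watch-and-bootstrap flow described above can be sketched as a toy reconcile loop. This is purely an illustration of the pattern, not the operator's actual code (which is written in Go); the label key mirrors the machine.openshift.io/os-id: Windows label used in machine set examples, and everything else here is invented:

```python
# Toy sketch of the reconcile pattern: look for machines carrying a
# Windows label, then "bootstrap" each one exactly once. Illustrative only.

def reconcile(machines, configured):
    """Configure every Windows-labeled machine not yet set up.

    machines   -- list of dicts with 'name' and 'labels'
    configured -- set of machine names already bootstrapped (mutated in place)
    Returns the list of machine names bootstrapped in this pass.
    """
    bootstrapped = []
    for m in machines:
        if m["labels"].get("machine.openshift.io/os-id") != "Windows":
            continue                      # not a Windows machine: ignore
        if m["name"] in configured:
            continue                      # already a worker node: skip
        # The real operator's work happens here: install kube-proxy,
        # the hybrid overlay, the kubelet, and the CNI, then join the node.
        configured.add(m["name"])
        bootstrapped.append(m["name"])
    return bootstrapped

machines = [
    {"name": "win-1", "labels": {"machine.openshift.io/os-id": "Windows"}},
    {"name": "rhel-1", "labels": {}},
    {"name": "win-2", "labels": {"machine.openshift.io/os-id": "Windows"}},
]
configured = set()
print(reconcile(machines, configured))  # ['win-1', 'win-2']
print(reconcile(machines, configured))  # [] -- second pass is a no-op
```

Each pass only touches Windows-labeled machines that have not been bootstrapped yet, which is why a second pass is a no-op; the real operator additionally reacts to new machines appearing in the watched machine set.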
B: So that's really the secret sauce here: the work that's been invested to make the operator functional and as highly automated as possible.
B: In terms of the platforms we support today: like I mentioned, we went GA in December, and we went GA with a cloud-first approach, which means we support IPI on AWS and Azure, with support for vSphere IPI coming very soon. The slide says the ETA is January 2021, which is most likely accurate, because we'll most likely get vSphere IPI support in the community version of the operator first. Other teams are also working on a bring-your-own-host story. IPI is about treating machines as cattle: if you have a bunch of compute instances on AWS or Azure, you have access to nearly limitless compute instances, and you can provision and de-provision them at will.
B: We will consider requests for other versions of Windows. For instance, if you're running older versions like 1803, or more modern versions like 20H1, we will consider customer requests for those and prioritize accordingly. But right now we support Windows Server 2019, and if you're running an older version of Windows you have two choices: either migrate to this version of Windows and run containers, or use OpenShift Virtualization.
B: Next, the operator is also responsible for upgrading all the software components it lays down. For instance, if a new version of kube-proxy or the kubelet is released, we will take that newer version of the kubelet and put it through our OpenShift build system to make sure there are no CVEs or vulnerabilities, that it's secure, and that nothing bad can happen when you put that kubelet on your nodes. We'll take it through our OpenShift build system, build a newer version of the operator, and make that new version available in the in-cluster OperatorHub. Then, once the operator gets upgraded,
B: the operator will make sure that each of the machines it configured is upgraded to the latest version of the software. Say, for instance, you had four machines running version 1.0 of the kubelet, and version 1.2 of the kubelet is released.
B: We will rebuild the operator, and the newer version of the operator will make sure the kubelet on all four of your Windows worker nodes is upgraded from 1.0 to 1.2. The question that begs asking is: does the operator upgrade the underlying Windows operating system as well? The answer is no. The end user is responsible for upgrading the Windows operating system. This, we feel, is something the cluster admin should be responsible for, and there is no way
B: the Windows Machine Config Operator can upgrade the underlying OS, for a lot of reasons; this is pretty much the common stand taken by Google and Microsoft themselves. So the cluster admin will provide an updated image of Windows, specify that image in the machine set, and we will bootstrap that machine set into the cluster.
B: Next: how do you go about thinking about placing .NET workloads on OpenShift?
B: I think we spoke about this, but let's say you have older .NET Framework apps, like 3.5 or 4.6: you can target them at the Windows operating system. And if you have a more modern version of .NET, like .NET Core, which is more microservices-enabled, you can obviously target them at both Windows and Linux.
B: There is some guidance provided by Microsoft as to when to use .NET Core versus .NET Framework, and when to use Linux containers versus Windows containers. Microsoft has put out an ebook with all the guidance on the architectural choices you should be making as you go about moving these apps.
B: Here's a simple decision tree. Say you start with .NET Framework. If it's a compatible version (for instance, it's running 4.7.2, which is supported on Windows Server 2019, which is supported by OpenShift), you can run those workloads on Windows nodes. If it's an incompatible version, say it's running .NET 3.5 or 4.6,
B: Microsoft has put out something called the .NET Portability Analyzer, an API porting tool that goes in as an extension to Visual Studio and helps you analyze your existing project for portability. So, for instance, if you have, let's say, .NET 4.6 and you want to migrate to a newer version of .NET Core, you input your project, select "Analyze Project Portability," and it gives you an assessment report of what aspects can be ported and how difficult that migration is going to be.
B: Competitive differentiators: I think the two biggest, according to me, are these. First, we support, or intend to support, a lot of platforms. For instance, if you have a cloud offering like AKS, you cannot run AKS on your on-prem bare metal clusters. With Windows containers on Red Hat OpenShift, we intend to support all the popular clouds, like AWS and Azure, and the majority of the popular on-prem platforms, like vSphere, bare metal, Red Hat Virtualization, and OpenStack, so you get coverage for a lot of platforms, making it a truly hybrid cloud offering. The second big differentiator, according to me, is the operator that Aravind and team have built. That's really the secret sauce: it gives you a whole bunch of automation, and within a couple of clicks you're talking to Windows servers running Windows containers. To me, the real takeaway is the operator that we have built, and then obviously the other secondary benefits.
B: We will harden other stories like logging, monitoring, and storage, and move to containerd. Then, in the long term, which is really nine months and beyond, we want to look at the scope of customer requests: whether customers are asking us about running service meshes on top of Windows worker nodes, or whether they want these Windows nodes to be managed by things like Open Policy Agent and the Gatekeepers of the world.
B: And Windows containers obviously has limitations today. Like I said, this is work in progress. I think Aravind and team have done a phenomenal job of helping a cluster admin bring in Windows workloads, and I think that's a great first step, but there's still a long journey to go. For instance, we don't support Serverless, we don't support Pipelines, we don't support Service Mesh, cost management, or CodeReady Containers. This is still an evolving story, so watch this space pretty closely.
B: Because of this limitation, the reduced feature set, we offer support at a standard level, not at a premium level. And if you're looking for the SKU, this is the SKU you need to be looking for.
B: The starting point for this is obviously the OpenShift cluster, so you launch your cluster and then navigate to OperatorHub. In OperatorHub you search for "windows" and you'll see two operators pop up: the community operator and the Windows Machine Config Operator. I want to point out a difference here. The community operator is something we put out as a skunkworks project.
B: We release it every couple of weeks, maybe every couple of sprints, and it's really use-at-your-own-risk: there is no promised level of support. You can ask questions, but there's no guarantee of an answer. The Windows Machine Config Operator, on the other hand, is the hardened version of the community operator. It's gone through our internal secure build systems, and we make sure it's fully baked.
B: We make sure it's fully documented, fully QE'd, fully tested, and fully supported at a premium level. You can raise a bug, you can raise a support ticket, you can call us; you get access to full support with this offering. And we will refresh it as and when we land new features and as and when there are big milestones. So say, for instance, you want to use the Red Hat operator:
B: you can choose your update channel, you can choose the namespace, and you can choose whether the approval strategy should be manual or automatic. I obviously already have it installed, so it says "already exists," but you can click Install, and it takes a few minutes to get the operator installed.
B: Once it's installed, you can go to Installed Operators and see that the Windows Machine Config Operator was successfully installed on this date. You can click on it and look at the same sort of documentation for it. So now the operator is installed. The second step is to actually create a machine set, which will onboard, or rather tell the operator which machines to watch for. So you go to Compute, then Machine Sets.
B: You click Create Machine Set, and here you can specify a machine set. Let me see if I have a machine set handy here... yeah. So you can specify a machine set where you instruct the operator to look for specific machines, in this case machines that are Windows and that carry specific labels.
B: So let's say you want to add a Windows machine as a worker node: you specify the label here that says the OS ID is Windows, and then you go ahead and specify the compute shape and size of this Windows instance. You say this is a Microsoft Windows Server image, which region you're deploying to (let's say in Azure), and which network resource group on Azure you want to deploy into.
B: What's the disk size, what's the disk type, what resource group in Azure you want this deployed to, what subnet in Azure, what the VM size is (for instance, a D2s v3), what VNet and what availability zone you want this deployed in. So it's pretty much like creating a Windows instance on Azure: you'd have to go specify these things anyway; in this case you're just automating that into a machine set, and you can copy and paste this machine set.
B: We provide sample machine sets for all the providers we support. You can stick that machine set here and then click Create, and that goes and creates the machine set. Simultaneously, as the machine set is created, the operator is watching for machines with specific labels in that machine set, and if it finds machines with those labels, it takes those machines, bootstraps them, and sets them up as worker nodes.
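A sketch of what such a Windows machine set might look like on Azure. The names, region, VM size, and image values are placeholders, not the demo's actual YAML, and the exact label key the operator watches for can vary by release; check your cluster's infrastructure ID before using anything like this:

```yaml
# Illustrative Windows MachineSet for Azure -- placeholder values throughout.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: winworker
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: winworker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: winworker
        machine.openshift.io/os-id: Windows   # the label the operator watches for
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          kind: AzureMachineProviderSpec
          image:
            publisher: MicrosoftWindowsServer
            offer: WindowsServer
            sku: 2019-Datacenter
            version: latest
          vmSize: Standard_D2s_v3
          location: centralus
```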
B: This machine set is named winworker; again, you can go see the YAML for it. The key thing is that once the machine set is deployed, the operator watches it, takes machines from it, and onboards them to the cluster. So, for instance, in this case a bunch of machines have been onboarded, and one has been provisioned, because if you look at this winworker machine set, it has a desired count of one. So if I, say, bump it up to three, scale it, and come back to Machines,
B: you can see that one has been provisioned and it's in the process of provisioning two more. It usually takes about 15 to 20 minutes to get each Windows node provisioned, because it's Windows and it takes a little more time. But I do have one provisioned Windows node, and you can see that this is a worker node.
B: With the Windows node up, you're now ready to start placing, or scheduling, workloads on it. So now what you can do is go deploy a pod. In my case, I have a Windows Nano Server deployment ready.
B: The only key thing to note here is that the operator taints all the Windows nodes with os=Windows. So if you're deploying a pod, you need to make sure it carries the matching toleration for os=Windows. The kube-scheduler will take this pod, see that it has the toleration, find a node that has the corresponding taint, and place the pod on that node. So you would deploy the pod that way.
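As a sketch, a pod spec with such a toleration might look like the following. The names, image, and entrypoint are illustrative, and the exact taint key and effect should be checked against what your operator version actually applies:

```yaml
# Illustrative Windows pod: the toleration must match the taint the operator
# puts on Windows nodes; the node selector keeps it off Linux nodes.
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: "os"
    value: "Windows"
    effect: "NoSchedule"
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    # Placeholder entrypoint; a real web server would run its own process.
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 100000"]
```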
B: Then, if you list the services, you see that the Windows web server has also been deployed as a service, which means it can take traffic from the external world, from the public internet, because it provisions an external IP address. You take that IP address, put it in the browser, and boom: you see that it's able to get traffic from the outside.
B: Going back: this is the application we deployed, the Windows web server, and we exposed it as a service, so you should see a Windows web server service that has this external IP address, and you can take that external IP address and hit the application. So now you can pretty much take any application, make sure it tolerates the taint, deploy it, expose it as a service, and then access it.
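A minimal sketch of such a Service, assuming the pod carries an app: win-webserver label (names and ports here are illustrative, not from the demo):

```yaml
# Illustrative LoadBalancer Service fronting the Windows web server pod.
apiVersion: v1
kind: Service
metadata:
  name: win-webserver
spec:
  type: LoadBalancer    # asks the cloud provider for an external IP
  selector:
    app: win-webserver  # matches the pod's label
  ports:
  - port: 80
    targetPort: 80
```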
B
A
B
A
Awesome-
and
we
have
so
many
questions
in
the
chat
since
you
are
already
in
the
console.
A
One
of
the
last
questions
is
asking
about
storage.
Can
you
show
how
or
you
would
add,
storage
to
the
windows
container.
B: Yeah. On Azure we support Azure Disk and Azure File, and on AWS we support EBS volumes. On vSphere we will support what is supported today for storage, which is the in-tree volume plugin, and then once vSphere moves to CSI, we will support it via csi-proxy as well.
A: Thanks, we'll search for that and then post it in the chat. Okay, let's see: I would presume that for Windows machine sets, cluster autoscaling is available, or automatic?
B: That is one of the things on our roadmap. We haven't tested it out, but you can scale at least manually: like I mentioned, you can go to machine sets and increase or decrease your count. The cluster autoscaler is something we've not yet tested, but it is on our roadmap; autoscaling is definitely one of the things we want to make sure is tested and supported.
A: Nice. All right, let's see: support for NFS and CIFS?
C: The thing that I'm most excited about at the moment is bring-your-own-host. We see a lot of questions and a lot of requests, both on the OKD side and from our customers, saying they have a set of Windows "pets" that they want to add to OpenShift clusters as worker nodes. So that's my number one priority at the moment, that's what I'm working on, and I'm very excited to get that out, hopefully very soon.
A: Thanks. What's your favorite feature that everybody's working on?
B: In terms of a feature that has been completed, I would say it's machine sets. The way machine sets are used to glue in the Windows machines, at least on IPI clusters, is, I think, magical. You literally saw me onboarding a couple of Windows nodes onto the cluster within a couple of clicks, pretty much, and that's really my favorite part of the story that has been completed so far. In terms of my favorite feature that's coming ahead, I'll name the same one: bring-your-own-host.
B: A lot of our Windows customers are still on-prem; they're still running Windows workloads on non-cloud platforms like vSphere, bare metal, OpenStack, and whatnot, and we really want to make sure this offering works for them and serves the purpose of bringing and modernizing those workloads to OpenShift, to containers, and to public cloud. So the bring-your-own-host story is the one I'm really looking forward to.
A: You mentioned modernizing, and early on in the presentation you also mentioned rehosting, refactoring, re-architecting, and rebuilding, the different ways to bring your Windows applications into OpenShift. What are you seeing most? What are you being asked for the most?
B: Again, a lot of customers are at different points. We talk to customers in healthcare and manufacturing; they still have a lot of old Windows baggage, they are really brownfield, and they don't have a lot of development resources. Especially in a COVID era like this, they're really trying to run lean and mean, and so they're looking at a combination of our first two strategies: rehost and refactor.
B: "If I have Windows 2012 or Windows 2016, can I easily bring it over to a machine and pop it into OpenShift? If I have a newer version of Windows, like 2019, can I quickly take an IIS web server, drop it into a container, and bring it to OpenShift?" Those are the two most common techniques I have seen so far, at least; I talk to customers mostly in manufacturing and healthcare, and that's pretty much what I have seen.
B: I'd like to highlight something about pricing. Windows pricing on OpenShift involves three components. The first component, obviously, is that the control plane needs to be licensed. Second, your Windows worker nodes need to be licensed with Microsoft Windows licenses. And third, for running these Windows workloads in OpenShift, you need to license it again with Red Hat for Windows container support, and the charge for that is a hundred dollars per vCPU, or 400...
A: Thank you. There is also a question about naming standards; did you answer that one too, or could we ask it? All right: .NET 5.0 is published as "dotnet-50" while .NET Core 3.1 is ".netcore-31," and Chuck's worried that some developers might see "dash 50" as a newer version.
C: Yeah, the publishing of those containers is actually sort of outside our wheelhouse; we don't really control or get involved in that. But it's something I think Anand and I can take back and ask internally about, what the deal is around that. I don't know off the top of my head why that sort of decision-making happened.
B
On the question about Microsoft images, or any of the .NET Core images: one point I want to emphasize is what Aravind said. I just want to up-level that and say that we do have a very tight working relationship with Microsoft, both on the business side and on the engineering side.
B
We have built this offering working very closely with them, so do post the question and we'll make sure we get it answered by Microsoft, because this question seems to be a Microsoft question and not a Red Hat question; but since we enjoy that tight relationship, we're glad to get it answered for you. On top of that, I'll also point out that this solution is jointly supported by Red Hat and Microsoft.
B
So if it's an issue with the operator, you would raise a bug against the operator, and that comes to Red Hat support. Red Hat support will triage it, and if they find that it's, let's say, an issue with the Microsoft OS, we will actually open a support case with Microsoft. Microsoft will acknowledge it, and if it happens to be genuinely a problem with Microsoft Windows, they will publish a fix upstream. Again, like I said, we will take that fix,
B
put it back through our build system, and provide a refreshed version of the operator. So that's the high-level workflow of how a customer can expect to get fixes from Red Hat and Microsoft. You will not have to call Microsoft; you will call Red Hat, we will engage Microsoft and get it fixed upstream so everybody has a fix, we will downstream it to our build system, which gives you that security and hardening, and then refresh it back to you.
A
I'm glad you mentioned the relationship with Microsoft. They've been a really, really good partner, and I know you all have a really good working relationship. All right, another question, let's see.
A
UPI installations: I know Aravind did answer this, but can you address UPI users?
C
Yeah, so we're going to treat UPI in the same bucket as bring your own host, and our bring-your-own-host solution is going to work across the board. It's going to work for UPI installations, it's also going to work for on-prem, and we're hoping it's going to be a magic bullet even for other platforms that don't support machine sets.
A
And OKD was also mentioned, so are you doing a lot of this work in OKD as well?
C
That would be one place to go. You can also engage us on the OpenShift Slack channels on the Kubernetes Slack; I actively monitor those, so anytime you whisper the word "windows" you'll most likely see me respond. So that's another channel.
B
There is also a windows-openshift mailing list, and I can have that made available through Karina. All new announcements, all new refreshes to the community operator, new releases of the Red Hat operator: we constantly keep our mailing list notified about the latest and greatest changes.
B
So you can subscribe to that mailing list as well; that's another way of engaging with us.
A
Or one that you think would be good. All right, everybody, we have 10 minutes left if you want to add some more questions. And there's another one asking about, I know you did talk about it, but on-prem installs.
B
It's going to fall under the umbrella of bring your own hosts. And since we have 10 minutes, can you spend a minute or so just talking about the high-level design for how you're thinking about the bring-your-own-host problem with config maps? Even if the design is not set in stone, I think it'd be good to share our thought process with the audience.
C
Sounds good. What we're trying to do is make it as easy as possible for folks to onboard their existing Windows VMs or Windows instances in their data center. The way they'll express this intent of adding these instances is by creating a config map inside the Windows Machine Config Operator namespace. In the config map, they'll specify the username for accessing the Windows instance and its IP address, and what we would suggest at that point is this:
C
This Windows instance needs to be configured with the same private key that we are using for our machine set installations. This will allow the operator to SSH into that machine and set it up the same way it does for a machine created by a machine set. This way the customer doesn't have to give us too much information; there is no need to store things like passwords anywhere. We'll just reuse the same private key, and we'll of course take feedback.
C
If customers come back and say they would like to use different private keys for different instances, we'll take that feedback, and in upcoming releases we'll have a way to specify specific private keys for specific instances. At the moment our intent is to make it as simple as possible: just drop a config map in the namespace, and the operator will watch for this config map, access the instances specified in it using the same private key, and configure them as Windows nodes.
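A minimal sketch of what such a config map might look like, based purely on the description above. The config map name, the namespace, and the data-key layout here are assumptions for illustration, not a documented format; check the Windows Machine Config Operator documentation for the actual shape:

```python
# Hypothetical sketch of the bring-your-own-host config map described above.
# Name, namespace, and data layout are illustrative assumptions only.
import json

byoh_config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {
        # Assumed name; placed in the operator's namespace as described.
        "name": "windows-instances",
        "namespace": "openshift-windows-machine-config-operator",
    },
    # One entry per existing Windows instance: the address the operator
    # should reach it at, plus the username to SSH in with. The operator
    # reuses the machine-set private key, so no passwords are stored here.
    "data": {
        "10.1.42.1": "username=Administrator",
    },
}

print(json.dumps(byoh_config_map, indent=2))
```

The point of the shape is what the speaker describes: the customer hands over only an address and a username, and the existing private key covers authentication.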
C
So we are sort of ambivalent about it. As long as your cluster supports DNS resolution, you can use DNS; if not, you can use static IPs. We really don't have a reason not to support DNS, but we will not do anything special to enable it: the cluster setup should have DNS resolution working properly, as for any Linux worker or any Windows worker, and then this should also work.
A
Now, for everybody else, this isn't the only Windows session that will be done, so obviously we're getting all kinds of ideas on what would be other good areas to dive into, and it sounds like bring your own host will definitely be a great session to dive into. So if you also have other wish-list items, let us know. And I see that we're being asked for the links in the chat; Bruce, we'll have those available after the session.
B
That is right, that is right. So vSphere IPI should be supported through the community operator pretty soon; I want to say really soon. And then once the bring-your-own-host story that Aravind is working on is done, vSphere UPI will also be supported. So watch for us: like I said, subscribe to our mailing list. As soon as vSphere IPI is available through the community operator,
B
we'll send a notification, and then we will let that harden for a couple of weeks, and then we'll support vSphere IPI on the Red Hat operator. And once bring your own host is available, we will let you know as well.
A
That's a great point, especially for everybody using the 4.6 EUS release: if you want to stay on it for a bit, definitely let them know whether you need that support. All right, Anand, if you want to close out the presentation, do you have the link to the mailing list handy? So at least we can tell people right now to sign up.
B
Right, what happened? That's right. So yeah, if there's a subscription request, I'll make sure they're added to the list, and they should start receiving notifications.
A
Very cool. All right, well, with one minute to go: thank you both so much, that was awesome. I know everybody's probably shocked that it's finally out the door. We have Windows container support now, and it just keeps going from here. So thank you, and until next time. Thanks, and Chris, if.