From YouTube: Introduction to Windows Containers in Kubernetes - MICHAEL MICHAEL, VMware & Mark Rossetti, Microsoft
Description
Don’t miss out! Join us at our upcoming events: EnvoyCon Virtual on October 15 and KubeCon + CloudNativeCon North America 2020 Virtual from November 17-20. Learn more at https://kubecon.io. The conferences feature presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Introduction to Windows Containers in Kubernetes - MICHAEL MICHAEL, VMware & Mark Rossetti, Microsoft
The leaders of SIG-Windows will provide an update on the efforts to bring Windows to Kubernetes. This will concentrate on presenting an introduction of Windows Containers in Kubernetes and new features that are being delivered.
https://sched.co/ZeyH
A: We are a special interest group in Kubernetes, and our SIG is designed to help accelerate, define, and deliver Windows container support in Kubernetes.

A: So our agenda today: we're going to talk a little bit about where we are today, some of the continuous improvement we've done as a SIG to advance the vision of Windows containers in Kubernetes; we're going to do a quick deep dive on CSI and a high-level overview of containerd, plus a few perf updates and our future roadmap. I want to stress here that we have one more session on Windows containers that one of our key contributors on containerd and myself did.
A: Let's take a look first at why users and customers deploy Windows in Kubernetes. Our vision has always been to make Kubernetes truly ubiquitous and continue its lead as the top container orchestration platform. So our goal is to enable users, operators, and administrators to leverage the same operational efficiencies on Windows as they have today on Linux. That means the same knowledge, the same training that you have on Kubernetes applies on Windows.

A: You get a scalable platform that can span multiple operating systems, both Linux and Windows, and, most importantly, your developers are asking for a self-service, containers-as-a-service style platform. They want to take advantage of cloud native tools: the new way of building and deploying applications, the new way of scaling applications. Kubernetes delivers on that promise for both developers and operators.

A: But the key capability of Windows containers in Kubernetes is that you get to retain the benefits of application availability while decreasing cost, and that's super important in scenarios where you containerize existing or legacy .NET or Windows applications, because either you want to eliminate hardware, or you want to eliminate underutilized servers, or, most importantly, you want to streamline the migration of applications from end-of-support operating systems onto a new operating system. Kubernetes, our Windows support, and the new way of building applications is really the right way to go.

A: Think about this: you go from a different way of deploying and managing your applications to putting everything into a simple Dockerfile that you get to build and update over time, and through that Dockerfile you have a programmatic way to define what your application should look like by modifying a few YAML files and JSON files. It's amazing; it's great to actually have that support.
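As a rough illustration of that workflow, a legacy .NET Framework app might be containerized with a Dockerfile along these lines (the image tag and paths are hypothetical examples, not from the talk):

```dockerfile
# Hypothetical sketch: containerizing an existing ASP.NET (.NET Framework) app.
# Note the Windows Server 2019 (ltsc2019) tag: as discussed later in the talk,
# the container's Windows version must match the host's for now.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
WORKDIR /inetpub/wwwroot
COPY ./publish/ .
```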
A: Let's first take a look at some things to consider, and then after that we're going to go into our roadmap and where we've been. We want to make sure that everyone that's looking into Windows containers in Kubernetes reads the documentation; it's very important to understand the differences between Windows containers and Linux containers as it pertains to Kubernetes. As a special interest group, our goal has always been to make sure that we keep all the features happening in Linux in lockstep with Windows.

A: That's not always easy, and it's very important to understand where there's differentiation, where the Windows community is ahead, and where it is behind on certain features. Second, and most important: you need to look at the fact that on Windows, unlike Linux containers, you have to enforce host and guest compatibility. The Windows Server kernel version, the major version, should match for now with the version of the guest container that's running on the host.
A: So if you build it on Windows Server 2019, you must run the container on a Windows Server 2019 host. How do you achieve that? By using node selectors, and in particular the new node label, windows-build, that includes the Windows build in it. You can use taints and tolerations, and, as of the last couple of releases, you can leverage RuntimeClass, which you can define once per cluster, to simplify the steering of Windows or Linux pods to the appropriate nodes. In the other presentation I mentioned earlier, we'll also talk about Hyper-V isolation.
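A minimal sketch of that steering, assuming the standard `kubernetes.io/os` and `node.kubernetes.io/windows-build` node labels; the RuntimeClass name is illustrative:

```yaml
# RuntimeClass defined once per cluster; pods that reference it are
# steered to matching Windows nodes by the embedded nodeSelector.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-2019
handler: docker
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"   # Windows Server 2019
---
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
spec:
  runtimeClassName: windows-2019   # alternative: set the nodeSelector directly
  containers:
  - name: web
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```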
A: That was our alpha release, and essentially we ported the kubelet and the kube-proxy to run on Windows. It was great work; it was very buggy, it had a lot of limitations, but it essentially showed the art of the possible to everyone and rallied the community to come in and contribute and make this a reality. With 1.9, a year later, we had our beta release: tremendous updates. We had CNIs up and running, so a lot of work came in from the community, as you're going to notice here.

A: Since then, it's been four more releases of Kubernetes: 1.15, 1.16, 1.17, and 1.18, where we've made advancements in kubeadm, CSI, RunAsUserName, and we took some of those advancements to beta as well as GA. In fact, with 1.18 we made two significant advancements to GA: GMSA and RunAsUserName. Both of those capabilities enable Windows container developers to define an identity that the container will carry on the network, a really critical part of running enterprise line-of-business applications on Windows in Kubernetes.

A: We also took kubeadm to beta, Cluster API to alpha, CSI proxy to alpha, and containerd to alpha. 1.18 was probably our biggest release since 1.14, and then 1.19 came along and now we have CSI proxy to beta, we have containerd to beta, we have networking enhancements like DSR and EndpointSlices, and then additional stability and performance improvements.
A: All right, let's talk a little bit about CSI and some of the updates to the work that happened in the community. If you're not familiar with CSI, it's really the standard for exposing block and file storage to containerized workloads on Kubernetes, and it's an out-of-tree provider model that enables CSI drivers and providers to operate and release independently of the core Kubernetes release cycle. Here's the way that it works from an architecture standpoint.

A: It has a controller plugin and a node plugin, and the node plugin requires direct access to the host for making storage operations like calling block devices, file systems, and file system calls. On Windows, the node plugin cannot operate as expected, and this is because in Windows containers you don't have support for privileged containers, which are the containers that have a higher level of access on the node, and this node plugin needs to run with those elevated privileges.

A: So we couldn't really follow the same architecture for CSI support as it exists today in Linux. To solve that, we created a new component called CSI proxy, which bypasses that regular pipeline, so that node plugins can now be deployed as unprivileged pods and then use the proxy to perform a lot of the privileged storage-related operations that are needed on the node.

A: So it allows us to bypass the existing mode of how CSI plugins work. With 1.19, we are moving our CSI support and CSI proxy to beta. We've had a tremendous amount of updates here: we've added support for API versioning and also versioned API groups to support disk, volume, and SMB operations.
A: What's next? We're going to continue innovating in this work: we plan to have a full CI/CD engine, as well as add additional CSI providers like vSphere, AWS, and so on and so forth. So if you're interested in that work, come and help us enable it. Now let's take a look at the CSI architecture diagram. Like I mentioned, you have Docker and the kubelet, the standard components in Kubernetes, and some of the storage-related operations need to be handled at the higher privilege that is required.

A: We have this new component called the CSI proxy that is able to make the calls to the storage volumes, to the file system, to the file volumes on behalf of the CSI node pod. In the CSI node pod you have your node driver as well as a node plugin; like I mentioned earlier, they call the CSI proxy, which can proxy some of those requests down to the file systems. Then on the right side you have your Windows pods, which have the different containers that can be attached to a persistent volume.
A: The next area that we're taking to beta with our latest release, 1.19, is containerd. So why did we do containerd? Like I mentioned earlier, please attend the other session on Windows, because we do a deep dive on containerd there, but containerd essentially allows us to align the direction of our investment with the rest of the Kubernetes community, as well as the other active investments happening across both the CRI community and the Kubernetes community.

A: It's optimized and maintained by our community, which includes the Windows operating system team; you can see a huge consortium of contributors trying to get containerd to work better. What are some of the benefits of that work? Number one is that it's going to enable Hyper-V isolated containers to run on Kubernetes.

A: That means you're going to be able to have a secure multi-tenant boundary across Windows containers on Kubernetes. This is huge. It's going to enable a new class of use cases for users where multi-tenancy is an important aspect of their delivery in the Kubernetes environment. What we don't have today is GMSA support, and we plan to add it. As I mentioned earlier, GMSA stands for Group Managed Service Accounts, and it enables Windows containers to carry a Windows Active Directory identity on the network.
A: Today it works by having the kubelet, with the embedded Docker shim all in one release, calling Docker, which in turn calls the HCS version 1 schema, and then it goes down to the container. HCS is a service that Microsoft started right when they delivered the first release of Windows containers, called the Host Compute Service, and it basically acts as an overlay, or the boundary, for connecting to the underlying container engine. With containerd, the first thing that happened was that it decoupled the containerd effort from the kubelet, so now they can be versioned independently.

A: They can be released independently, and that gives us a lot of flexibility here. The CRI is embedded in containerd in this architecture, and from there it calls hcsshim, which is basically a shim layer that was added in front of the HCS that enables you to make calls to both HCS and HNS, the Host Networking Service, as well. And now we support the v2 schema; with Docker, the HCS v1 schema was the only one that was supported.

A: The v2 schema was experimental, while containerd in this case supports the v2 schema, allowing us to advance the vision of Windows containers and also support critical new functionality like Hyper-V containers in the future. Looking at our technology matrix now: so what does that mean? When we started with 1.14,
A: our first release of GA for Windows containers supported only one operating system, and that was Windows Server 2019, the LTSC release; that stands for Long-Term Servicing Channel. That's the release of Windows that's supported for many years, and Microsoft gives you an option to purchase an extended support agreement as well.

A: 2004 is the latest and greatest release of Windows. If you want more information about the different servicing channels and the different releases of Windows, there's a link at the bottom of this page that you can follow. Now, one of the last parts of 1.19 is some perf and reliability improvements. We've heard from users that Windows containers need to execute better, that we need to improve both the stability and reliability. We've heard your feedback, and we've done a lot of advancements in that area.
A: Today we gather a lot of the kubelet metrics through the metrics server, so Prometheus can integrate with that, and some of those metrics time out when you have a large number of pods on Windows nodes. That's not great; that's not a great experience for our users, so we've now added support to make that better. We have much improved performance for the kubelet stats and summary work, and we've also added support so that CPU limits are now respected on Windows containers.

A: So if you overprovision, your limits are going to be respected. More importantly, limits matter because, when you think of memory on a Windows container, there are no pod evictions due to memory pressure on Windows. So when processes get paged, when they're paged to disk, that means you get slow performance. The way to bypass that is to use the kube-reserved and system-reserved capabilities to basically allocate some of those resources for the node processes.
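One way to set those reservations, sketched as a kubelet configuration fragment (the specific CPU and memory values are arbitrary examples):

```yaml
# Reserve node resources so the kubelet and Windows system processes
# are never starved by pods (values are illustrative, not recommendations).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "500m"
  memory: "1Gi"
systemReserved:
  cpu: "500m"
  memory: "2Gi"
```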
A: The first one is compute: we're going to keep investing in containerd, we're going to get that to GA, we're going to support Hyper-V isolated containers like I mentioned earlier, and we're also going to support GPUs on Windows. That was something that barely missed the cut for 1.19, but rest assured we're going to deliver it with 1.20; we're actively working on that right now. From a deployment and lifecycle management standpoint: you know, kubeadm and Cluster API.

A: We're going to add additional storage providers like vSphere and AWS, and we're going to light up Velero support so you can take advantage of CSI snapshots, so you can back up and recover your cloud native applications running in Kubernetes. In turn, we're also going to deprecate some of our in-tree storage plugins in lieu of the CSI support that we're adding.
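For context, once that CSI snapshot support lights up, a snapshot request is just a small API object; the class and PVC names below are placeholders:

```yaml
# Request a point-in-time snapshot of a PVC through the CSI snapshot API
# (beta as of Kubernetes 1.17; names are placeholders).
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: example-csi-snapclass
  source:
    persistentVolumeClaimName: app-data
```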
A: How can you contribute? Come join our weekly meetings, every Tuesday at 12:30. All our conversations are recorded. Come and help us write documentation and user stories. If you want to find some bugs to fix and engage with our community, we have a project board; look for the bugs that are tagged as "good first issue". Those are usually the ones with the minimal amount of knowledge of Windows containers and Kubernetes that you need to get started. As you advance your involvement with our community, review open PRs, create your own PRs, become a contributor or even a technical lead. We really are an open and welcoming community and, most importantly, we get to work on pretty much every area of Kubernetes. So if you get involved in Windows, you can touch storage, compute, networking, security, API. If you want to learn Kubernetes end to end, SIG-Windows is one of the few SIGs where you can do that.
A: If you want to reach us: on the left side we have the two co-chairs, myself and Mark Rossetti, and Deep Debroy, our technical lead; we have Slack and GitHub and how you can connect with us; and then for the SIG-Windows community, we're on Slack, we have a mailing list, we have our documentation over here, and we have GitHub getting-started guides.

A: If you want a one-stop shop for everything around Windows in Kubernetes, we have our YouTube playlist with pretty much every recorded meeting we've had going back three-plus years, and then our Zoom link for our meetings every Tuesday. Thank you all for attending this presentation. I'm going to open it up for Q&A now.
B: So Mark and I would like to thank you for attending our talk. Thank you for spending time with us, whether it's morning, afternoon, or evening, depending on the time zone where you are. We're going to start answering some of the questions that are on the Q&A right now.
B: In addition to that, be aware that you can find us on Slack. Towards the end of this presentation, the engineer is going to post the message of what it will be, but it's going to be on the CNCF Slack, under the kubecon-maintainer channel, and there's a thread for SIG-Windows. Mark and I are going to be there after this talk, so feel free to interact with us more.
B: You can also find us on the SIG-Windows channel in the Kubernetes Slack, like this slide has indicated; you can send us an email on our mailing list, or you can come to one of our community meetings. There are lots of ways to engage with us. So let's get started answering some of these questions.
B: All right, I'll start from the earlier one. Mark answered the question: can there be Linux as well as Windows machines as worker nodes in the same cluster? Yes, that is correct. We support heterogeneous clusters, where you can have multiple kinds of nodes. Be aware that the CNI that you use needs to be compatible with both Linux and Windows; not all CNIs can support both operating systems, so be aware of that. For example, Calico does, Antrea does, and flannel does with the host-gateway backend.
B: Next question: how does Kubernetes understand that this container has to be scheduled on a Windows worker node? There are many ways that you can do that. In previous conversations and in our documentation, we talked about a couple of critical ways to do it. One is basically setting the operating system in your YAML file to say it's targeted at Windows.

B: We've also added a new variable that allows you to define the build of Windows that you want the scheduling to happen on, so you can define it there. You can also use RuntimeClass, which is what we recommend, so you can define it once and use it cluster-wide, so that all of your deployments can target the RuntimeClass, and the RuntimeClass dictates where the scheduling needs to happen. And you can also use taints and tolerations, where you basically taint every Windows node with a specific...
C: I was just going to say, if you're deploying charts off Helm or other kinds of deployments, where you just grab a pre-canned deployment, it is important to either look at the deployment or to add taints and tolerations to your nodes, because most Helm charts that we've seen don't restrict the containers that must run on Linux to run on Linux. So this is just an easy way to help protect yourself from running into some scheduling issues in your cluster.
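A sketch of that taint-based protection; the taint key and value here are one common convention, not a requirement:

```yaml
# Taint Windows nodes so pre-canned Linux charts are repelled unless they
# explicitly tolerate Windows:
#   kubectl taint nodes <windows-node> os=windows:NoSchedule
# A Windows pod then opts in with a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: win-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: "os"
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```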
B: All right, the next question: how will CNI work on a Windows machine? Will it work the same as on a Linux machine? Fundamentally, the way the CNI will work will be pretty similar. It's going to have an agent or a pod or something running on the Windows node; it's going to connect with the master components of Kubernetes as well, so that it can actually connect to the API and know when new pods are being created. For networking, there need to be networking components so that addresses can be allocated or ingress can be set for some of these pods. But, like I mentioned earlier, be aware that not all CNIs have a Windows implementation. The ones that you're more familiar with in our community are OVN, flannel, and Antrea, and some versions of flannel are at a GA level.
C: Also, if you're interested: members of the Windows container networking team from Microsoft have delivered deep-dive talks into how container networking works at past KubeCons. So I would recommend looking up those videos on YouTube if you're interested in looking under the hood a little bit.
B: Actually, David also published a blog post on understanding the state of container networking on Windows in Kubernetes; if I can find it really quickly here, I can post it in the answers as well. Mark, if you can answer the next question about monitoring, I will try to see if I can find the blog post.
C: Sure. The next question was: how are users monitoring Windows pods? Is the metrics API reporting the same metrics and granularity as is possible for Linux pods? The answer is mainly yes. There were a lot of changes, as Michael mentioned in the talk, specifically around helping perf for the metrics APIs for Windows nodes. Most of the general metrics, especially those needed for operations like horizontal pod autoscaling, are the same as Linux.
C: Okay, and the next question is: are there developments for a free, open source option to use for network policies on Windows nodes? I'm actually not sure about that. Do you know about that, Michael?
B: One second. Mark, I posted the link to the blog post that David created on understanding the state of networking; I posted it under the question about how CNI works, and they can read that. I want to add one more thing to your monitoring and metrics answer: we actually did improve a lot of the metrics APIs with 1.19, so folks that are using, for example, Prometheus for monitoring are going to see improved performance in how metrics are gathered and reported. So both of those are great here.
B: There are only two CNIs on Windows that support network policies: one is OVN and the other one is Antrea. Calico also supports network policies, but that's under the paid Calico offering from Tigera.
C: At this point, I would say that most of the components required to run master nodes can technically run, but it hasn't really been a focus of the community to date to get master nodes running, or Windows nodes running as masters, simply because there's just so much work to keep and maintain Windows nodes running as agents, and that's where the community's focus has been.

C: If this is something that you're particularly interested in, there's no reason why you wouldn't be able to join the community and help move that agenda forward.
B: Yeah, and one thing to add: there is a tremendous amount of testing required to actually convert from Linux to Windows and test everything all over again, and it's a non-trivial cost to us as a Kubernetes community. On top of that, there are certain assumptions that different components have made. Even though the language is portable, it's written in Go, there are some assumptions in the master components that they are running on a Linux node, whether it's security assumptions or otherwise. It's just a huge effort.

B: If any of our users or customers are interested in that, please be vocal about it; we can start collecting the evidence. But this is not happening right now; we have zero goals around this. Ralph had a question about: is Microsoft working with Red Hat to get Windows containers supported in OpenShift?
B: Question number nine: what does Microsoft licensing look like in terms of using Windows Server cluster nodes and container images? Mark, that's yours. Do you want me to find the link on licensing as well to post it, Mark, while you talk?
C: Yeah, the next question is: are there migration tools to help migrate legacy apps to Windows containers? The answer to that is, there are some. I know the Microsoft team has been working on a set of containers that have all been made available on GitHub to help with things like logging.
B: Okay, I'll take the next question: we're having a Kubernetes cluster on VMware; if we have Windows as a worker node, will CPI and CNS of VMware work for the Windows worker node? Not everything will work; I want to be a little bit clearer there. The vSphere volume will work from a storage standpoint for persistent volumes, and from a compute standpoint obviously it will work, but CSI depends on the new work that we're doing in SIG-Windows with the CSI beta that's coming out in 1.19.
Any
other
questions
we
have
about
two
minutes
or
one
minute
left
in
our
session.
There's
any
additional
questions.
Ask
them
here
or
come
find
us
on
slack
find
us
in
our
community
meetings.
We
have
a
lot
of
ways
to
engage.
This
has
been
a
very
lively
discussion
with
all
125
of
you
that
joined
today.
Thank
you
so
much.
B: Is GitOps, such as Argo CD, possible at this time? You know, a lot of the DevOps and CI/CD tools today use the Kubernetes API to connect to Kubernetes and enable the CI/CD pipelines. Windows containers don't have any difference from a pipeline standpoint: we use the same Kubernetes APIs, and you can target the same artifacts in Kubernetes for Windows as you can do on Linux.
B: You know, I'm going to do a shameless plug here, but at VMworld, both U.S. and Europe, of last year, 2019, I did a presentation; if you go and connect, you should be able to download it for free. If you search for my name, Michael Michael, you'll find it. It's on some best practices for migrating legacy applications to containers. It's a one-hour-long presentation where I talk about the caveats, some things that may work, and some things that may not work. It's a fairly involved conversation.
B: Maybe, given these questions, it might be worthwhile for us to actually do a presentation like that at the next KubeCon, so Mark and I will take that under advisement. But please go to VMworld, look at 2019, find my presentation; it's one hour long and has lots of good content in there. Or come to one of our community meetings and we can chat more. But it's not a small topic.