From YouTube: Kubernetes SIG Windows 20170808
Description
Kubernetes SIG Windows 20170808
Jason Messer: Right, good morning or good afternoon, wherever you're coming from. This is Jason Messer. I'm a PM lead in the Windows Devices Group, core networking team, and joining me today are Dinesh Kumar Govindasamy and Madhan Raj Mookkandy, who are software engineers on our team. I'm going to talk to you about the networking improvements we've made for Kubernetes. Let's see a quick agenda.
Dinesh: So, similar to Linux, we have different network modes in Windows, so I thought I'd give a quick recap of the different network modes that we have in Windows and the importance of each network mode. The default network mode in Windows, too, is NAT, similar to the bridge network mode in Linux. In this network mode, both the MAC and the IP of the container will be rewritten to the MAC and IP of the container host. In overlay network mode, the inner container packet will be encapsulated with an outer header from the container host. The transparent network mode is basically the mode where both the container's MAC and IP will be on the same underlying network as the container host. And then, in addition, we have two other network modes called L2 bridge and L2 tunnel. In both of these network modes, the container's MAC will be rewritten to the container host's MAC, but the IP will be from a routable space on the underlay network.

The difference between transparent and L2 bridge or L2 tunnel is that the physical network will be learning the container's MAC in transparent network mode, whereas in L2 bridge and L2 tunnel the physical network will not be learning the container's MAC. So MAC spoofing needs to be enabled in transparent mode, while in L2 bridge and L2 tunnel you don't need to enable MAC spoofing. Between the L2 bridge and L2 tunnel network modes there is one small difference: if there are two containers on the same container host, the traffic will be bridged within the container host in L2 bridge network mode, whereas in L2 tunnel network mode the packets will be sent to the fabric host; in a cloud, that will be the cloud fabric host. The reason we did this network mode is specifically for cloud environments where you want to apply policies on your physical host on behalf of the containers.
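For illustration (not part of the talk): on a Windows container host these modes are typically exercised by creating a Docker network with the matching driver, which lands in HNS underneath. The subnets, gateways and network names below are placeholders.

```powershell
# Transparent: container MAC/IP sit directly on the physical network
docker network create -d transparent --subnet=10.0.50.0/24 --gateway=10.0.50.1 TransparentNet

# L2 bridge: container MACs rewritten to the host MAC; IPs routable on the underlay
docker network create -d l2bridge --subnet=10.244.1.0/24 --gateway=10.244.1.1 L2BridgeNet

# L2 tunnel: like l2bridge, but all traffic is forwarded to the fabric host for policy
docker network create -d l2tunnel --subnet=10.244.2.0/24 --gateway=10.244.2.1 L2TunnelNet
```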
Dinesh: So, with Windows Server 2016 we had a lot of gaps in terms of our networking capabilities compared with Linux, and we are bringing in all of those platform features and enabling the community to build on top of these features. The architecture in Windows is going to be: you have an external vSwitch which is bound to your physical NIC, and then we have a vSwitch extension called VFP, which is basically a Virtual Filtering Platform. This filtering platform is basically similar to iptables in Linux, where you have rule-based match-action processing for both ingress and egress, and you can define what that rule is going to look like. And then there is HNS, the Host Networking Service, which exposes these VFP rules in a way that we can program easily.
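A quick way to see what HNS has programmed (not shown in the talk) is the hns.psm1 helper module from Microsoft's SDN repository; the import path below is an assumption about where you placed it.

```powershell
# Assumes the helper module from https://github.com/microsoft/SDN was
# downloaded locally; the path is a placeholder.
Import-Module C:\k\hns.psm1

# List the networks HNS knows about (NAT, l2bridge, etc.) and their subnets
Get-HnsNetwork | Select-Object Name, Type, Subnets

# List per-container endpoints, including the IP/MAC that HNS assigned
Get-HnsEndpoint | Select-Object ID, IPAddress, MacAddress
```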
Dinesh: So we are creating an external vSwitch and we are enabling this Virtual Filtering Platform, through which we are programming the rules via HNS. Both kubelet and kube-proxy are basically modified to use the Windows CNI plugin and to program HNS directly for these policies, and we are going to show a demo of the modified kubelet and kube-proxy for Windows and how we are bringing all the requirements of Kubernetes to Windows.
Dinesh: Again, we are bringing in support for running multiple containers sharing a single network namespace, so that you can have multiple containers inside a single pod; that is one feature we are bringing in. We are also bringing in support where you can have a single container NIC that provides both pod-to-pod connectivity and Internet connectivity, and the NATing for the Internet connectivity is done through VFP rules, similar to the iptables rules in Linux.
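For context, a hedged sketch (not from the demo) of what the shared-namespace support enables: a single Windows pod spec listing more than one container. The image names, pod name, and the node selector label (the beta label in use around this era) are illustrative.

```powershell
# Illustrative two-container Windows pod; images and names are placeholders
@"
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
  containers:
  - name: web
    image: microsoft/iis
  - name: sidecar
    image: microsoft/windowsservercore
    command: ["ping", "-t", "localhost"]
"@ | Out-File -Encoding ascii pod.yaml

kubectl apply -f pod.yaml
```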
Dinesh: We are also bringing in native load-balancing support in the container host so that it can provide service-based load balancing, and we are also bringing in ACLing support through VFP, so that you can apply ACLs on behalf of the containers within the container host. So these are the different features we are bringing in in our next release of Windows, which is RS3.
Dinesh: So, just a quick recap with respect to the different network requirements in Kubernetes and how we are meeting them with the different network modes that we have in Windows. Both L2 bridge and L2 tunnel will be supporting all of the network features, as I talked through on the previous slide. In addition, overlay can also be supported, but we need to provide a control path that basically transfers the policies across the different hosts.

Pod-to-pod connectivity is basically provided through a routable IP, which can be a VNET IP or a logical network IP. Internet connectivity is provided through the same single interface, and NAT will be applied using VFP rules. With shared-namespace container support, we can now have multiple containers per pod. For service VIPs, both kube-proxy user mode and kube-proxy kernel mode will be supported, along with service VIP load-balancing and NATing support, in addition to applying ACLs on the fabric host.
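One hedged way to inspect the service load-balancing state that the kernel-mode path programs (using the same SDN-repo helper module as above; this is not something shown on the call):

```powershell
Import-Module C:\k\hns.psm1   # path is a placeholder

# Each policy list entry maps a service VIP to its backend pod endpoints
Get-HnsPolicyList | ForEach-Object { $_.Policies }
```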
Madhan: ...one node is on 10.244.1.0 and the master is on 10.244.2.0, and each host is on an Azure VNET space of 10.240.0.0/16. We denote the cluster VIP CIDR space as 10.0.0.0/16 and the kubenet CIDR as 10.244.0.0/16, and out of that we carve subnets of 10.244.1.0, 10.244.2.0 and 10.244.3.0, each a /24.
Madhan: I'll also go through and give input on the changes that were done to kubelet and kube-proxy. As part of kube-proxy, we enabled the kernel-space proxy: we added a new driver for supporting kernel space. The kernel-space mode basically programs VFP, as explained by Dinesh, through HNS, and whatever APIs HNS has exposed, we just use those APIs and program VFP for every service that is exposed in Kubernetes.
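As a pointer for readers (an annotation, not from the call): the kernel-mode path described here is what later surfaced as kube-proxy's winkernel proxier, selected roughly like this on a Windows node; the paths are placeholders.

```powershell
# Kernel-mode (VFP/HNS-backed) service proxying on a Windows node
.\kube-proxy.exe --proxy-mode=kernelspace --kubeconfig=C:\k\config
```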
Madhan: So all the Kubernetes address information is basically plumbed into VFP in a way that VFP can understand. Let me go and deploy a sample. Before that, I should also mention that as part of kubelet we introduced support to use CNI and the Windows CNI plugin. So we started using CNI, we developed the local CNI plugin, and we also have a configuration to deploy the network in L2 bridge mode using our local CNI plugin, with some of the inputs that we give and some other policies which we pass through CNI to HNS. So kubelet invokes CNI, and CNI doesn't have much information as to how to deploy it; it simply takes this configuration and gives it to HNS, and HNS has the knowledge of deploying the networks and programming VFP, and all the policies get plumbed through that. So CNI just acts as a proxy in this case; it just takes the configuration and sends it down.
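The talk doesn't show the file itself, but an ACS-engine-era Windows CNI configuration for an l2bridge network looked roughly like the sketch below; every value here is an illustrative placeholder, not the exact file from the demo.

```powershell
# Illustrative wincni configuration; subnets, DNS and paths are placeholders
@"
{
  "cniVersion": "0.2.0",
  "name": "l2bridge",
  "type": "wincni.exe",
  "master": "Ethernet",
  "ipam": {
    "environment": "azure",
    "subnet": "10.244.1.0/24",
    "routes": [ { "GW": "10.244.1.2" } ]
  },
  "dns": {
    "Nameservers": [ "10.0.0.10" ]
  }
}
"@ | Out-File -Encoding ascii C:\k\cni\config\l2bridge.conf
```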
Madhan: So, as we can see, there is only one interface that we are using. This is one difference from Windows Server 2016, where we had two interfaces, one for outbound Internet and one for pod connectivity; here a single interface is being used for both. And kube-dns is being used here, and this is on the .1 address of the /24. Now I'll show you that the pod-to-pod connectivity works.
Jason: So I just want to go over what Madhan and Dinesh showed. Basically, we had some feature gaps in the in-market product, Windows Server 2016, which are being filled, as you saw, in the Windows Server feature releases. We have a semi-annual release channel for Windows Server, so every six months there will be another Windows Server release. All of these capabilities are available today with the Windows Insider builds: basically, Windows Server entered into the Windows Insider build program, so that just like people get new client builds, they can also get new server builds flighted to their machine every two weeks. And so this was just a demonstration off the latest flighted build; basically, those builds will have this functionality.

So, multiple containers per pod: one caveat here is that this is for Windows Server containers using a shared kernel, so it's not using Hyper-V isolation with an isolated kernel; it's the shared-kernel variant of containers. We also decreased the number of endpoints. Before, we had two: one was for external connectivity,
using a Windows component called WinNAT, and then we had a separate one, basically for intra-cluster traffic, that was attached to a transparent network. Now we're able to collapse those into one single shared endpoint using that VFP engine, the definition Madhan referenced. So VFP is our Virtual Filtering Platform, which is analogous to iptables on Linux, basically; it's a flow engine that does match-action rules, and so all the different policies we have, like encapsulation or NATing or ACLing, can be done through that VFP engine.

We can do the same type of thing now in Windows using that VFP component, which is running in kernel mode. We also added some new support for DNS search suffixes; before, we were just kind of adding these on after the fact through kube-proxy. And then we do have our own local CNI plugin that supports the Host Networking Service's L2 bridge and L2 tunnel networking modes. So we believe that we have come a long way in the past few months, since I presented to this group last, to really get to feature parity with Linux.
Dinesh: We have created issues for the 1.8 release. It's basically kind of a bug fix, because there was no support for a CNI plugin in Windows and there was no kernel-mode proxy support, and the changes to the common code are very small. We are raising PRs to the different repos; here are the links for the CNI plugin and for adding the kernel-mode support to kube-proxy. And we are also modifying the ACS engine, which is basically used in Azure,
to support these new features. And HNS is basically a COM service, the inbox COM service that runs on Windows, and we have developed an open-source interface project, hcsshim, which basically has APIs to program this HNS service, so that anybody can program HNS, and VFP through HNS. This is the same thing that's being used across all of our CNI plugins and Docker drivers on Windows. Of course, any questions?
Jason: Yeah, we really appreciate the community support. The SIG Windows group has been active even before we joined, but we're wanting to add value to this group by doing a lot of the platform improvements, which Dinesh and Madhan demoed for us today, as well as contributing to the common code base, really with the desire to have this mainlined into the version 1.8 branch. Go ahead to the next slide.
Jason: So, hearing none: a lot of this work was done in the Azure Container Service, the ACS engine templates, but there's no reason this couldn't also work for on-premises deployments and in other public clouds. And I know efforts have been made in the SIG Windows group to get the OVS switch extension up and running, both in Google, which has been demoed before, and I believe Azure is close behind; but with this, basically, with our VFP engine natively in-box inside Windows Server in the new releases, that shouldn't be required anymore.
Jason: There's no reason you can't use it; it's just a preference, basically, about how to deploy those clusters. But we would love the community's help in actually starting to deploy these clusters on-premises and in other public clouds, like Google, using the Server Insider builds that we talked about earlier. It's very easy to get these, and the PRs for the public code, at least, are also available to build the kubelet and kube-proxy components with our updates in them as well. Then just a quick plug on what's next: what we're focusing on is really getting better performance out of our containers, both in terms of throughput and decreased latency, but also density, making sure we can scale out to multiple containers on a single host without any degradation in start time. Also, Dinesh mentioned the Calico efforts.