From YouTube: Kubernetes SIG Windows 20170404
A: Hi everybody, this is Michael Michael, and this is another meeting of the special interest group for Windows in the Kubernetes platform. Today we don't have any updates from a SIG perspective, but Jason Messer from Microsoft is going to give us an update on native networking improvements, specifically how they affect Kubernetes, and give us a presentation on what's coming up next from Microsoft. So Jason, go ahead.
B: Okay. I don't think I need to sell this much, since this is the special interest group for Windows, but there's a reason we should care about this, I think: the number of Windows container instances is actually dramatically on the rise. We're seeing a lot of positive growth and very quick adoption just since we released this last fall with Windows Server 2016, along with the customer interest.
B: A lot of the feedback we've heard from the customers we've talked to is that they don't just want to run a Windows cluster or just a Linux cluster; they actually want to run both. Maybe their services are doing Windows on the front end and something Linux-based on the back end. Is that something that the SIG has seen as well, that there's a desire to do those kinds of mixed-mode clusters?
A: Absolutely. There are kind of two types of customers we've been talking to. One has been the folks that have traditional workloads and want to migrate them into the cloud, and when I say migrate, I mean containerize them and start taking advantage of a cluster manager like Kubernetes to orchestrate their workloads, because they see that there's a significant time saver and it's a significantly better platform for their use.
A: And then there are the folks that basically said, hey, now there's a platform that allows me to pick the best-of-breed type of component for my application, combining some Linux components and some Windows components, and build my app. So I could use, for example, Redis as the backend running on Linux, which is the best platform for it, and then use an ASP.NET workload on the front end, or something else. So the fact that we're going down this approach affords them the ability to enable such workload definitions.
B: Okay, yeah. So we're really wanting to work to improve and also expand some of the capabilities that we're providing for networking, to support really any deployment, configuration, or environment and all the different network topologies that we've seen out there, both on premises and in private and public cloud as well. So we're moving in that direction. It's really a journey where we want to join you guys, and have you join us as well.
B: I think we could do a better job of being more active in the SIG, and I'm hoping we'll move in that direction. I know several people are already very active, especially on the Azure side, so I think you'll see more and more of the Windows networking team start to be more active there. So, just to start, here's a glance at where we came from.
B: The containers initiative really started with our Windows Server 2016 release, and that development cycle took a number of years. If you look back just a year and a half ago, maybe a little bit more, we were still using a lot of PowerShell to do a lot of the container network management, to set up all the plumbing and all the policies; it was a very manual, labor-intensive effort.
B: We created a new servicing layer called the Host Networking Service. That's kind of a pluggable servicing layer to interface not only with something like Docker in user mode, but also down below with the OS itself, to figure out all this plumbing and do it for you. We also worked to create a Docker libnetwork plugin for Windows and added support for a bunch of different networking modes and drivers. Right before Server 2016 was released, we actually had a major push to get some more developer tooling in place.
B: We didn't have the best support for things like Compose or service discovery, so we added some of those features at the very end of the Server 2016 release milestone last fall. We're continuing to do that, realizing that it's not just the IT admins we've traditionally targeted in the past, but also developers who want to use these new workflows with containers...
B: ...and the different tooling that's available out there. So, things like having a better NAT story on Windows to support multiple prefixes and multiple types of virtualization, you know, VMs and containers and emulators that all might need that network access, just to make the developer experience better. And now we're getting to the point where we're actually extending this out to scale these services out, not just on a single development node or one small deployment...
B: ...but across multi-node clusters. So you may have seen a new overlay network driver with swarm support in the Windows 10 Creators Update, which was just released, I think, this past week, and there will be something for Server 2016 coming soon, so stay tuned for that. These are just new features; we want to be in this mode of continually innovating, developing, and releasing new things. Towards the end of this talk I'll give you some more opportunities for how you might join us on that journey.
B: So let's go ahead and go a little bit deeper now into the technical side of things, just to give you a high-level overview of what our virtual network stack looks like. I have a picture on the right-hand side that represents a Windows Hyper-V host, basically. So obviously we have our own Microsoft hypervisor, which is Hyper-V, and then, from a networking perspective, we also have our Hyper-V virtual switch.
B: This Hyper-V virtual switch is able to provide layer 2 connectivity to VMs or containers, and different variants of containers as well, and it does this by creating network endpoints. We have different names for these network endpoints: there's some sort of virtual network adapter, either a host vNIC or a VM NIC, and that basically provides that layer 2 connectivity to these VMs or containers.
B: As far as isolation is concerned, we actually have a bunch of different isolation mechanisms. To start, I'll take the compute side of things. We have two different types of Windows containers. One is Windows Server containers, which share the kernel with the container host itself.
B: Each time a container endpoint is created and attached to a container, it's placed within its own networking compartment. The host itself has a default compartment where all of the host networking stack resides. So we have Hyper-V containers on the left of this picture, connecting to a VM NIC, and then Windows Server containers on the right. I also want to talk a little bit about security after we've covered isolation: right now we've done a lot of work with the Windows Firewall to basically enlighten it to be aware of containers and these networking compartment IDs.
B: From a management perspective, again, I talked about this Host Networking Service that we added as a servicing layer between the user-mode apps and the OS itself. It's really responsible for a few different things, and it works in conjunction with the Host Compute Service. So we have the Host Compute Service, which is bringing up the VMs or the containers, as the case may be, and the Host Networking Service, which is handling all the network plumbing behind the scenes. So whenever a Docker client, or maybe an orchestrator, invokes some commands on the Docker REST API...
B: ...it's actually flowing through that Docker engine, through our Windows libnetwork plugin, and interfacing with the Host Networking Service. The Host Networking Service is then able to create networks, usually represented by a vSwitch for each network. It will create any of the NAT and IP pools that are needed and manage those IP pools for some of the networking modes.
B: It will also work with the Host Compute Service (HCS) to create the correct endpoints, put those endpoints in the correct network compartment, and then assign IP addresses, DNS, and routing information to those endpoints in the container itself. Then, lastly, there are the policy pieces: as I mentioned before, if you have a port mapping, either a static or a dynamic port map, it will go ahead and create those rules, and with that it will plumb the correct Windows Firewall rules as well.
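As a rough illustration of that plumbing (this example is not from the talk; the image name and port numbers are arbitrary), a single port-mapped container on the default Windows NAT network exercises exactly that path: Docker calls into HNS, which creates the endpoint, the WinNAT mapping, and the matching firewall rule.

```
# Hypothetical example: publish container port 80 on host port 8080.
docker run -d --name web -p 8080:80 microsoft/iis

# On the container host, the resulting WinNAT static mapping can be
# inspected from PowerShell (Windows Server 2016).
Get-NetNatStaticMapping
```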
B: So this is really a big table, and I don't expect us to go through it right now, but we do have a bunch of different networking modes that are exposed through our Docker libnetwork plugin: NAT, transparent, overlay, L2 bridge, and L2 tunnel. I'm going to just pictorially show what these modes look like, and then people can come back to the slides later for reference. So, NAT: the way we implement NAT is similar to the bridge mode in Linux.
B: Basically, we have a component today called WinNAT that actually does that NAT operation. It provides an internal prefix for the containers, and, using the Host Networking Service, IP addresses are allocated and assigned from that subnet prefix pool to an individual container. For external access it's really kind of a routing operation.
B: We have what's called a management host vNIC that's sitting off to the side there, and it has the gateway IP address of that NAT network assigned to it. The default gateway routes for all these containers will go through that host vNIC, and traffic can then be routed out externally, and back in internally through the different port mappings, and so on.
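A minimal sketch of driving this mode from Docker (the subnet, gateway, and names here are made up, not values from the presentation; older Windows Server 2016 builds may limit you to a single NAT network per host):

```
# Create a user-defined NAT network; HNS allocates the internal prefix and
# assigns the gateway address to the management host vNIC.
docker network create -d nat --subnet=172.16.1.0/24 --gateway=172.16.1.1 my-nat

# Containers attached to it get addresses from that pool and reach the
# outside world through WinNAT.
docker run -d --network=my-nat microsoft/windowsservercore ping -t localhost
```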
B: That's NAT. The next mode I'll talk about real quick is transparent mode. This mode basically provides raw access from the containers to the physical network, or the container host network underneath. So we have to be careful with this mode: since there's no address translation happening and there's no encapsulation going on, it's all just the raw frames; all the MAC addresses and all the IP addresses are visible on the physical network itself.
B: Just a quick note: if the Hyper-V host, or the container host itself, is actually running as a VM, we need to make sure that we enable MAC spoofing on that VM's network adapter, because the physical network is going to be seeing all the MAC addresses from each container running in that container host. So that's obviously a security concern, and something we don't really recommend. We've designed some other networking modes to overcome some of those limitations.
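For reference, this is the standard Hyper-V switch setting that note refers to; a sketch assuming the container host VM is named "container-host-vm" (run on the physical Hyper-V host):

```
# Allow the container host VM to send frames with the per-container MAC
# addresses that transparent networking requires.
Set-VMNetworkAdapter -VMName "container-host-vm" -MacAddressSpoofing On
```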
B: The next one I'm really excited to talk about is overlay. We announced support for overlay in the Windows 10 Creators Update, and if you had some of the Insider Program preview builds, you would have been exposed to this in the past. Basically, we have a new forwarding extension component called the Virtual Filtering Platform (VFP). It does the VXLAN encapsulation operation for us in the container host itself. We previously used this for our software-defined networking stack in Windows Server 2016, but that had a dependency on the network controller itself.
B: What we've done now is basically remove that dependency and keep the same data plane, but we use the Host Networking Service to program all the different policies and flow-engine rules into the VFP extension itself, so we can create these overlay tunnels to separate different container networks from each other. So in this picture, basically, we just have two networks, a .1 and a .2, across two different hosts. This is all done through Docker swarm mode today; Docker swarm mode provides the key-value store.
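A sketch of how that looks from the Docker CLI on the Creators Update, assuming swarm mode carries the key-value store (the addresses and network names below are examples, not values from the talk):

```
# On the first Windows node, initialize swarm mode (it hosts the KV store).
docker swarm init --advertise-addr 10.0.0.4

# Create two overlay networks; HNS programs the VXLAN rules into the VFP
# extension so the container networks stay separated across hosts.
docker network create -d overlay --subnet=192.168.1.0/24 ov-net1
docker network create -d overlay --subnet=192.168.2.0/24 ov-net2
```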
B: L2 bridge is a mode we're going to start recommending more, especially, I think, for Kubernetes-based deployments. It's similar to the transparent mode, except that it does a MAC rewrite on ingress/egress: we're using that Virtual Filtering Platform, the VFP, to update either the source MAC or the destination MAC address on ingress and egress for that container traffic. So this saves on the number of MAC addresses.
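Creating such a network looks roughly like this (a sketch; the subnet and gateway values are placeholders and need to match the container host's physical subnet):

```
# l2bridge: containers share the host's MAC on the wire; VFP rewrites the
# source/destination MACs on ingress and egress.
docker network create -d l2bridge `
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 my-l2bridge
```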
B: The last one I want to talk about really quickly is the L2 tunnel mode, and this is really designed specifically for Azure. In this mode we have the Windows container hosts, which are VMs connected to an Azure virtual network, and so we still have that tunnel, which I believe is what we use in that environment.
B: So those are the five modes that we support today, and again, here's an overview; I'll let you guys reference it later when I make these slides available. The presentation also talks about IP address management and how we do that as well: we have different static and dynamic allocation and assignment schemes depending on the networking mode you're actually using. All right, so now I want to dive a little bit further into how we actually do Kubernetes networking in the Azure Container Service itself.
B: I know the group has been working a lot with the Cloudbase plugins for OVS and the OVN controller, and I saw a great demo by Michael a little while ago where he demonstrated Windows working with Kubernetes using that OVS switch extension, in GCE I believe, at the Google Next conference. You might have seen the announcement from us a few weeks ago at Container World, where we announced basically support for Kubernetes in our own Azure Container Service, in the Microsoft Azure public cloud.
B: Behind the scenes, you pick what type of worker nodes you want to use and you set up some security parameters like SSH public keys, and so on and so forth, and that actually goes out and creates a bunch of different resources in Azure itself, like load balancers, security groups, virtual networks, public IPs, and so on. What it also does is use one of the ARM templates for these Windows worker nodes, with some custom-built kube-proxy and kubelet code that we've contributed to the open source community.
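For context, provisioning a mixed cluster like this through the Azure CLI of that era looked roughly like the following; this is a sketch rather than the exact command used for the demo, and the resource names and password are placeholders:

```
# Create an ACS Kubernetes cluster with Windows agent nodes (Linux master).
az acs create --orchestrator-type=kubernetes `
  --resource-group myResourceGroup --name=myK8sCluster `
  --agent-count=2 --generate-ssh-keys `
  --windows --admin-username azureuser --admin-password 'myPassword12'
```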
B: On the slide there you can kind of see the fork that we're working out of right now, and we're also submitting pull requests, obviously, into the master branch to get over some of the limitations that were called out earlier. These updates that go into the Azure worker node VMs for Windows, for kube-proxy and kubelet, allow us to overcome some of those limitations.
B: We've also done work on the kubelet itself to make sure we have internet access through our NAT networks. So those are some of the changes we've made to make Kubernetes work in Azure with Windows worker nodes. So let's dive a little deeper into what one of those worker nodes actually looks like from a networking stack perspective.
B: This is kind of a complicated picture, so let me break it down as much as I can. In this picture we have two Windows Server containers; it could be N containers, it's not limited to just two. We're actually providing two different container endpoints, which are attached to different container networks. One of those networks is a transparent network, and so we've worked with the Azure Container Service team to basically create these networks behind the scenes: a transparent and a NAT network.
B: The transparent network itself is basically for inter-pod communication. We have multiple management host vNICs that have not only the host IP address for that container host VM, but also a forwarder virtual network interface that acts as the default gateway for the pod CIDRs that are provided and allocated for this particular container host VM. On that forwarder interface, in the bottom left-hand of the screen, you can see that there are also external IPs assigned. So if you do a kubectl apply and launch a new service...
B: ...basically it will work with the Azure load balancer to get an external IP address and then put that external IP address on the container host VM itself. On the other side, then, that's for ingress/egress traffic; sorry, the other side is for inter-pod traffic. So we have two vSwitches, and collapsing this is something that's actually on our roadmap. I can't give you any timelines, unfortunately, but we are working in that direction, because we know this is a cause for concern.
B: The reason we've actually done this (I have some frequently asked questions on the next slide about why we need two endpoints): basically, as I mentioned before, one is for ingress north-south traffic going through our NAT network, and the other one is for the east-west traffic between different pods in the cluster itself. Right now we're using a transparent network to provide that external connectivity.
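On a worker node this shows up as two Docker networks; a quick way to see them (illustrative commands, not part of the demo):

```
# List the container networks that the ACS provisioning created on the node.
docker network ls          # expect a "nat" network plus a transparent network

# Inspect either one to see its subnet, gateway, and attached endpoints.
docker network inspect nat
```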
B: As you know, Kubernetes assigns these pod CIDRs to a host node, and then multiple IP addresses from that CIDR range are assigned to the individual pods themselves. So we have to have a way to route between those different pod CIDRs, and the way we do that in Azure is using a user-defined routing table. It's actually implemented in our Virtual Filtering Platform extension on the physical host itself, and what it does is...
B: ...it looks up the destination prefix of any IP traffic and then sends it to the next hop, which is actually the IP address of the container host, through its gateway. So on the left-hand side we see that the pod CIDR range for this host is 10.244.1.x, whereas the one on the right is 10.244.2.x.
B: Traffic destined to an IP address in that range is then sent to the physical host; the physical host, through the VFP extension rules that implement the user-defined routing, will do a lookup and see that the next hop for that prefix needs to go to the second container host on the right-hand side. So that's how the east-west traffic goes between different containers and container hosts in Azure.
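The user-defined routing the slide describes can be pictured as one route per node's pod CIDR, pointing at that node's VNet address; a sketch with example values (the resource group, table name, and addresses are placeholders, not from the demo):

```
# Route the pod CIDR hosted on the first Windows worker node to that node's IP.
az network route-table route create `
  --resource-group myResourceGroup --route-table-name k8s-routes `
  --name node1-pods --address-prefix 10.244.1.0/24 `
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.240.0.4
```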
B: I'm running out of time here, so I apologize for going somewhat quickly through this, but I do have a few more slides to show. Basically, for ingress/egress traffic flow, again we're using an Azure load balancer, which is publishing a public IP address, and so an external client can access the service through the Azure load balancer's public IP address. On the backend, the Azure load balancer is then sending these requests out in a round-robin fashion, in conjunction with doing some health probes and so on to make sure these worker nodes are up.
B: A request is going to be sent in to one of these kube-proxies running in the Windows worker node itself; the kube-proxy can intercept or see that traffic and then, depending on where the service is actually running, it's going to redirect it on the backend side and send the traffic to the pod backing that service, represented by the purple circle on the bottom there.
B: That's the ingress/egress traffic flow. Another thing we're actively working on is a CNI plugin, and in fact there's a pre-release of this work that someone on our team did recently, for both Linux and Windows, for a CNI plugin specifically for the Azure environment itself. Once this is fully implemented, it will actually be a change in the way that we do the networking, in that we're not going to be using our transparent or NAT networking modes anymore.
B: I'm just going to keep going; I'm not able to bring it up right now. Basically, you have different types of resources, and on the right-hand side of the screen you see a bunch of the different availability sets that we talked about, the load balancers, the network interfaces, and the VMs, and all of this gets created for you when you launch the ACS Kubernetes instance.
B: What you're then able to do, through the public IP that's exposed through the Azure load balancer, is actually run kubectl commands against that cluster by accessing the Linux master node. So if I do a simple operation like getting the different nodes that are available, I see that I have this one k8s master node, which is a Linux node, and then two worker nodes (58f03..., and so on) that are actually the Windows nodes themselves.
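Roughly what that looks like from the local desktop (the node names and versions below are placeholders, not the exact output from the demo):

```
kubectl get nodes
# NAME                   STATUS    AGE       VERSION
# k8s-master-0           Ready     1d        v1.5.x
# k8s-windows-58f03-0    Ready     1d        v1.5.x
# k8s-windows-58f03-1    Ready     1d        v1.5.x
```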
B: I can also then launch services, and I believe I've already launched one service here. Again, this is all from my local desktop; I've just set up SSH tunneling to be able to connect to the Azure backend, or rather the Linux master node. So I have this basic Windows web server that's running with a cluster IP address of 10.0.220.x, and then the external IP address that's advertised through the Azure load balancer itself.
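Again illustrative (the service name, IPs, and port are placeholders matching the description rather than the real demo values):

```
kubectl get svc
# NAME            CLUSTER-IP    EXTERNAL-IP     PORT(S)             AGE
# win-webserver   10.0.220.x    <azure-lb-ip>   80:<nodeport>/TCP   1h
```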
B: I can then take a look and see what pods make up that service; right now I only have one, and so I'm going to try to go ahead and bring this up by accessing the service. Actually, before I go there, I want to show you the service definition in the YAML file. I'm sure you're familiar with this; the YAML kind of describes what that service is. It describes whatever ports need to be accessed; specifically, since this is a web server, we're running through port 80.
B: It also has the different container images that are going to be launched, and so the base image that's going to be launched is this Microsoft Windows Server Core image, and it's going to invoke this PowerShell command, basically, to start things running. So that's the specification file that I actually gave to create this particular service.
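A minimal sketch of that kind of spec (not the exact file from the demo; the names, replica count, and the elided PowerShell command are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver
spec:
  type: LoadBalancer          # external IP comes from the Azure load balancer
  ports:
  - port: 80
  selector:
    app: win-webserver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows   # schedule onto the Windows worker nodes
      containers:
      - name: web
        image: microsoft/windowsservercore
        command: ["powershell.exe", "-Command", "<start a simple web listener on port 80>"]
```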
A: Yeah, so your port, yeah: if you're internal and you're hitting the 10-dot-star IP address, then you'll be on 3460, right? Okay, but externally it looks like you're hitting the... so you basically got a single IP address for this specific service from the Azure load balancer? Yep.
B: Yeah, that's what we advertise externally. So if we went back to the Azure portal you'd actually see this IP address, but just for the sake of time I'm going to move on. What's happening, though, is it's basically redirecting those requests to the kube-proxies on the different boxes, the Windows worker node boxes. So if I bring one of those up, you can see some of this stuff that I was describing before, where we basically have this virtual network adapter called the forwarder adapter, with some IP addresses assigned to it.
B: Yes, and this answers some of the frequently asked questions about the two container endpoints and some of the stuff we're doing to resolve that in future changes. I kind of touched on DNS as well: some of the work we did in kube-proxy to make DNS work, not only appending those DNS suffixes to incoming requests to do the full domain name resolution, but also returning the answers, and then working with the Azure load balancer and kube-proxy to basically set up that load balancing on the service network.
B: I'm out of time, but I want to say just real quickly that we're also building some more capabilities with an overlay network driver that will allow you to do the same type of thing with on-premises-based deployments. You can, of course, do this today with a lot of physical network configuration, but we have an overlay network driver, which we're going to show in a future demo, that basically shows how you do this on-prem without needing to mess with any of the physical network.
B: Also, what's next? We definitely know that there are some gaps in the solution today, but there are opportunities for you to join us on this journey, as I mentioned at the beginning of the call. If you're interested in getting some of those builds, please do reach out to me, especially if you have any customers, or if you're just interested in understanding the direction that we're going; there probably will be a way that we can provide some Insider builds to you, to help us validate some of these solutions.
B: You'll also notice that we're on an increased release cadence, so instead of waiting multiple years for a new version of Server to come out, it's not going to be that long anymore; it's going to be several months. So not only are we going to be providing these kinds of Insider builds, or some mechanism by which you can try out these features during our development, but you're also going to have access to those builds at a quicker release cadence as they become available.
B: The APIs for creating the different ARM templates and resources are definitely publicly available. ACS Engine itself is a public project, and so you can definitely go on to the ACS Engine GitHub repo and see the commands and parameters we've used to hook up that load balancing and put all those different resources together. I don't think I have a link for that particular repo on my slides, unfortunately, but I can provide that to you offline.
B: And I want to say thank you, thank you to the team, especially the Windows networking engineering team; they've done a lot of this work and the content for us. And again, we'd love to be more a part of the community, and so I think you'll see us more active on the Slack group. I know Anthony and others are already there, but I think you'll see the Windows networking team there more frequently as well. So I look forward to chatting with you more.
A: Awesome, thanks Jason, thank you for the presentation, and thank you Antonia, Dinesh, and the rest of the team. This was very informative. Obviously, we look forward to continued engagement, and we look forward to seeing the network overlay driver and some of the CNI plugins.
A: As folks from Cloudbase have mentioned, these are an important building block for us to deliver additional solutions, whether that's Cloudbase delivering an OVN solution or other vendors coming and building on top of it to produce hybrid solutions across Windows as well as Linux. So this is a great building block, and we look forward to getting more details and getting to the point where these guys can get their hands on it.