From YouTube: VMware Enterprise PKS and VMware NSX-T
Description
VMware Product Manager Merlin Glynn provides an overview of the integration between VMware Enterprise PKS and VMware NSX-T.
Enterprise PKS is engineered to use NSX-T to deliver automated, application-centric security and networking policy enforcement, providing security and compliance for modern cloud applications right out of the box.
For more information about PKS visit:
https://cloud.vmware.com/pivotal-container-service
For more information about NSX visit:
https://www.vmware.com/products/nsx.html
Hi, I'm Merlin Glynn, and I'm a product manager here at VMware. In this lightboard session, we're going to be talking about PKS and NSX-T. We're not going to do a really technical deep dive; we're going to keep it at a mid-level and describe what actually happens within NSX-T in the way that PKS implements Kubernetes clusters.
The first thing we're going to do is look at a little bit of core routing topology. In your environment, you're probably going to have some core routing that already exists.
When we talk about deploying NSX-T with PKS, we're going to create something, an NSX-T construct, that is also a router, so a very equivalent capability, and we call it a T0 router. This is very much an NSX-T construct, and it links with your existing routing via a routing protocol or static routing. BGP is probably the more efficient way to have some dynamic routing between those two objects.
That's going to allow us to ingress and egress from all of the various Kubernetes clusters that we're going to be deploying with PKS. We're going to drill into a single cluster, but bear in mind, as we talk about this, that we could have multiple clusters all coming through this T0. Now, in the way this T0 is implemented, we're going to be talking about a couple of NSX-T constructs. One of those we just mentioned is the T0; we're also going to be talking about a T1, which is very similar.
The edge cluster is made up of VMs or physical machines that have had the edge software deployed on them. This edge cluster hosts all these virtual instances of multiple T0s, T1s, and load balancers, and it's implemented by anywhere from one to eight VMs or bare-metal hosts. So this is where our T0 is coming from: it's coming from this edge cluster, we've got it linked appropriately via the interfaces, and it's performing our routing to our core routing. So now we come to our developer. Our developer is going to be using PKS, and with the PKS CLI they'll kick off the creation
of our Kubernetes cluster, and we'll just call it my-k8, for Kubernetes. So when that cluster is created, one of the first things that gets done is that a logical switch gets created by PKS. That's an NSX-T construct, a logical switch, so we'll just use LS for it, and this one is going to be for our cluster nodes. In this cluster, just to keep it simple, we'll have three nodes: a master node and two worker nodes, and in this cluster they're
all going to be connected to this logical switch. That logical switch, being an NSX-T construct, is its own subnet. This logical switch will also have its own T1 router associated with it, and that T1 will link via a dynamic routing protocol to the T0. So this is how we get access into our master node, through our T1/T0 infrastructure, and we can do things like NAT services, SNAT and DNAT, and load balancing from here. We're going to talk a little more about load balancing in a minute.
So, in addition to this logical switch, we're going to have two other key logical switches that get created whenever our cluster-create command takes place. We're going to have another logical switch created for a namespace, and we'll just use NS for the namespace, and that special namespace is called pks-infra.
I'm going to go ahead and draw our NSX management interface out here. It would really be going through our routing, but it's easier for the diagram to draw it out here. Our NCP, the NSX Container Plugin, is going to take action once it finds something created inside of our etcd cluster. When etcd shows that something was created in our Kubernetes cluster, NCP goes out to NSX-T and creates all these other objects that we're going to talk about.
So NCP is really performing this nice role of almost being like a traffic cop: watching what's happening inside of our Kubernetes cluster and making sure that NSX is reactive to that, creating the correct objects that match what we've defined in our Kubernetes constructs. There are actually more objects created than these, but these are some of the key ones.
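That traffic-cop role can be caricatured as a tiny reconcile loop: Kubernetes-side events come in, and matching NSX-T objects get created. This is purely an illustrative mock of the idea; the function and object names here are invented for the sketch, not the NCP codebase or the NSX-T API:

```python
# Minimal mock of NCP's "traffic cop" role: Kubernetes events in,
# NSX-T object creations out. All names here are illustrative.
nsx_objects = []  # stand-in for the NSX-T manager's inventory

def reconcile(event):
    """Translate one Kubernetes event into NSX-T object creations."""
    kind, name = event["kind"], event["name"]
    if kind == "Namespace":
        # Each new namespace gets its own logical switch and T1 router.
        nsx_objects.append(("LogicalSwitch", f"ls-{name}"))
        nsx_objects.append(("T1Router", f"t1-{name}"))
    elif kind == "Service":
        # An exposed service gets a VIP on the cluster's load balancer.
        nsx_objects.append(("LoadBalancerVIP", f"vip-{name}"))

# Replay two events the transcript walks through: a namespace
# being created, then a service being exposed.
for ev in [{"kind": "Namespace", "name": "namespace-a"},
           {"kind": "Service", "name": "my-app"}]:
    reconcile(ev)

print(nsx_objects)
```

The real NCP watches the Kubernetes API server rather than consuming a hand-built event list, but the shape is the same: observe, then make NSX-T match.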
Now, the other logical switch that gets created: every cluster that gets created in PKS gets a load balancer, and so a logical switch also gets created for that load balancer. I should point out that each one of these logical switches that's created is actually consuming a /24 subnet from a rather large block that we have to support multiple Kubernetes clusters. On this load-balancing logical switch, you guessed it, we're also going to have a load balancer.
So those are some of the things that happen whenever we create a cluster, and some of the interaction with NSX-T. Now, if we go back to our developer: instead of using the PKS CLI, let's say he or she is using the kubectl CLI, and one of the first things we want to do is create a namespace.
So what happens whenever we create a namespace? Remember, we've got NCP watching what's happening here through our connection with etcd and making sure that we create the relevant constructs in NSX-T. We'll get another logical switch whenever we create our namespace. So we'll draw this logical switch and label it NS = A, for the namespace that's been created for us, and again it gets a /24, so we can host about 250 pods on it. As for the pods, we might have two pods running here.
So not only do we get these logical switches, but we also get another T1 connected to the logical switch, dynamically routing here. And as it plays out, if we decided to create another namespace, this task just repeats on and on: we'd have another T1 with another logical switch for namespace B.
So what are some of the other things our developer is going to want to do with the Kubernetes cluster? Run applications, right? We're not just doing this for fun; we're trying to run applications in production. One of the key things we need is security, and we also need access. So we need to expose a service, and we can expose that via ingress or a load balancer. What happens when we expose a service? Well, we might have these two pods running as a service.
These two pods are a unique application that we want to expose as a single unit inside of the cluster, and we call that a service, but we want to expose that service externally. So what happens whenever we use the kubectl API to expose a service externally is that NSX-T will actually map an external IP, a VIP or virtual IP, that's routable out here on our external segment, and that literally runs an ingress into our load
balancer object for our cluster. And then our load balancer object, through this T1 routing mechanism, because we can route from logical switch to logical switch, will actually be able to communicate directly to the back-end pods that are tagged when we created the Kubernetes cluster. So this is a pretty powerful thing, because whenever we're interacting with the Kubernetes API we can declare, in a declarative way, how we want to expose our application.
We declare the ingress and egress to the external world, and then we dynamically assign all these back-end pool memberships just by tagging pods. It's really simple: we don't have to know anything about interfaces or IP addresses, and it's really powerful. Now, in addition to wanting to expose the service, the other key thing that our developer might do through kubectl and the Kubernetes API is define policy,
say a security policy on this application. What we could do with that is, let's say we wanted to implement a policy that denied access between these two logical switches. We could actually write a bit of YAML that we feed into our Kubernetes API that says: hey, I want to deny, based on pod membership, from this namespace to this namespace, and implement a distributed firewall dynamically in our NSX-T infrastructure that prevents that. We can tag on namespaces, we can tag on pods.
We can also tag on IP blocks, so we can even control ingress and egress patterns inside and outside of the cluster. It's a really powerful mechanism: we can be declarative in the way that we do our security and implement some pretty powerful container-aware security constructs. So that's a high-level view of what NSX-T does with Kubernetes and with PKS, and how it's implemented so that we can achieve Kubernetes at scale, in production, with dynamic security. Thanks.