From YouTube: vSphere Integrated Containers (VIC) Overview
Description
Senior Technical Marketing Architect Patrick Daigle provides an overview of the major components in vSphere Integrated Containers (VIC).
See related videos below:
vSphere Integrated Containers Networking Overview
https://youtu.be/QLi9KasWLCM
vSphere Integrated Containers Storage
https://youtu.be/WrVHbjTZHrs
So what I'm going to do today is give you an overview of the different components that make up the solution. Then we'll go over some of the steps that are necessary to install and configure the solution, and at the end I'll go over some of the workflows showing how your end user or your developer would actually use it. Throughout all of this, I'll be talking about the different personas who are involved in setting this up, managing it, and consuming it in your environment.
So with vSphere Integrated Containers, we start with an existing vSphere environment. We don't need to dedicate any infrastructure to this; you can set it up right in your existing vSphere environment. We support running on vSphere Enterprise Plus and on vSphere Remote Office Branch Office (ROBO) Advanced, and the reason for this is that we have a dependency on the vSphere Distributed Switch. This will also work with NSX, so it can also consume NSX logical switches.
Logical switches actually provide a very elegant solution to the isolation of the bridge network, and then you're able to leverage any other NSX constructs as well. So you get all the benefits of the distributed firewall, which means you can provide micro-segmentation for the container images that are instantiated on vSphere Integrated Containers.
I'm showing an embedded PSC in this picture. You can use an external PSC as well; I'm just using an embedded one for the purposes of this example. So we start with the vSphere infrastructure, and the first persona we're going to talk about is the VI admin. The VI admin is responsible for the initial installation and setup of the vSphere Integrated Containers solution, and the first thing they'll want to do is deploy the VIC OVA.
After the initial boot-up, you need to initialize the OVA. What this means is that you connect it to the vSphere Platform Services Controller, and we use this integration to provide role-based access control for the users we're going to define in the VIC solution. The users and groups come from the PSC, so if you have AD or LDAP integration configured in the PSC, those users and groups become available in the vSphere Integrated Containers solution as well, and you can start assigning roles to them.
Once this is initialized, the appliance will come up and you'll get a number of services. There are a couple of key services in the VIC solution. The first one is the management portal, which is based on Project Admiral, a VMware open-source project that provides container management. This gives you visibility into your container hosts, your actual containers, your container networks, and your container volumes, and it also lets you deploy infrastructure and manage users, projects, and things like that. We'll get more into that when we go through the other personas.
The other key component is the registry. Most customers starting their container journey realize very quickly that they need an internal, on-premises place to store their container images securely, so we have a built-in registry as part of VIC.
That's Project Harbor, another open-source project by VMware, and this registry provides a lot of the controls you need to run containers in an environment that's been operationalized for production. What do we mean by that? We mean that you get things like built-in replication, so you can replicate your images to a remote site, either just for data protection or if you need data locality, things like that.
The third thing is content trust, which allows us to sign images using certificates to prove that they're coming from a trusted source. There are many other controls here, and there are videos out there about Harbor and what it provides, so we're going to stick with these for now, but it provides a lot of the controls you need to run this in a production environment.
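On the client side, the content trust signing described above is driven by a couple of environment variables. A hedged sketch, not from the video: the registry hostname, project name, tag, and Notary port below are all placeholders.

```shell
# Enable Docker Content Trust so pushes are signed and unsigned pulls are refused.
export DOCKER_CONTENT_TRUST=1
# Point the client at the registry's Notary service (Harbor can expose one).
export DOCKER_CONTENT_TRUST_SERVER=https://registry.example.com:4443

docker push registry.example.com/myproject/my-image:1.0   # image is signed on push
docker pull registry.example.com/myproject/my-image:1.0   # unsigned images are rejected
```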
The last component is the VCH creation wizard for provisioning the virtual container host. This is a workflow-based tool that walks you through the different steps to configure your vSphere Integrated Containers container hosts. If you've done this in the past, prior to the VIC 1.3 version, you know that you had to use the vic-machine command line to create the virtual container hosts, and it was not very user friendly.
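For reference, that earlier command-line workflow looked roughly like the following. This is a hedged sketch: the flag set varies by VIC version, and the vCenter target, cluster, datastore, and network names here are placeholders.

```shell
# Create a virtual container host from the command line (pre-1.3-style workflow).
./vic-machine-linux create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name vch01 \
    --compute-resource Cluster01 \
    --image-store datastore1 \
    --bridge-network vic-bridge \
    --public-network 'VM Network' \
    --no-tlsverify
```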
The VI admin maintains a lot of control, because they decide which resources are presented to the Docker API (which we're going to see in a few minutes) and how, and then the developer gets a native experience in that they can use the same open-source APIs and tools that they've been using in the past. So, using the HTML5 plug-in, the VI admin is able to take the last step in setting this up, and that's deploying the first virtual container host.
So what is a virtual container host? Well, it starts with a resource pool. We use a resource pool because it gives us a way to group the container VMs with their container host, and it also provides a very flexible resource-management construct: we can expand it if we need more capacity, or restrict it if the container host is taking up too many resources and impacting the other workloads running in the vSphere environment. So it provides a very flexible mechanism.
One thing to note: a resource pool means I'm using DRS, and we actually make extensive use of DRS to do the initial placement of the container host and any container VMs. In the ROBO Advanced use case there is no DRS, so what we did is use an inventory folder to group these container VMs together with their associated container host, and then the VCH has a very simple algorithm that it uses to make intelligent placement decisions. Really, what we're doing with this is just avoiding making a bad placement decision.
So we start with a resource pool, and we provision one VM: that's the VCH VM. One of the key things the VCH VM gives me is the Docker API. We're exposing a Docker-compatible API that can answer regular Docker CLI or Docker API calls, and you can secure this, if you want to, using TLS.
We can use TLS for encrypting the communications to and from the Docker API, and we can also use TLS client certificates to restrict who can access that remote API.
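With full TLS verification enabled, a client call against the VCH endpoint looks roughly like this (a sketch: the hostname, port, and certificate file names are placeholders):

```shell
# Talk to the VCH's Docker-compatible API over mutually authenticated TLS.
docker -H vch01.example.com:2376 \
    --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=cert.pem \
    --tlskey=key.pem \
    info
```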
So now we're done setting up the vSphere Integrated Containers environment. Once this is all available and set up, this is where the next two personas come into play: the solution is typically configured and managed by your cloud and DevOps admins.
The cloud and DevOps admins connect to the management portal, where they can perform a few tasks. The first thing they need to do is set this up and set some users up, so that a developer can actually come in and use it. So the first thing we're going to talk about is the project.
This construct of the project that we have in the VIC appliance should really be seen as a unit of tenancy, or as a boundary. We assign users, infrastructure, registries, and policies to projects, so that projects can be used to create boundaries around different teams within your organization, or to create boundaries between development and production, for example.
Where the cloud admin's responsibility is to manage the appliance as a whole, the DevOps admin's responsibility is to manage the individual projects: DevOps admins can manage users within projects, manage the registries, and manage the policies within the project. So again, the first thing the cloud admin will do is create a project and assign a DevOps admin user, or users, to that project. One thing to note about cloud administrators: when you first deploy the VIC appliance, we assign the cloud administrator role to anybody from the Platform Services Controller who is part of the administrators group, so it ties in well with whatever groups you've already created in your Platform Services Controller.
Once I have some users defined, the next thing I want to do is assign some infrastructure. I need some runtime, right? I need a place to actually run my container images.
This is where we start connecting the container hosts that I'm creating to the management portal, so they can be managed: I can view what they're running, and I can actually start scheduling, or instantiating, container images on these container hosts. After I have my infrastructure, I need to configure my registries. As I mentioned, we have the built-in registry, Harbor. Every project gets a dedicated namespace, or repository, within the built-in Harbor, so you can use that as your internal on-premises registry.
Once we have the registry set up, the last thing we can set up are the policies. I can set up policies, for example, to have replication done in my environment, and I can set up policies around vulnerability scanning: we can prevent images that are found to be vulnerable from being pulled and run from the registry, and we can similarly prevent images that are unsigned from being pulled and run from the registry.
So once this is set up, once your project is set up and you have users assigned, that's where your developer can come in and start using this solution, and we're going to look at what the developer workflow now looks like. How does a developer actually access this and use it to instantiate their containers? Let's look at it from the developer's perspective. The first way the developer can consume this is by connecting to the management portal, so the developer can come in and connect to this web portal.
From there, they'll get a view into what images are available in a registry, and they can actually deploy those images from that interface. Let's do a simple container image deployment from that interface, using a container image called my-image as an example. The developer comes into the management portal, sees that my-image is available in the registry, and initiates a deployment. The VIC appliance will talk to the container host, the container host will pull the my-image image from the registry, and then it will instantiate the container.
One of the key differentiators of vSphere Integrated Containers, when you compare it to other engines, is how the image is instantiated. Where Docker, for example, will take the Docker image and use namespaces and cgroups to build a Linux container, with vSphere Integrated Containers we use the same Docker image, the same building blocks, the same layers, but instead of using namespaces and cgroups we actually build a VM using native vSphere constructs.
This container VM (we call it a container VM to distinguish it from regular VMs) is a very lightweight VM running a very small Linux image to bootstrap it, and then inside of that Linux environment we start up your application; in this case, the application is called my-image. VIC is strongly opinionated about having this strong isolation model, meaning that every container image gets instantiated inside of its own VM. Some of the benefits are that you get the VM as a security boundary: vSphere VMs are well recognized as a security boundary, so every container gets its own runtime that runs separately from the actual management plane. That's another key differentiation: the management plane here is running completely separately, so you don't run into as many issues with isolating containers from each other or isolating containers from the container host. In this model, every container is instantiated inside of its own runtime.
So you get all the benefits of the VM as a security boundary, and from a scheduling perspective you get all the benefits of running on top of vSphere. vSphere is very, very good at scheduling workloads, and because we're provisioning these as VMs, they're first-class citizens, and vSphere can work all of its magic, whether with the Distributed Resource Scheduler, HA, or any of the distributed services.
The other thing I can do from a developer perspective, when I'm accessing this management portal, is use the application template format in the management portal. This allows me to create configurations that are a little bit more complex, that are made up of more than one container and require certain network constructs or certain volumes. I can capture that as a single unit and then deploy, or instantiate, it as a single unit. There are two ways to do this.
The first is the built-in graphical designer, where you have a canvas and you can assemble the different container images with their networks and volumes. The other option is that you can export and import YAML files as well: you can export the portal's own template format, or you can import an actual Docker Compose file.
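A multi-container template of the kind described above can be expressed as a Docker Compose file. A minimal sketch, with placeholder image, network, and volume names:

```yaml
version: "2"
services:
  web:
    image: registry.example.com/myproject/my-image
    ports:
      - "8080:80"
    networks:
      - app-net
  db:
    image: registry.example.com/myproject/db
    volumes:
      - db-data:/var/lib/data
    networks:
      - app-net
networks:
  app-net:
volumes:
  db-data:
```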
So this graphical way of accessing the environment is interesting and useful, but in a lot of cases, when I talk to customers, what they're really looking for is a programmatic way to consume this, and that's where exposing the Docker API becomes very interesting: my developer can come in and talk directly to the Docker API using the familiar Docker command-line interface or the Docker API itself.
The Docker command-line interface is the docker command: docker pull, docker run, these types of things that you can run against this Docker-compatible API, and then you can use the Docker API as well. So if you've built any type of automation tooling into your software development lifecycle, or anything else that makes use of the Docker API, you can point it at this remote API endpoint and it'll work.
So how do I interact with this? Well, I interact with it using Docker: docker pull, for example, if I want to pull an image from my registry, and I can also do a docker run. Again, let's take the example of my-image.
I do a docker run my-image directly against this Docker API; the Docker API will pull the image from the registry, and a container VM will get created running my-image. You can see now why we use this resource pool (or inventory folders, in the case of ROBO) to group these together: I want to keep the container VMs that belong to this container host in some kind of logical grouping. This is why this becomes important.
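The pull-and-run workflow just described maps onto ordinary Docker CLI calls pointed at the VCH endpoint. A sketch, with placeholder hostname and image names:

```shell
# Point the Docker client at the VCH's API endpoint.
export DOCKER_HOST=tcp://vch01.example.com:2376

docker --tls pull registry.example.com/myproject/my-image
docker --tls run -d --name my-app registry.example.com/myproject/my-image
docker --tls ps   # the new container VM shows up like any other container
```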
Once this is up, I can use other commands like docker exec, if I need to interact with this container, or docker cp, for example, to move data in and out of the container. All these commands are available directly through the Docker API, so we're really creating a Docker-native experience for our developer, and that's one of the key benefits here.
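Those day-two commands look the same as they would against any Docker engine. A sketch, assuming the my-app container from the earlier example and placeholder file paths:

```shell
docker --tls exec -it my-app /bin/sh               # open a shell inside the container VM
docker --tls cp ./config.json my-app:/etc/app/     # copy a file into the container
docker --tls cp my-app:/var/log/app.log .          # copy a log file out
```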
As we've seen from the VI admin perspective, they deploy this using familiar vSphere tools and constructs, and they manage it using familiar vSphere tools and processes.
All of these things are first-class citizens in the vSphere environment, so the VI admin, on a day-to-day basis, can see what container VMs are running and who started them; they can see what resources are being used, and they can control the amount of resources through the resource pool construct, and so on.
So from the VI admin's standpoint, we're creating a vSphere-native experience, and from the developer's standpoint we're creating a Docker-native experience, where they can consume this using the familiar open-source Docker CLI and API if they choose to.