From YouTube: Kubernetes UG VMware 20200206
Description
February 6, 2020 meeting of the Kubernetes VMware User Group. This meeting covered storage for persistent volumes, Pod scheduling interaction with vSphere VMs, and running Kubernetes on VMware desktop hypervisors.
A: Okay, we're recording, so hi everybody. I'm Steve Wong and my co-host is Myles Gray. We're co-chairs of this user group, and we're also joined by the user-group leads Joe Searcy of T-Mobile and Bryce Shepherd of Walmart. I posted a link in the chat to an agenda document, and if you like, you can leave your name in the list of attendees. It isn't in the intro, but Robert suggested we go through introductions, so we'll do that first. Then we'll move on to an overview of first-class disks in CSI, and finally Robert put a topic on the agenda about KubeCon Amsterdam: who's going to be there and what's of interest. So that said, let's start with some introductions. I'll start with myself: Steve Wong, I've been working on Kubernetes since 2016.
B: Sure, thanks Steve. I'm Myles Gray, I also work for VMware, as part of the HCI business unit, so my primary focus is storage and Kubernetes storage. I've been working with Kubernetes not as long as Steve, but since about 2017, and I'm fairly adept with most parts of it. So if you have questions, in particular on the storage integrations, I can probably answer them. My role at VMware is technical marketing, so my job is essentially to talk to engineers.
D: 2016 I think, or it might have been 2017, but it's been a few years now. We're running our edge Kubernetes clusters on VMware. Similar to Joe, my team is responsible for building and maintaining the Kubernetes platform here at Walmart. I'm specifically one of the architects over our edge clusters; by edge I mean small clusters that we run in our stores or smaller remote sites such as our distribution centers. So that's a quick update on me.
G: Yeah, hi, I'm Robert, you may know me from Twitter. I work for ITQ, which is a VMware partner in the Netherlands. I'm extremely new to the whole cloud native space; my background is in infrastructure and a bit of storage. I'm very interested in community, and in helping people make the journey to cloud native and everything that entails. So that's my main reason for being here, but I'm still very much a Kubernetes noob.
A: Thanks, Michael. By the way, I know there was a recent beta of support for Kubernetes with Fusion. If you're prepared and you want to put it on the agenda, maybe if we have time we can get to that during this meeting, because I think it might be of interest.
J: Once again, at VMware. I've been doing touch points on Kubernetes same as Myles and Steve have for the past few years. In 2018 and all of 2019 I was part of SIG Release, the Kubernetes release team, for pretty much four releases across those two years, and I'm also involved with a commercialized product that will be coming out. So that's really where my interest is here, just to kind of see.
L: Hey, Keith here from VMware, also tech marketing, in the MAPBU as some here would know. I've been doing Kubernetes since 2017, which fits with what the goal is here with Kubernetes on VMware infrastructure. Back then I was working for Dell Technologies doing reference architectures for Kubernetes, but now I'm with VMware.
A: Let's move on. First on the agenda was an overview of first-class disks in CSI. If you look in the agenda notes document, I put a bunch of curated resources on these, so we will not go into a super deep dive here. A year ago there wasn't much material, and thankfully now there is: the Cormac Hogan blogs are good, and recently first-class disks and Cloud Native Storage were added to the official docs.
B: Yeah, just to expand on what Steve was talking about with FCDs and their origins: it was originally built for a feature of a product called App Volumes, for VMware View. It was basically to allow you to put apps in individual VMDKs, and then if you needed them in a guest OS they would mount those, so they needed to persist across a lifecycle that was not associated with the VM.
B: So we decided to reuse that first-class disk concept for a number of reasons, the primary one being that it's a global catalog that is not tied to a VM. If you create a first-class disk, there's a record of that VMDK in the vCenter database that uniquely identifies it.
B: Now, if any of you have worked as a sysadmin in the past, you'll know that whenever you delete VMs that have VMDKs that are unattached, you end up with orphans on your VMFS and NFS datastores, and it's a real pain to try and clean that stuff up. FCD does away with that problem, because we have a database that keeps a record of every single one of these FCD volumes provisioned from a vCenter. So we took that concept and we thought, well:
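As an aside, that first-class-disk catalog can be inspected directly with the `govc` CLI; a rough sketch (the vCenter credentials and datastore name are placeholders, and flags may vary by govc version):

```shell
# Sketch only: GOVC_URL and datastore values are placeholders for your environment.
export GOVC_URL='administrator@vsphere.local:password@vcenter.example.com'
export GOVC_INSECURE=1

# List first-class disks registered in the vCenter catalog on a datastore
govc disk.ls -ds my-datastore

# Create a 10GB FCD by hand (the CSI driver normally does this for you)
govc disk.create -ds my-datastore -size 10G my-fcd-volume
```

Because every FCD is a catalog entry rather than a bare file, deleting the catalog entry cleans up the backing VMDK, which is what removes the orphan problem described above.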
B: If we were going to use it for App Volumes, we could use it for Kubernetes volumes as well. Now, the first implementation we ever did of any kind of storage for Kubernetes was not CSI based and was not FCD based, and that's probably what most of you are using today. If anyone has used Red Hat OpenShift, Google Anthos, Enterprise PKS, Rancher, any packaged Kubernetes solution today, it uses the VCP, the vSphere Cloud Provider.
B: You can install the CSI driver if you want, but out of the box almost all of them use the VCP, so they don't use the first-class disk concept. It's just a standard VMDK created on a datastore and mounted into a VM, and we don't have the global tracking for it. So with CSI, and with CNS being a core feature of the vSphere platform, we decided that we wanted something a little bit more robust, to make it easier to operationalize.
A: That requires vSphere 6.7 U3, and I think the reason a lot of these distros are still back on the older interface is that realistically there are a lot of enterprise users with three-to-five-year life cycles: they get their licenses for the hypervisor at the time the hardware is purchased. If you're walking in the shoes of a Kubernetes distribution, you might have customers that can't move, that aren't on 6.7 U3 and have difficulties getting there quickly. So this is something that's in transition.
A: If you're putting in a new greenfield deployment, I'd be looking at 6.7 U3 and first-class disks, but for now it's okay if you stick with the older stuff. Looking down the road, say a year or more from now, at some point Kubernetes itself is getting rid of the in-tree storage drivers and the in-tree cloud provider, and you need to be looking at transition plans if you're on that older stuff. But it's probably not an emergency. Back to you, Myles.
B: Just to add a bit more history to it as well, because like most things, it's not just as simple as I made it sound. We had the VCP to begin with, which supported versions way back, I think to 6.0, because 6.0 didn't have the FCD concept. We introduced FCD for VMFS and NFS in 6.5, and then for vSAN in 6.7 U1.
B: So there was a project that came out of MAPBU, the Modern Application Platform BU that Andrew and Keith work for, and they built an implementation of CSI based on the FCD API set that existed there. That worked from 6.5 for VMFS and NFS, and 6.7 U1 for vSAN. Now, when we built the CNS feature itself, we wanted to add a new API in front of FCD, because from our perspective in the HCI BU, the FCD API didn't have all of the stuff that we wanted in it.
B: So we put a new API in front of it called CNS, and we abstracted away the backend FCD API. Essentially what this means is that any calls now, in 6.7 U3 and the current upstream version of the driver, go into CNS, and CNS does all the backend work. That means we can swap out backends: we don't need to use FCD in future, or if we wanted to add, say, file services backends later, we can do that without having to significantly refactor the upstream CSI driver.
B: So it's that CNS feature, and the current upstream driver, that requires 6.7 U3. Like Steve said, there is a period of transition here. People don't just upgrade their vSphere environment overnight, and we're very conscious of that, but we're hoping with the new releases of vSphere coming out in the near future that the cadence of people actually updating will become a little bit more regular.
H: Can you talk a little bit about where you are and your support plans for the new snapshot mechanism in Kubernetes, and how that's going to trickle down to the CSI driver? Is that functionality there today, and if it is, is it considered stable? Just a little bit around that, because I know a lot of users are curious; it's one of the more intriguing features of the latest Kubernetes releases.
M: So the snapshot feature went to beta in Kubernetes 1.17, so that API is stable now. Of course we want to get feedback, and if there are any bugs we need to fix them before going to GA. The next question is when we are going to get beta snapshot support in our driver, the CSI driver. That is still work in progress, and I actually cannot really give you a time when that will be released.
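For reference, the beta snapshot mechanism in Kubernetes 1.17 is driven by manifests like the following (a sketch: the class, driver, and PVC names are illustrative, and as noted the vSphere CSI driver did not yet support it at the time of this meeting):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass       # illustrative name
driver: example.csi.driver      # the CSI driver backing your PVCs
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot             # illustrative name
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: my-pvc   # the PVC to snapshot
```

The snapshot controller and CRDs ship separately from core Kubernetes, which is why driver support can lag the API going beta.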
A: Then you typically hook it up to use with backup tools like Velero, for example, and that stuff is out in the open and not gated by product releases. I know there was a session on it at KubeCon North America back in November, so let me find a link to that and I'll put it in the notes.
M: Volume expansion actually should be there sometime in April, when we have our CSI 2.0 release of the driver, the one that is available on GitHub. When we have that release we should have expansion, and that's the thing we have been testing, but we only support offline expansion, so you just need to be aware of that.
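Offline expansion in practice means the volume must be detached while it grows. A hedged sketch with made-up names (`my-deploy`, `my-pvc`), assuming the StorageClass sets `allowVolumeExpansion: true`:

```shell
# Scale the workload down so the volume detaches (offline expansion only).
kubectl scale deployment/my-deploy --replicas=0

# Raise the PVC's storage request; the CSI driver grows the backing disk.
kubectl patch pvc my-pvc \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Scale back up; the filesystem resize completes when the volume reattaches.
kubectl scale deployment/my-deploy --replicas=1
```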
B: Yeah, if you have a burning need for CSI resize, resizing volumes, there'll be something there in the near future. But there are challenges around doing online disk expansion and that kind of thing that we need to remediate first, if you want true end-to-end support where it does everything for you.
B: Cool, yeah. It's only the cluster it actually interacts with that needs to be at 6.7 U3. Usually whenever I talk to customers they're setting up new environments for this stuff anyway, so it's not that big of a deal, or they'll just set up an extra cluster in their existing vCenter, upgrade the vCenter, and have that cluster itself as a pocket of 6.7 U3 alongside all of their other infrastructure.
A: There is an extra-cost pre-event called Kubernetes Academy sponsored by VMware. I'm going to tell people that it's good material to teach you a Kubernetes intro, but it is not vSphere specific in any way. Don't assume that just because it's presented by VMware it's focused on vSphere: it's useful training on Kubernetes running anywhere, in a public cloud, on vSphere, or not on vSphere at all, it's that generic. There are also sessions by VMware people on buildpacks, which once again are platform neutral.
A: We only get thirty-five minutes, so probably not a deep dive, but we'll go into something. I'm also thinking of having a face-to-face lunch at the event for people who are there in person. That could be as informal as getting some signs made, because these conferences, the KubeCons, usually have their own lunch, and I don't want to commit to going out to a restaurant because I'm not familiar enough with the area to know if there's anything nearby. So we're likely to have something, but what exactly is TBD. As for the other sessions there:
A: There are sessions on the cloud providers. If you're using a distro, maybe getting into the cloud provider is something your vendor took care of, but if you're going kubeadm or doing your own installs, that's good material for getting a deep-dive perspective on what's really going on under the covers, and it might be essential if you're growing your own rather than using a distro. Anybody else have any thoughts on things they know of that are noteworthy at KubeCon?
A: And then Robert is organizing a vBeers event. I think longtime vSphere users are familiar with these vBeers events, but maybe people coming from the other direction, first discovering Kubernetes and then electing to run it on top of vSphere, have never heard of them. Robert, maybe you could describe this, and, even as a local in Amsterdam, describe what these are like in your community.
G: So vBeers is a very informal get-together, usually at a bar or a cafe, to meet each other, network, swap war stories, and just get to know each other. It's just a community-building activity. The one I'm doing at KubeCon will be at the conference center itself; Amsterdam has five or six on-premises bar-slash-restaurants, and I'm organizing it at one of those. But there's a little risk involved, because I don't know how busy it will be. The invite link is up to 15 people now, so that's quite nice. The vBeers events in the Netherlands are usually not very big. I think the biggest one is in Amsterdam, held every few months, not organized by me; about 20 people can show up to that.
G: There's one in The Hague, which is usually only about eight of us, and there are one or two others in the country. But the reason for wanting to do one at KubeCon was seeing a lot of people from the VMware community and the vExpert community indicating interest in Kubernetes, in the CNCF, and in the community around that.
G: So this seemed like perfect cross-pollination timing, but it's probably going to be pretty busy at the conference center over those days, especially on the Tuesday. I'm not planning on moving it; I think we'll just have to wing it and see how busy it is. If it means we're all standing around with a beer in our hands, that's probably fine too.
B: Given this is sort of our first or inaugural meeting, and most of you are kind of new to this space, or at least some of you are, and maybe just want to get up and running with Kubernetes on vSphere: if you just want to try it in your environment, kick the tires and see how it works, probably the easiest way is a project Andrew has been working on, along with another guy called Andrew Kutz, called Cluster API for vSphere. That's without a doubt the easiest way to get Kubernetes set up and installed on a vSphere environment. I threw a link into the agenda; I've written a blog on how to use it and how to get started. So if you just want to kick the tires on Kubernetes with vSphere, that's where you want to go and check it out. But given we've got a bunch of time left as well, and Michael's here: Michael, do you want to chat about Kubernetes on the desktop?
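For orientation, standing up a cluster with Cluster API for vSphere looks roughly like this (a sketch: the clusterctl syntax has changed across releases and the names and versions below are placeholders, so treat the blog post linked in the agenda as authoritative):

```shell
# From a management cluster (e.g. kind), install the vSphere
# infrastructure provider.
clusterctl init --infrastructure vsphere

# Render a workload cluster manifest and apply it.
clusterctl generate cluster my-cluster \
  --infrastructure vsphere \
  --kubernetes-version v1.17.2 \
  --control-plane-machine-count 1 \
  --worker-machine-count 3 > my-cluster.yaml
kubectl apply -f my-cluster.yaml
```

The provider then clones VM templates in vCenter and bootstraps the nodes with kubeadm on your behalf.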
I: ...the idea is to be modern-application-builder friendly, and the way that we do that is by leveraging sort of the best components in the open-source community and building that on top of our hypervisor. We had our initial release, like I said, a couple of weeks ago, and right now we are able to run containers: we can pull, push, and run containers, and build is coming very soon. That's sort of where the technology is at. Under the hood we're leveraging containerd, and we're leveraging runc.
I: We have stuff that we had built for the Project Pacific work over on vSphere, and we've merged all that basically into the Fusion and Workstation codebase. Interestingly, if folks don't know, Fusion and Workstation are actually the same hypervisor code as ESX: same monitor code, same scheduler.
I: We have one team working on it all together, and then we have a separate team on top of Fusion that creates the UI layer. That's where we're working on the UX around what we're calling vctl, the ability to manage these containers and operate these new things that needed an interface. As it goes on, we want to look at it more inside the UI with easy buttons and whatnot, but step one: make it run containers. Step two is working with the open source community, in particular kind and the folks over in SIG Cluster Lifecycle. We've been talking to Ben Elder and some of the other folks over there about how we can think about the provider concept as it applies to kind, and maybe there's a variety of different runtimes that make sense. We're going to see what we can do to work in concert there, but ultimately I would like to see a story whereby, hey, if you have kind installed, you can point it at Fusion or Workstation with a flag and away it goes. Otherwise, if you don't, we would like to have that easy button for folks that don't want to go through all that sort of manual work, even just a "here's my Kubernetes environment". Anybody want to see a little bit? I could show a quick demo. Anyone? I've seen head nods.
A: Right, cool. The other thing is, I went and looked at it a few weeks back when there was a little publicity coming over Twitter with a link, and it sort of implied that it was for Fusion only. But for people running Workstation on Linux or Windows, you're saying this stuff is coming there too?
I: That represents a tiny number of our users, but it's incredibly important, so we're trying to figure out the right thing there. We didn't get the Windows binary done in time for the tech preview, but when we do the release for this, which isn't going to be very far from now, it'll be included on Windows. I don't think I can share my screen unless someone wants to add me as a host. Ok, let me try that. Thanks, Dee.
I: So one of the things we're doing that's a little different is we're leveraging GitHub. We want to be more open and be where folks are, so it just made sense to put our documentation there, and now that GitHub has GitHub Pages, we put a nice little landing page up for it. From there you get the tech preview: it's a direct link, it installs, and everything is self-contained inside the binary.
I: It's also licensed so you can just keep running it; the license is going to be good until closer to the end of the year, and it's going to be hard for folks to let it expire given the number of releases we're going to have. Within the GitHub repository, inside the Nautilus repo, we have our docs. There's our getting-started guide, which basically covers the typical use cases that we're currently supporting.
I: But if you want to get into the thick of it, you want to look at our man pages; we have a page for all the different commands, how they work, and things like that. We're accepting issues and pull requests, so please share your thoughts, we're all keen. As far as getting it running: like I said, we created a tool called vctl.
I: So if I do a vctl status: I am currently running the container runtime, I turned it on a moment ago, so here it is running, just kind of natively. That gives you an idea of what it does and the different commands that we have. With respect to containers, we can delete them, describe them, run commands within them, shell into them, and list them.
I: We can also start, stop, run, and check the version, the typical things that you would want to do with a container. What's different about it is the approach to how it actually goes ahead and creates the container. We imported some of the work that we had done in Project Pacific for the pod VM concept: it's this tiny little Linux virtual machine, based on Photon, that is purpose-built for this.
I: It's got almost nothing in it except for the kernel, so that we can expose cgroups. Then we have a CRX runtime in front of that, which is like a derivative of runc, and in front of that we have our implementation of containerd, shimming along the way. So let's run a container. I'll list my images; these are the ones I've got, and I've created one and given it a tag here. I'll show you the tag command: you tag the image's full path to something nice and simple, so we don't have to type out the full path of the repo where the image is being stored, because we don't assume Docker Hub; we let folks specify whatever registry they want. So we would do, for example, a pull of the image right here, and I've already got it, so it's just going to quickly unpack the layers and rebuild the image.
I: So we're going to run a container. This is an alias, so "container" or "c" wouldn't matter. We're going to give the container a name, my-www, the image we're going to use is a Hugo one, and we're going to run it detached. Once it launches... what it's doing right now is firing up a virtual machine, a very tiny one.
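Reconstructing the demo as a rough command sequence, using only the subcommands mentioned here (a sketch from the tech preview: the flags, registry path, and container name are illustrative and may differ in the shipping version):

```shell
vctl status                                  # is the container runtime up?
vctl images                                  # list local images
vctl tag registry.example.com/me/hugo hugo   # short alias; no Docker Hub assumed
vctl run -d --name my-www hugo               # run detached; boots a tiny Photon pod VM
vctl get containers                          # name, image, command, and the VM's IP
vctl execute my-www /bin/sh                  # shell into the container / pod VM
```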
I: lsc, which is the same as vctl get containers; aliases are wonderful. We can see the name of it, the image it's using, the command that it's running, which is just some nginx, the IP of the VM, and notice there's no port mapping here, and when it was created.
I: And this gives us a bit more detail, and we can even very quickly shell into it. For example, if I just copy that and paste, now we're shelled into the container host, the pod VM. If you look at the kernel you can see it's a Photon derivative, and if you look at the process table you can see our processes: the nginx stuff, our CRX runtime.
I: Here we have this thing called the Spherelet agent, which is like a kubelet, so some of the primitives that we put in Project Pacific start to show up here. And not much else: some IRQs and a lightweight version of VMware Tools.

B: Does this runtime environment adhere to, like, the CRI spec?

I: Absolutely: it's OCI for the container images, and for the runtime itself we're going to try to go through the process to get it officially certified, but yeah, that's exactly it.
I: It's a good question. I think there's some open discussion around how to evolve this. I would very much like it if it just knew the type of object we're giving it and did the right thing. As the set of objects we can manipulate with vctl grows, it might make sense to specify the type, but I think there should also be some defaults. So I'd be very open to adjusting some of the syntax of how it works, because it's tough: we're trying to combine a little bit of Docker, but it's also doing a few new things, and part of it as well is just how we're assembling it as we go. In terms of implementation it's a Go project, so all the subcommands are modular, and we can absolutely evolve it.
I: We've sort of been using the terms interchangeably, but yeah, it's like one container, one implementation on cgroups, that is running on the sandbox virtual machine. But absolutely, we want to be able to give folks control. I can see a future where we have a way to describe a series of containers all within one file, and some command-line options that say, hey, just keep using this one VM, it's safe, or specify some other combination of one-to-many.
I: Exactly, and we're very open to figuring this out the right way. Docker, for example, made a lot of opinionated decisions; some of them we're familiar with and happy to keep, and for some of them we think there might be a better way. We very much want to have this as an open dialogue with the community and get that feedback loop closed, as opposed to just coming in and saying this is how we're doing it and we're super opinionated.
I: What we're trying to do is be able to control VMs, containers, and Kubernetes clusters in a uniform way. There's one way to do that in terms of how we're going to be interacting with vSphere, which is through kubectl, but on the desktop that might not make the same kind of sense. So we're open to suggestions there.
I: It's actually running on a NAT network, or rather a custom vmnet. If I were to actually look at it, it would be vmnet10, and you can assign other VMs to those networks and then just open them up. So this is accessible from my host, because it's just one of my VMs, right?
I: Showing it to you here: this is the network that it created, and if you were to look at the VM, which shows up here, we obscure it from the library for now, but you'd see that the NIC has this network attached to it. In here you can do cool things like set MTUs.
A: It's kind of cool to have these multiple virtual networks, because I've played around, for a demo for a talk, with getting Istio to run, not on this, but on the last-generation desktop hypervisor. Are there people toying around with service mesh and getting that running in this environment, to get some driving and test time on it?
I: So there's some storage here and some config, and if you look inside the folder itself there's the containerd stuff as well, and the mount folders. Right now there's a hacky way of doing it; I'm going to do a write-up on how to do that. Ultimately, we have the host-guest filesystem as a proprietary VMware thing, but we also have 9P FS working.
B: Having that come out of the box is part of the roadmap. Just to add a bit more color to that as well: Fusion isn't the only thing that uses 9p in the VMware stack. There's some upcoming stuff that you'll see in the near future that uses 9p for backing datastores on vSphere itself, and that's because it's a zero-copy data path. Essentially it's just a mapping straight through: no hops, no latency. So it's really, really quick.
I: I'll stop sharing here. That's kind of the end and where we're at right now. Like I said, we want to work closely with the folks over in kind to really deliver the Kubernetes part of this; we're just getting started. This is the very first iteration of it: step one, run containers; step two, Kubernetes clusters; step three, we want to have a declarative deployment model.
A: Thanks Michael, that was a great demo. We're almost at the top of the hour, five minutes left. Bryce, I know you wanted to get to your question on Cluster API and affinity. I saw in the chat that Andrew Sy Kim recommended that maybe we dedicate a future user group meeting to it.
D: I agree on Cluster API, that would be a different one; my question isn't a Cluster API question. It's more a question of: if we have pod anti-affinity rules set up so that pods don't run on the same node, we also essentially want them not to run in the same zone or on the same physical host.
D: The issue is, if we don't have it set up like that, you might have pods running on different VMs but still on the same physical host, and that physical host takes them down at the same time. We have a situation where maybe a physical host goes into maintenance mode, and it moves VMs that were on different physical hosts onto the same host. So when that pod was originally scheduled, it was scheduled on different hosts and different VMs.
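The rule Bryce describes looks like this as a pod spec fragment (`app: web` is illustrative, and `example.com/esxi-host` is a hypothetical node label that would have to track the physical host a node VM runs on, which is exactly the hard part discussed next):

```yaml
affinity:
  podAntiAffinity:
    # Note the name: required during *scheduling*, ignored during *execution*;
    # a vMotion after scheduling is invisible to this rule.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web                       # illustrative selector
      topologyKey: example.com/esxi-host # hypothetical physical-host label;
                                         # kubernetes.io/hostname only separates node VMs
```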
B: When we looked at this in the past, there was a thought that we could assign node labels based on the physical host the node sits on, and whenever it moves, whenever a vMotion happens, something in the vSphere API, or a controller in Kubernetes itself, would check, figure out where it is, and update the label, so you could tell it's no longer compliant. Honestly, I have not heard a good solution to this problem, because it's not a trivial thing to fix.
D: What you described is exactly what I was hoping we could do: update those labels. But right now, at least on the version of Kubernetes we're on, it only enforces that at scheduling time. So the problem is, even if you were to change the label, it wouldn't go and reschedule the pod onto a different node. Andrew, unmute, I think you've probably got an opinion on this.
K: Yeah, so there were some proposals last year to add physical hosts to the topology, and we didn't go forward with them for that exact reason: on platforms like vSphere, with vMotion, if you transfer a VM to a different physical topology there's nothing that would reschedule the pod to match, and even if you did dynamically swap the topology labels on a node, that messed up a lot of things. So I think the decision was to let it go.
D: It's a hack, but it might fix it. And they are changing this, I don't know which version it's coming in, so that the anti-affinity rules won't just be enforced at scheduling; they'll actually apply while running too. So we should be able to look at some of those things, and the idea is that if the topology were to change while it's running, it should reschedule the pod somewhere else.
B: Well, we're at the top of the hour; we've taken the whole thing up. We have a lot more to talk about, it would appear, and people are keen for intros to other features like Cluster API for vSphere, so we will draft up an agenda for the next meeting in a month's time. That'll be March 6 or so; I don't know what the actual date is, but the first Thursday of March.
A: We had a poll to pick this meeting time, but I didn't actually put the frequency up for discussion. I wanted to start with monthly, just not to water it down too much, but if, as the year progresses, anybody feels that these things get busy enough that we need to do it more frequently, we can entertain that. See you in a month.