From YouTube: Kubernetes UG VMware 20210902
Description
September 2, 2021 meeting of the Kubernetes VMware User Group. Comments on VMware desktop hypervisors, including ARM support, from Michael Roy. Open user discussion of issues and best practices for running stateful applications.
A: Hi, welcome to the September 2 meeting of the Kubernetes VMware User Group. We don't have any fixed, pre-announced agenda for today's meeting, but we're going to hold it anyway as just an open forum, a birds-of-a-feather discussion for anything people want to talk about: open-forum Q&A. Before I kick off the meeting officially: Michael is joining us and wanted to make an announcement about the VMware desktop hypervisor Fusion, so I'll turn it over to you, Michael.
B: Yeah, thanks, appreciate that. It's been a while. I've been, you know, kind of bogged down trying to get a lot of the crazy work we've been doing on the desktop side through the gears, so to speak, this year. Our big project has been trying to deliver a desktop hypervisor on Apple silicon products, and Apple silicon is a unique set of challenges. It's ARM-based.
B: It's not, you know... it's ARM in the middle and then a whole bunch of Apple stuff wrapped around it, and it's really awesome tech; we're very impressed with it. Our stack has been very much tied to the x86 architecture, but we've been making inroads on ARM with our Fling project, ESXi on ARM, and so we've leveraged a lot of that work. A lot of the folks that worked on that are also working on some of the core components of Fusion for Apple silicon, and we've just, essentially as of last night, launched our formal private tech preview program. That's an invite-only program, and we're keeping the discussion sort of closed. We plan to do this for a couple of weeks, just to make sure that some of our biggest customers and our close partners and friends have earlier access, before we open it up to the public, which we plan to do in the next few weeks. So I wanted to extend an invitation to folks inside of SIG VMware, and folks
listening to this: feel free to either fire me an email at mroy@vmware.com, or the Fusion beta team at fusion-beta@vmware.com, and request an invite. We would be happy to add you to it. The limitation right now is, again, that everything's just sort of self-contained inside the community; we're really confident about where we're at with it.
B: We just wanted to try something a little different this year. So the private community, we'll see, will evolve and do a continuous, ongoing kind of beta, where we'll have sort of releases there, and then we'll have a public tech preview, which will have releases that happen slightly after that. So it's like an alpha and a beta channel, if you want to look at it that way. We're super excited. Things are working great: Linux across the board, and BSD we've got working as well.
B: Of course, we have licensing issues with Windows. You know, Microsoft does not make a Windows license available for ARM, other than the tech preview, or I should say the Insider program. The Insider program specifically says it's only supported on devices that already come with Windows on ARM, which an Apple silicon chip is not. But, you know, the thing about booting a Windows operating system is that it's not very different from booting a Linux operating system.
B: So, while we can't convert, like, a VHD, which is what you get when you download it from Microsoft, there are ways of creating ISOs, and the "other" guest OS type will probably boot that. But it's just not supported, and there are no drivers or VM tools or anything. On the tools side, though, we actually have open-vm-tools compiled for Linux on ARM, in 11.3.
B: So you can build that from source for, you know, any distro, but we also make debs available for Debian 11, rwn10, and Ubuntu 20.04 and 21.04. And of course, FreeBSD actually has it in the ports repository. I don't know if it's in the package manager yet, but it's definitely in ports, so it can be built. It takes, you know... it's BSD, so it takes a while to build. But that's BSD. Yeah, super excited.
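(For reference, the two install paths just described might look something like this on an arm64 Debian or Ubuntu guest; the .deb filename is a placeholder, and the build follows the standard open-vm-tools autotools flow.)

    # Path 1: install the prebuilt package (filename is hypothetical)
    sudo dpkg -i open-vm-tools_11.3.0-1_arm64.deb
    sudo apt-get install -f          # resolve any missing dependencies

    # Path 2: build from source on any distro
    git clone https://github.com/vmware/open-vm-tools.git
    cd open-vm-tools/open-vm-tools
    autoreconf -i && ./configure && make
    sudo make install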
B: We have, you know, open-vm-tools, which is picked up by the distributions and delivered that way. So our model is just a little different, and we had to bring the community all together at once for this, for the gears to all connect. So yeah, that's a long-winded way of saying we're ready to go, and if folks want some early access in a confidential way, get in touch with myself, or, I think Stevie put something in the chat there, reach out through the SIG
A: notes document; if you want, we can go drop your contact information there.
A: I'll fill it in later; I didn't paste it in right now. Or, if you're a member of the group, you should be able to click on that link and do it yourself. Just, I want to make sure I understand: so this is a desktop hypervisor for the ARM-based Apple laptops, and in terms of the VMs it runs, these VMs are also going to be running ARM? It doesn't provide a way to do x86 VMs on top of the physical ARM hardware, I see.
B: Right, so there's no... you know, we're VMware, not EMware, so we are virtualizing, we're not emulating. A lot of folks really want x86 stuff, but the limitation we have is that Apple doesn't provide the APIs to do that. They have Rosetta, and Rosetta does a lot of x86 emulation, but it doesn't do virtualization specifically. And, yes, Chris, we're talking about Fusion. We did have emulation for, like, the longest time.
B: We had this thing called binary translation, and it was what originally created the hypervisor in the first place, way back in, like, '99. And that binary translation is slow. If you try to do world switches, it's like you're switching a full stack of memory around back and forth, and it was just really hard to maintain over the years.
B: All those features that we needed for it ended up getting baked into the CPU itself, and our relationship with Intel continued to get stronger, and everybody was selling chips and stuff like that. So we abandoned that, or, we didn't abandon it, but we end-of-lifed that emulation stack years ago. There's actually a tombstone on campus, if folks are interested. But yeah, the possibility of doing emulation... it's gonna take us two years.
A: I imagine, almost inherently because of the physics of it, it's always going to be slow. You know, they do those emulations for things like ancient video games, where they didn't expect to have a lot of horsepower, so those still work okay. But, I think, in my experience, one of the most common use cases for Fusion is to run the real Windows Office suite or something like that on top of a Mac, and it is, or...
B: So, like, we have things like the Docker Machine driver, so you can install that to get Docker and to drive minikube, and we also have vctl, which is basically a containerd front end. All those things avoid all the Docker licensing issues that have recently been making noise.
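(A sketch of the x86 workflow being described; the driver and command names below are from memory, so verify them against the current Fusion and minikube docs before relying on them.)

    # docker-machine / minikube via the VMware desktop driver
    brew install docker-machine-driver-vmware     # assumes Homebrew on macOS
    minikube start --driver=vmware                # minikube VM runs under Fusion

    # vctl, the containerd front end bundled with Fusion
    vctl system start                             # start the container runtime
    vctl run --name web -d nginx                  # run a container, Docker-style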
B: Well, I was going to say: on x86, that's where we're at. We haven't got that on ARM yet; that's basically the limitation right there. So I own the docker-machine-driver-vmware repo and the project on GitHub, and the big challenge I have right there is basically that boot2docker doesn't exist for ARM; it only exists for x86. And personally I've been focused on trying to get the tech preview ready to go.

A:
B: So, you know, anything that can run Kubernetes in a virtual machine is gonna work, yeah. You can do it without having to install any of the other components: standing up a VM, not doing anything else, installing Kubernetes and all the dependencies, that'll all work just fine. If you wanted to have 3D, or I should say 2D, graphics, you need to install tools and you need to install a newer version of the Linux kernel. So part of the process right now is, you know, installing mainline 5.14, which has our graphics drivers in it, and then either building tools from source or running the .deb package installers for Ubuntu and Debian. But you don't need to do that just to get, you know, a Linux server operating system running. You could take Ubuntu Server 21.10, leading edge, fire it up, and away you go. You could probably even do that thing where, if you install Docker, you can set an environment variable to...
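(Presumably this refers to pointing the host's docker CLI at the daemon inside the VM; a minimal sketch over SSH, with the guest user and address as placeholders.)

    # On the macOS host: drive the Docker daemon running inside the ARM VM
    export DOCKER_HOST=ssh://ubuntu@192.168.111.130   # user/IP are placeholders
    docker ps                                         # local CLI, remote daemon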
B: Sorry, Scott.
C:
B: It should... there are some weird issues with, like, OVF Tool, and I haven't personally tested that use case, but as far as the hardware version and tools go, those are the same. Okay.
B: Yeah, it would be great to know if that works or not. I know there were some issues with OVA import and export, and that also had to do with sort of remoting: you can connect to a remote vSphere, but I don't think you can, like, drag and drop workloads back and forth, because OVA Tool, or OVF Tool, hasn't been built for ARM yet.
C:
B:
E:
A: Okay, well, thanks, Michael. I see we've got a few other people joined in. For those of you who joined late: there's no fixed agenda; it's just open forum for Q&A or for nominating discussion topics you want to throw out there. And I see Bryson has joined us; haven't seen you in a long time, so welcome, Bryson.
E:
A: Yeah, unfortunately Miles, the co-chair of this group, couldn't make the meeting today, and he's the authority on it, certainly. Well, you probably already know this: if you're dealing with vSAN, you definitely want to be using the CSI driver for VMware, as opposed to other solutions. Although in theory, I suppose, you could use vSAN to host something like NFS and use a generic one; I don't know why you'd want to.
E: I'd have to go pull up a site to remember the exact title of it, but you can set up, for example, your vSAN: hey, if I have one of the hosts down, don't configure any new disks, right? It's trying to protect the storage. So the issue I've seen with things like that is, if you are using persistent volumes that are getting dynamically created, and one of the hosts is down, then it won't.
A: Okay, I'll write it down in the notes and we'll try to get you an answer, but I don't have enough experience to answer it. There may be somebody else on the call who's been involved with that before, who can chime in here.
F: So my only experience with the CSI driver up until now is through TKGS, through vSphere with Tanzu, and in that case it's basically through a storage class: a particular vSAN storage policy is associated with a Kubernetes storage class.
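(For reference, that association is expressed in the StorageClass parameters; a minimal sketch, with "gold-policy" standing in for whatever the vSAN policy is actually called in vCenter.)

    # vsan-gold-sc.yaml -- apply with: kubectl apply -f vsan-gold-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vsan-gold
    provisioner: csi.vsphere.vmware.com
    parameters:
      storagepolicyname: "gold-policy"   # must match the policy name in vCenter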
F: You can have many different vSAN storage policies, and you can have, yeah...
F: You can choose not to have certain objects mirrored, in which case it's just RAID 0, right? That would be pretty unsafe, but there are probably use cases for it, especially if you want to save storage. But, I mean, vSAN storage policies are simply a vSphere construct, right? They don't really have anything to do with the CSI initially, and then the CSI...
E: Yeah, I get that. I guess what I'm asking is: what are the recommended options for using persistent volumes? So, yeah, I'd find, like: hey, you want to make sure you have this set up for your storage policy; or, this thing may provide additional safety, but you'll have failures, you'll have more issues creating persistent volumes, if you have your storage policy set up this way. I'm looking for more of that best-practice kind of information.
C: Yeah, and I think it's one of the hardest things to do best practices on, because it really also comes down to what type of application it is and what type of persistency you need for that specific application. Just for example: a persistent volume for RabbitMQ is very different from a persistent volume for Postgres. If you were to lose the persistency of a RabbitMQ, in many cases... again...
C: So it really comes down to what types of applications you run. What I'm starting to see more and more being done is creating storage policies, not necessarily ones that are relevant to vSAN generally (they're obviously using the vSAN capabilities to do it), but creating a Postgres-specific storage policy, a RabbitMQ-specific storage policy, a MySQL-specific storage policy, according to the best practices that the database operators out there, the ones that exist in open-source and commercial offerings and whatever, have for how that persistent data should be stored.
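(A sketch of that per-application pattern; the class and policy names here are hypothetical, and each vSAN policy would encode that application's striping, replication, and encryption choices.)

    # postgres-sc.yaml -- repeat the pattern per application (rabbitmq, mysql, ...)
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: postgres-sc
    provisioner: csi.vsphere.vmware.com
    parameters:
      storagepolicyname: "postgres-policy"   # hypothetical per-app vSAN policy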
C: Sometimes that exists less for vSphere, but it's very easy to correlate. If you look at guides that people have done for running on AWS and EKS, for example, most of the settings you can set in a storage policy for vSAN you can set for the EBS CSI as well. So, looking at that, you can pretty easily take examples people have done, or blog posts on how to run Postgres correctly on Kubernetes and the CSI-based storage policy there, and do the same thing on the vSphere side.
C: So it's more of a case-by-case basis, rather than a blanket best practice, from my experience at least.
F: Yeah, Scott, just for my information: the kind of storage policy differences I'd expect are things like, I want more striping because, you know, I'm going to get more, I don't know, read or write hits on a particular storage object. Is that the kind of stuff you're talking about, or are there more exotic kinds of things you'd want to do with a storage object? Because I don't know what EBS can do, for example.
C: So there are things like limiting IOPS, or setting, you know, priorities, quality of service, that you can do. There are things around how many replicas you want, striping, things like that. There are also different settings you can set for encryption. vSAN especially, now with the built-in KMS in vSphere 7, or maybe that was 7 Update 2...
C: I don't remember which release that came in, but you can also do vSAN encryption for first-class disks. So certain applications say: don't run encrypted, because the performance is going to affect me. Other ones may need encryption, because you need it for some compliance reason or because you just want encryption.
E: And so this is the kind of information I'm talking about. It may not be, and obviously it's not, the same best-practice defaults for everything. But if you're new to this and you just take the default that's out there, you may run into issues that other people have already figured out.
E: So if we can have, like, a centralized place, kind of: here's some best practices we've found for this type of setup, or something like that. We've run into just so many issues lately that we're trying to move away from the applications that are using persistent volumes, because the persistent volumes are just causing the downtime. And so, like, we have another example.
E: So this is moving away from what we were just talking about, but still along the persistent-volume side. The host will become disconnected from vCenter, and Kubernetes wants to move the pod to a new node, and so it tells vCenter to unmount the volume from a VM that was on the host that became disconnected. But since it's disconnected, it can't unmount it, and therefore it can't mount it on another node that's on a different host.
A:
C: There are different things, and it's overall a CSI question. It's come up a lot in Cluster API, because of how Cluster API does immutable infrastructure and brings nodes up and down. There are a bunch of things around how to do the force mount to a different VM and things like that. It's not only affecting vSphere specifically; it's affecting a lot of CSIs.
E: The infrastructure, you're correct, it is the same issue: you can't mount it to the new... the new VM until it's released from the old VM. We have a similar situation with our other cloud providers; we run into that same issue.
C: And the only real solution for that with vSAN is: if you have vSAN Enterprise, you can use File Services and do ReadWriteMany pods and only connect them to one, and then, because it's a file service, it can connect to multiple nodes, and then you do have a kind of solution there. Again, it's a kind of workaround, because you really want a ReadWriteOnce persistent volume, but it gets around that issue until something is solved in the upstream CSI.
C: You know, whatever driver... the real workaround for that would be to do File Services and then do persistent volumes that are NFS, backed by vSAN File Services, and then those would be able to be mounted to an additional pod on a different node without issues.
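(A minimal sketch of that workaround: a ReadWriteMany claim against a file-services-backed class. "vsan-file" is a placeholder; the vSphere CSI driver provisions an NFS file volume when the access mode is ReadWriteMany.)

    # shared-data-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes: ["ReadWriteMany"]       # file volume instead of block
      storageClassName: vsan-file          # placeholder file-services-backed class
      resources:
        requests:
          storage: 10Gi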
A:
C: But yeah, I know, and I agree with you that having something, even in, like, the vSphere CSI repo or a blog post or whatever it is, somewhere with a bunch of best-practice examples of storage policies, would be nice. I know there are some examples in the CSI repo, but they're more just quick getting-started examples.
C: It definitely would be awesome to have some more examples of what the best practices are for specific applications. I'm not sure if that really belongs in the vSphere CSI repo, or more in a separate repo that's collaborated on by people from the different database vendors, different application vendors whose software is utilizing the persistent volumes. But that would definitely be a helpful resource, I know, for a lot of people that I deal with as well.
A: Yeah, this sounds like a good idea. You're right, it might be broader and belong on the Kubernetes storage side, because it's cross-cutting against different vendors, for different storage and different apps. But I'll write down a note, because I'm always looking for topics to bring up for these user group meetings, or even KubeCon sessions in the maintainer track. So this looks like a great opportunity to make the world of Kubernetes a better place, though it sounds like we might even have to figure out what the best practices are before we document them.
E: So, Robert, you asked whether this was one of the issues. At the top of that page, the issue is described as: "Multi-Attach error for ReadWriteOnce block volume when node VM is shut down before pods are evicted and volumes are detached from node VM."
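(When a pod is wedged on that error, these standard commands show which node Kubernetes still believes holds the volume; the pod and attachment names are placeholders, and force-deleting the attachment is a last resort once the old node is truly gone.)

    kubectl get volumeattachments                      # find the attachment pinned to the dead node
    kubectl describe pod stuck-pod | grep -A4 Events   # shows the Multi-Attach warning
    # last resort, once the old node is confirmed gone:
    kubectl delete volumeattachment csi-0123abcd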
E: So that issue is, like, hey, it got... and I do experience this.
F: Yeah, so it seems that Kubernetes is currently missing some kind of unlocking mechanism around this. There's no way to detect whether... I mean, this is all new to me, by the way, but it sounds to me like a lack of some kind of, you know, exclusive-lock release or something.
E: Well, I mean, it essentially just relies on the hypervisor, whatever cloud you're in, to actually do the unmounting, and if it's not unmounted, it can't mount it; it doesn't matter where. So this... wait, was this for vSphere? We have the same issue with, like, Azure, and that's what Scott kind of brought up: sometimes we get these same issues in Azure, where it won't mount to the new one because Azure isn't unmounting it from the old VM.
A: Yeah, these sorts of things have been alive out there in Kubernetes for a couple of years now. In some cases, you know, they're stalled and stuck; in others they might eventually get consistent, but that background process of achieving eventual consistency is so slow that the outage takes so long that, for practical purposes, for your app, it might be perceived as an outage anyway. You know, if this thing took tens of minutes, or an hour, before it was able to recover from this, for practical purposes, for your app...
A: ...it could be that it's the same as just being outright broken. You know, nobody is willing to give it that long.
A:
E: Say something happens on the network; we'll just say a network upgrade. Let's say someone's upgrading the network. The network should be up the whole time, but maybe for some reason it wasn't. Then you run into issues with vSAN as well. And so we've run into some issues where, if the network wasn't as good, or wasn't in ideal shape, then vSAN wasn't in ideal shape, and eventually our VMs would mark their file systems read-only to prevent any damage, and then the downside there is...
E: We've seen this enough that we're trying to monitor for it now. I haven't gotten to exactly the root details, other than we know what ends up happening: it's a similar thing to what we were talking about with vCenter being disconnected from one of the hosts. One of the hosts goes disconnected; the VMs are still running, but it can't talk back, and so the storage that's part of the vSAN is also disconnected at that time, and so vSAN says: hey, I'm not... I...
E: It's not, yeah, it's not anything really Kubernetes-specific; you can see this just running VMs in VMware. So there are some workarounds people have talked about, which is basically telling your operating system to not go read-only; but the longer you do that, the more you could end up with corrupted data, and then you have to rebuild the VM.
E: So we've only had to rebuild a VM a couple of times. Most of the time, after the reboot, it may need an fsck that's forced, to fix things. But it does go back to: how is vSAN handling those types of network outages?
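(For reference, a typical forced-check recovery after a guest flips read-only might look like this; the device name is a placeholder, and the volume should be unmounted or checked from a rescue boot.)

    # after rebooting into rescue/maintenance mode:
    e2fsck -f -y /dev/sda1      # force a full check, auto-answer yes to repairs
    # or, on older distros, schedule a check on the next boot:
    touch /forcefsck && reboot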
A
I
I
don't
think
storage
policy
would
be
involved
with
this,
but
I'm
speculating
and
it
this
seems
like
an
issue
where
maybe
you'd
want
some
authoritative
thing
running
at
the
network
layer
that
police's
health,
because
what
you've
got
going
on
both
kubernetes
and
vsfan
and
vsphere
to
some
extent,
have
observability
into
what
they're
observing
in
terms
of
networking
functionality.
A
F: So what you're describing, and yeah, we're in vSAN product mode now... I mean, what you're describing is either, like, a split-cluster scenario, or this happening to objects that have a storage policy that has no redundancy, basically.

E:
F: Yeah, but so: if I make a storage policy that has no redundancy, what I'm telling vSAN to do is, whatever objects are attached to that storage policy, only save them once, somewhere in the vSAN cluster. That doesn't necessarily mean it's on the same host. Oh...
F: So in normal situations, right, and I'm not talking about things like stretched cluster, so a single vSAN cluster that's all local, let's say it's in one rack: for one, the communication between the hosts, right, that vSAN uses is its own. It uses a dedicated...
F: Well, you can set the ESXes to use a dedicated, you know, VMkernel port for all that communication, and the physical side of that should always be redundant, right? So hopefully you'll never get into a situation where you get a dual network outage that kills both paths; that would completely disrupt the communication between ESX hosts, for vSAN and for everything else, right? Because vSAN communication is one way these hosts in the cluster talk to each other, just to replicate
F
Vsan
storage
objects,
but
another
one
is
the
the
vm
kernel
network
and
the
network
they're
communicating
over
to
send
each
other
pings
right.
So
you
can
have
setups
where
that's
all
using
the
same
vm
kernel
port
or
you
can
split
it
all
out
and
then
have
even
dedicated
necks
associated
with
dedicated
vm
kernel
ports
for
specific
types
of
channels.
You
know
for,
for
you
know,
high
throughputs,
or
you
know,
specialized
kind
of
vsan
scenarios.
You
might
have
that
you
might
have
special
necks
that
are
just
for
the
storage
traffic.
F
You
know,
or
you
want
your
storage
traffic
to
run
over
a
separate
network.
You
do
separate
nicks
for
that.
You'd
have
a
separate
vm
kernel
board
for
that.
So
so
you
know,
there's
there's
lots
of
different
ways.
This
can
fail,
but
it
does
kind
of
sound
like
you're
running
you're,
already
running
into
a
situation
where
you're
getting
a
dual
network
failure
now,
both
both
paths,
it
down,
which
you
shouldn't
really
that
shouldn't
really
happen
right.
That's
not
that's
not
healthy!
F
So
so,
when
that
happens,
when
esx
hosts
innovation
clusters
lose
communication
with
each
other,
you
end
up
with
you
kind
of
with
a
split
cluster.
Where
you
know,
storage
objects
for
vms
might
be
on
these
hosts,
and
the
vms
in
themselves
are
on
the
other
hosts.
In
that
case,
when
that
happens,
you'll
get
the
behavior
you're,
describing
where
certain
vms
will
go
read
only
because
vsan
has
no
way
to
guarantee
rights,
so
it'll
just
shut
off
their
storage.
F
E: And that does match, that would match, the situation where we'll have a cluster, and say you have six Kubernetes nodes in there: you may only have one of them go read-only, or two of them go read-only, or you could have all of them go read-only.
E: When I said read-only, I meant the file system of the VMs going read-only, which, yes, depends on the storage objects. And I'm not sure if everything, to your point... it could be that this VM, if all of its storage is on its local host, is still good to go.
F
Yeah,
but
it
might
not
be
right
because,
without
without
stretch
cluster
without
defining
specific
failure
domains,
you
don't
know
which
storage
object
is
on,
which
is
exhaust,
it
pulls
them
all
together,
because
the
network
is
assumed
to
be
fast
enough
to
just
you
know
you
could
you
could
have
the
two
mirrors
of
a
particular
vms
disk
sitting
on
host
three
and
four,
while
the
vm
that
they
belong
to
sitting
on
host
one
and
in
those
cases,
if
you
get
network
failure,
you'll
get
what
you
described.
F: You know, the VM will be put read-only. Now, I can't remember if this is default behavior or not, but VMware HA has a role to play here. HA can be configured to deal with situations like this. You can have, I believe (it's been a while since I worked with vSAN), VMs be forcefully shut down when they enter a state like this, where they have no addressable storage because there's been a split in your cluster; it can turn those VMs off.

E:
F: This is what happens if you do stretched cluster: you have a vSAN cluster stretched over two physical locations with a witness, and this is one of the behavior modes you can set. You can say, like: oh, if there is a network split between my two data centers, I want the witness to say, for these storage objects, always let data center one survive, and kill everything immediately in data center two, so I don't get data corruption.
F
So
now
you
can
set
vsphere
to
behave
similarly,
just
in
one
single
local
vsan
cluster.
I
believe
so.
You
know
you
you
kind
of
want
to
look
into
those
those
options,
but.
E
F: Yes, because HA will try to restart any VMs that it can; that's its default behavior. But it will only be able to succeed if the storage is available for those VMs. To be able to start them, it requires that at least one of the mirrors of such a storage object is available. So the moment it can reach one of the mirrors, one of the parts, say all the objects are just mirrored, it'll boot those VMs.
F: So VMware will try to do that. But which version of ESXi is this?

C:
E
That
shouldn't
be
happening
and
knowing
the
setup
of
these,
like
I'm,
not
sure
how
we
get
into
that
situation
and
not.
A
E: But anyways, that's what I'm trying to track down. We have a different team that manages the network updates and the network support, and so we're adding in an alert that's going to start telling us when this is happening, so we can track it more. Because in the past, like, the early alerts we had with Alertmanager and Prometheus actually required writing to disk to alert on it, and if it's read-only and it can't do that, it doesn't alert that it's a read-only file system.
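(One way to check for this without needing a disk write, e.g. from a node-exporter textfile script or an SSH sweep; it just reads /proc/mounts.)

    # list mount points whose options include ro (read-only)
    awk '$4 ~ /(^|,)ro(,|$)/ {print $2}' /proc/mounts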
E: Six or three hosts, but most of the time it's three.

F:
E:
A: Yeah, maybe if we're going to do a best-practices guide, Bryson, you could, and we don't have to do it in this meeting, throw out sort of a list of representative examples: like, in your case, you know, three cluster nodes, and what you've got set up for networking, whether it's redundant and what is being used to achieve the redundancy, and what type of storage you have.
E: Yeah, I think Scott made some points, though. Our use cases are actually pretty generic, as far as persistent volumes go. I mean, we want them to work, but if one went down for a few minutes and came back up, that would be fine. We're not running, like, a database on it where that's going to be really impactful.
F
What
I've,
what
I've
seen?
I
don't
know
if
this
helps
what
I've
seen
from
from
the
tkgi.
Sorry
from
the
tkgs
documentation,
so
visual
with
sanzu
is
that
they
they
don't
really
they
don't
really
wreck
they.
Don't
they
don't
put
forward
a
best
practice
so
explicitly
around
vsan?
What
they
simply
do
is
they
assume
the
vsan
defaults
and
the
vsan
default
is
a
storage
policy
that
just
does
failure
to
tolerate
as
one
so
it
mirrors
every
object.
F
That's
that's
the
assumption
they
they
make
in
the
in
their
documentation
and
what
they're
doing
there
is
is
basically
it's
like
the
80
20
rule
right.
The
the
for
most
vsan
clusters
out
there
that
default,
vsan
storage
policy
ftt
is
one
is
enough
for
most
workloads,
most
generic
workloads
and.
F: Well, so in a three-node vSAN cluster, you couldn't do anything more than FTT of one, because you don't have enough hosts to replicate storage objects more than once.
E: For that policy, you mean? I'm not saying for that policy necessarily, but I think the thing I've taken away from today's conversations is that we probably need... so, we have a team that manages the VMware side of things, and they set up the storage policy, but that's just for all their VMs.
F
So
yeah-
maybe
maybe
but
but
but
like
I
said
in
a
in
if
it's
just
three
esx
hosts,
there's
not
much.
You
can
do
that
that
you
can't
replicate
more
than
than
what
that
default
policy
would
do.
If
you
have
no,
that
there's
very
in
a
3x
cluster
is
very
small.
It's
the
minimum
size
for
vsan.
There's
you
don't
have
many
options
there.
You
couldn't
you
couldn't
really
mess
around
with
that
storage
policy,
all
too
much.
We
can
turn
encryption
on
and
off,
and
things
like
that,
but
effectively.
C:
A: On or off, yeah. The only other thing you could do with this, and this isn't quite the same thing, is have backups. That would be another, orthogonal measure: just kind of an ugly way to roll back to some point in time if things went really bad. But do you guys have something to back up persistent volumes?
E
See
we
we
don't
we
don't?
Actually
we
don't
the
backups,
don't
matter
as
much
as
just
data
being
cached,
so
we're
we're
using
this
as
like
a
cache.
So
if
there's
an
issue,
we
can
blow
it
away,
and
so
we
can
scale
down
the
pod
blow
away.
The
pv
scale
back
up
create
a
new
pv
and
be
moving,
so
that
is
one
of
the
workarounds
like
hey.
E: Why they didn't set it to zero... wait, you're asking why they wouldn't set it to zero? Well...
F:
E:
F: Because, if they had, or someone had, that would explain why you get these persistent-volume disconnect situations so often, but...
E: ...vCenter. If we have two of those hosts become disconnected from vCenter, would that lead to more of this type of situation, then?
F:
E:
F: Yeah, and if you have a three-host cluster and two hosts die, vSAN is dead, because it loses quorum and everything goes read-only, right? In a three-node vSAN cluster you can only ever lose one; anything more and vSAN will simply stop working.
C:
F:
D:
E: Well, okay, then we have other things, so that we don't even have to worry about this. We're moving the cache out of Kubernetes, so then we can rebuild the cluster from... I mean, essentially it's the cache for images.
E: So if you're trying to pull your images and you can't reach the external endpoint, you need something local to be able to pull from, to keep everything running while it's disconnected. This has been caching our images, but if we rebuild the cluster and the cache is external to the cluster, then it helps rebuild the cluster faster. So we're looking at that. But generally these issues have just become more and more frequent, and the average user probably wouldn't have as big an issue with this.
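(A sketch of such an external pull-through cache using the stock registry image; the mirror hostname is a placeholder, and each node's container runtime then points at it.)

    # run the cache outside the cluster
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # on each node, /etc/docker/daemon.json (or the containerd mirror config):
    #   { "registry-mirrors": ["http://mirror.internal:5000"] }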
E: But when you have thousands of clusters you're managing, that one issue becomes a bigger issue; you see it more. A VM... I mean, in fact, we still may have this issue with read-only file systems, but we've moved away from some of those issues I discussed earlier.
A: Okay, we're nearing the top of the hour. Does anybody else have any last-minute topics they want to bring up? I think we'll queue this up for a future meeting. I can't promise it'll be the very next one, but at some point in the future we'll get into the topic of best practices for storage for persistent applications.
A
Maybe
it
will
be
vmware
specific,
but
it
could
a
lot
of
what's
covered
here
might
be
generic
to
just
using
kubernetes
in
general,
so
anybody
got
last
call
for
something
for
today's
meeting
and
similarly,
if
you
have
suggestions
for
the
meeting
next
month
and
the
month
after
that,
throw
it
out
there.
A
Okay,
thank
you.
Everybody
for
attending
and
we'll
see
you
at
the
next
one
last
minute
advisory,
the
there
is
going
to
be
a
kubecon
north
america
held
in
mid-october,
it's
available
both
physical
and
online
and
miles,
and
I
will
be
presenting
at
that.
A: If anybody is intending to attend physically, definitely track me down, because I'd like to try to get back to the old world where, at the physical KubeCons, we attempted to have the user group people who were present get together socially for a meet and greet. I don't know if anybody but me will even be there, but I'll try, and if anybody on this call intends to be there, definitely direct-message me and we'll try to get in touch while we're there. Bye, everybody.