From YouTube: Kubernetes UG VMware 20210401
Description
April 1, 2021 meeting of the Kubernetes VMware User Group - which hosted a "bring us your problems workshop". Covered XFS support for vSphere CSI driver; Online volume expansion; Storage vMotion support; Potential to use vSphere events to keep zone tags auto-updated and current; and more.
A: Hi, welcome to the April 1st meeting of the Kubernetes VMware User Group. At today's meeting we're going to try an experiment, an idea Miles came up with, where we're calling this meeting "bring us your problems." You can bring up any problems you've been having, and also things that maybe aren't quite problems: feature requests, questions, or just inquiries as to where you might find things in the documentation.
A: Anything is open, so any users having questions, comments, problems, issues, whatever: bring it on. Because this is kind of ad hoc, I can't promise that we'll be able to solve every problem in this meeting. We'll try, and those that we don't get solved we'll queue up to address either asynchronously in the Slack channel or at a future meeting. But I think we've got a lot of talented people here.
A: I can see on the attendee list that we've got people with pretty strong backgrounds in various issues, and some of the people who joined the call are routinely helping users out on the Slack channel. So I think we've actually got a lot of brainpower here on the call. With that said, does anybody have anything to bring up?
B: By the way, I also noticed that we have a couple of the CSI/CNS development team on here as well. So if you need anything to sort of jog your thought processes, or things that you might want to bring up, if you have any comments on, you know, CSI, CNS, any of that kind of stuff as well, I'm sure they would love to hear it too.
A: Just so that we don't face dead air, I was policing the Slack channels for questions. In my mind, a great place to ask your questions on running Kubernetes on VMware is this group's Slack channel, but I think people don't necessarily know that, particularly newbies, so you find them in the channel for the cloud provider as well as in the CSI channel.
A: Sometimes, if they're storage related. And a lot of people ask questions in kubernetes-users. My advice there is: if it's specific to one cloud provider, that's not a real great place, because there's so much churn in that channel; it can get hundreds of messages a day. A lot of the devs who might be the best sources for helping people out on issues just don't have the time to read something as chatty as that. But yet questions do routinely occur there.
A: And I noticed one this morning. I don't know the answer immediately, but maybe somebody else does. Somebody was saying that they were using a CSI volume mount with a storage class and attempting to use XFS on that volume mount, and they found that that didn't work, even though using ext4 did. I believe in that message, on the basis of the log file, it looked like they might have been using vSphere.
C: Yeah, so regarding the XFS file system: we used to have that package installed as part of the base driver image, but we have removed that package from 2.0, as we hadn't tested the XFS file system. So it looks like the customer is hitting the issue while using the XFS file system because the package itself is not available, so you can't format the file system as XFS.
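[Editor's note] For readers following along: the file system is selected through the storage class's `csi.storage.k8s.io/fstype` parameter, so the failing setup likely looked something like this sketch (the class name is made up; the behavior described for xfs is as stated by C above, not independently verified):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc                 # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  # ext4 works; "xfs" would fail on 2.x driver images because the
  # xfs tools were removed from the base image.
  csi.storage.k8s.io/fstype: ext4
```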
A: Okay, so just so I understand: the way this happens is that when you provision a PV using a storage class, it's up to the CSI driver to have code in there to do that formatting. Is that right?
C: So we have asked the customer to create an SR, and we are interacting with our PM to add support for that in future vSphere CSI driver releases.
A: Okay. I'm just wondering if it would be possible, while you're waiting, perhaps, if the volume actually gets created, maybe you could mount it somehow and manually format it, if that would actually work. If it were a one-time-only thing, that might get you a temporary workaround.
C: I don't think a temporary workaround is available for the XFS file system. You may use another file system, though: ext3, ext4, or another one, yeah.
B: There you go, I sent it into chat. There is a GitHub issue created for it on the CSI driver page as of February, I think, yeah, the 24th of February, asking for XFS file system support.
B: I think the challenge is some of the applications have a requirement on XFS; some apps actually use features of that file system that only exist in that file system. I think there are some IBM apps that people are running that have that kind of weird clustering thing going on, so I think some applications just hard-require it, which I had never come across before, to be honest, but yeah, apparently it's a thing.
A: It's probably good for us to aspire to have in the documentation maybe a list of the file systems we do support and those we don't, just to save people from having to go out there with inquiries on each of these. There's probably a short list of these things that people are actually likely to use, but when it comes to Linux, the list of potential file systems is potentially really long. I think some of them might be quite obscure, but they're out there.
A: Another thing that I know has come up in the past with regard to file systems is using raw block mounts, because there are some pieces of software that don't try to use any file system at all. I'm thinking of some of these databases that figure the file system is just additional overhead, so that they can engage in raw I/O to get to a volume.
A: Is anybody here authoritative on that subject, as to whether CSI currently has support for raw block volumes, as I think they call them?
D: I know it has worked for me with Aerospike, which requires a raw block device, so it does work, even though it's not GA from our testing.
E: Do we run the e2e tests? If you run the e2e test suite, does it exercise raw block also?
C: The existing e2e test cases are not using the proper parameters to skip the file system format; we have to use special parameters. So those test cases are not available in the e2e test suite we currently have in the repository.
C: No, the test cases need to have a special parameter so that we understand that this is a block volume, that we want to present this volume as a block device and we don't want to create a file system on it. This doesn't have that parameter yet.
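[Editor's note] In Kubernetes terms, that "special parameter" is the claim's `volumeMode`. A minimal sketch of requesting and consuming a raw block volume (class and object names are made up):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-data
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block              # skip filesystem formatting entirely
  storageClassName: vsphere-sc   # hypothetical class name
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeDevices:               # a device, not a volumeMount: raw I/O
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-data
```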
A: Does anybody know if we've got a GitHub issue or something already existing to advance this forward? It seems like, if Scott is reporting that it actually seems to work with Aerospike, we shouldn't leave this lingering in an undocumented state for years, but actually try to push this forward: get the thing in the test suite and advance it as a published feature.
F: So I want to add one bit of information here. When we are building CNS and CSI, we are building them together, and the way we work internally in VMware is we actually go through a proper qualification, so that we can do enterprise-grade support for our customers.
F: For a lot of things, as developers we can just quickly say, yeah, it is working fine. But when we say it is not supported, what I really mean is it has not gone through the full qualification that all the other features go through within VMware. So for things like raw block volumes: we know it works, but we need some qualification time for that. Then there's the other feature we talked about, the XFS support.
F: I think that should be simple, very simple, to add support for, but again it has to go through a complete system test; we have functional verification and then we have performance. So I'm just trying to give an idea: when we say something is not supported but it is there, it basically means that our team needs to do a lot more work to pull in that feature. It's not just developers saying it works.
A: Okay, that sounds fair. I mean, I don't want to advertise things we haven't tried, but still, if it's possible, I'd like to have it available, so that a user could maybe do some kind of query to see that it's in the pipeline, or that it's at least feasible and people have tried it. And I'm open to other ideas.
A: But maybe one way to do that is to just put it in as a GitHub issue, because at least people could search for it then and find that there's some evidence that there's at least experimental support for it, right or not.
B: For the drivers, I think that's something that we very much underutilize, actually, in all of our open source projects at VMware: the alpha/beta tag stuff. We have always been in a mindset that if you ship something, it's GA; there was never really an alpha phase for anything vSphere. So I think we're still sort of getting past that and making it okay for ourselves to ship alpha stuff.
B: One thing I'll just add to what you were saying, Steve: I know that the guys have done a ton of work over the last while, and I think, as of 2.0, all development on CSI is done in the upstream repo. I don't know if all the issues are necessarily in the upstream repo, but I know all dev work starts there, so the latest version of what we're working on is always on GitHub.
A: Okay, sounds good. So we've got an action item to maybe make this stuff a little better in terms of user visibility, and issues might be a good way to do that, or alpha/beta tagging, or a little of each. It seems like issues are a good way, before it even gets to the alpha state, to at least clue people in that somebody's at least looking at it, and maybe, if you're finding a problem, that you're not the first one there.
A: I'd also encourage users, and this is just generic to all of Kubernetes: I certainly encourage you to put the inquiry on Slack, but Slack is really lousy for searching, for developers who want to address things, and nobody is offended by you creating an issue. They can always be closed if it turns out not to be one.
B: Yeah, I was just going to say, since we got started there were a few more people that jumped on the call, and just to clarify, because they missed what you were saying at the beginning, Steve: this is kind of just an experimental VMware clinic. So if you have any problems or questions or feature requests or otherwise, feel free to just open them up here, and we can have a chat about them and see what we can do.
D: So, if I remember correctly from the blog post, there's online volume expansion in 7 Update 2, and that is not anywhere in the CSI documentation. That's only in the release blog posts and whatnot, so that should probably be updated as a new column in the table.
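[Editor's note] For readers following along, online expansion, once supported, is driven from the Kubernetes side roughly like this (a sketch; the class and claim names are made up):

```yaml
# The storage class must opt in:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc            # hypothetical name
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
---
# Then a bound claim is grown in place, while the pod keeps running,
# by raising the request (e.g. from 10Gi to 20Gi):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: vsphere-sc
  resources:
    requests:
      storage: 20Gi
```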
B: I'll let the CSI guys comment on what the status is on the documentation update, but I know that we're doing quite a large refactor of the documentation in the nearish future as well, so a lot of that's going to get cleaned up as a matter of course, Scott. But I don't know if they've got an action item or anything internally to update the feature matrix with what was released in the new version.
F: Yeah, I mean, we are planning to. There is a release that we are planning for, which is 2.2.0. Right now, if you look at our GitHub, you will see release candidates, like patches being added, so we are actually going through a qualification process now. The plan is that 2.2 is going to have the online volume resize; it will actually bring in all the features that we're going to support in 7.0 Update 2. So all the documentation, everything, is basically waiting on that release.
A: So let me get this right and communicate to our users where the documentation is expected to live. There are two potential places: when we put out a new vSphere release, like 7 Update 2, it typically has release notes that address the new features, but independent of that we've got our CSI driver. So what is the natural home where a user could expect to find these things, or should it be duplicated in both?
B: So we are actually asking ourselves that internally, quite recently too: where does this stuff live? The current modus operandi is that anything product- or vSphere-related is going to be on docs.vmware.com, and anything to do with the CSI at all lives in the CSI documentation exclusively. So they'll be two separate places, and that way we don't get duplication of information that's incorrect, and all that kind of stuff. If you want CSI information, it'll be exclusively on the CSI GitHub.
A: I've got a suggestion, and I'm not in a position of authority to make it happen, but it strikes me that it would be great, in the release notes for vSphere, to at least put a link to the CSI release that corresponds with it, because particularly new users potentially don't even know this other channel exists, and if you at least put a link in those release notes, they'd have a fighting chance of discovering it. Sometimes there are interactions, too; I know of cases where a feature gets added to CSI but it's dependent on something getting enhanced in vSphere itself.
A: So without both of them, it doesn't work right, or the feature isn't exposed, and once again I think cross-linking between those two makes sense. I can't see the harm of it, other than adding a sentence to the release notes, and it might save somebody a lot of time down the road.
G: I would concur, because, just from a GSS perspective, we work with customers who are not particularly versed in going to GitHub and looking at those documentations, and they're looking on the docs page for something official from VMware, like, hey, this is where it's at. I kind of see GitHub as a public area that isn't particularly official, but if we had something in the docs to reference, like, hey, everything here is VMware official, they would probably understand it a little bit better as well.
A: Let me call that the legacy means of support, which is to engage with some vendor. But the fact of the matter is, having worked as a developer for a couple of decades, I know that that legacy mode usually routes through a tier of different levels of support, who typically don't know quite as much as developers, and really, going to the GitHub sometimes gets more educated eyeballs on your issue quicker, whether users know it or not.
A: So this is almost like a coaching issue, where an astute user might be able to take advantage of both channels. It can't hurt, and you don't usually have the gatekeeper between you, the user, and the developer when you go in there with GitHub issues. If it were me in the shoes of that user, I would certainly be eager to take advantage of the GitHub issue channel, but I think a lot of legacy users maybe don't realize it exists.
G: Yeah, I think we see a lot of that same thing with Harbor as well. Customers are very cautious on whether or not to go to the GitHub where Harbor lives, and where the developers work on it, to open an issue. They'd rather come in through support.
G: Oh, this is the only way I know, as far as my past interactions, to get a feature request in, or even a quick answer to something that might be a common issue.
A: If you want to keep it private: there's kind of no way to not disclose at least some form of your identity with a GitHub issue. I suppose you could open up an alias GitHub account, if you really wanted to go that far, but...
G: Yeah, and I think they sometimes see issues that might seem impactful to the customer, or whoever brought it up, go stale, and they kind of see that as, oh, I'm not getting help. You know what I mean?
D: The open source side is always going to be lagging behind, because it goes through its extra validations and extra everything. So if someone needs something in the product, if they're having an issue, if they're using a commercial offering, then sometimes you have to go through the GSS channel, because that's where you're going to get a commitment on the product, versus just on the open source.
A: Yeah, and just as a note to users on this call: this group operates under the Kubernetes project. There's been a lot of talk here about VMware commercial product, but this group exists to serve all Kubernetes distributions. So if anybody here is on Red Hat OpenShift or some other Kubernetes distro, this group is here to support you, so feel free to...
E: Thank you, Steve. By the way, I work for Red Hat, so I don't know if this is a good segue to ask some questions. I was talking to Divyen offline, and one of the things that I've been trying, that I have opened a PR to fix, is using vendored dependencies, like in every other sidecar in the community, in the CSI driver that is under the kubernetes-sigs repo.
E: You would see that the dependencies are vendored. A while back, for example, the EBS CSI driver stopped building, because somebody yanked a dependency from the internet, and we had to kind of scramble to fix it. The other case is building the driver somewhere you may not have full internet access. So, yeah, what are people's thoughts on it? It still requires fixing the vendor checks, but if somebody can chime in, yeah.
B: That's PR 767, is that right?
B: I don't know, Divyen or Sandeep, do you have any comments on that?
C: Yeah, so we discussed this issue and the PR internally. It looks like we have a plan in Kubernetes to remove the vendor dependency for the sidecar containers in the upstream Kubernetes repository.
C: Okay, so we wanted to go through the advantages and disadvantages of vendoring these packages into the repository. For example, let's say a user or a developer wants to modify something: instead of updating the upstream dependency, they might start modifying the vendored folder. We want to restrict that through some linter checks; they shouldn't be directly updating it, they should only be updating through go mod vendor, and we should encourage users to update the dependency on the upstream repository itself.
C: So you have that added in this PR itself, yeah?
E: Yes, there's a script called verify-vendor.sh that checks if the vendor directory was modified directly, and we could add it into the project, or we could wire it in as a build process. And I noticed that in the vSphere CSI driver we are not using Travis, but...
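[Editor's note] The check E describes can be sketched like this. This is a self-contained Python illustration of the idea, not the actual verify-vendor.sh: it flags any uncommitted change under vendor/, which is what a hand edit that bypassed `go mod vendor` would leave behind.

```python
import pathlib
import subprocess
import tempfile

def run(cmd, cwd):
    # Helper: run a git command quietly, failing loudly on error.
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

def vendor_is_clean(repo):
    """True if nothing under vendor/ differs from the committed tree."""
    out = subprocess.run(
        ["git", "status", "--porcelain", "--", "vendor/"],
        cwd=repo, capture_output=True, text=True, check=True)
    return out.stdout.strip() == ""

# Self-contained demo: commit a vendored file, then edit it by hand.
repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)
run(["git", "config", "user.email", "ci@example.com"], repo)
run(["git", "config", "user.name", "ci"], repo)
dep = pathlib.Path(repo, "vendor", "example.com", "dep")
dep.mkdir(parents=True)
(dep / "dep.go").write_text("package dep\n")
run(["git", "add", "-A"], repo)
run(["git", "commit", "-qm", "vendor deps"], repo)

clean_before = vendor_is_clean(repo)  # vendor/ matches the commit
(dep / "dep.go").write_text("package dep // hand edit\n")
clean_after = vendor_is_clean(repo)   # direct edit detected
```

In CI, the real script would regenerate vendor/ with `go mod vendor` first and then run the same diff check, so legitimate dependency bumps pass while hand edits fail.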
A: I don't know if I have your name right here, but can I ask you: you're in a position, if you're with Red Hat and OpenShift, of looking at a lot of different cloud provider platforms, because you target them. Is there some other CSI implementation that you would say did a great job at this, that we could go take a look at to evaluate what's going on with the vSphere CSI?
E: I mean, for this vendoring, GCP is, for example, a good example; they have obviously vendored it, and EBS is also a good example. Whatever changes we are proposing for vendoring in the vSphere CSI driver, I made myself in EBS at some point in time. But yeah, the other good examples are definitely GCP, or OpenStack Cinder, things like that.
E: And there were a few concerns raised. One was that, yeah, somebody could modify the vendor directory. Then there was that, when you disable the linting check on the vendor directory, the existing scripts have to be updated one by one. It's just a little bit of work, and I was planning to do it, but I just wanted to check if that's the work that requires the most effort, because it's just...
E: Yeah, though one more issue that I was discussing with Divyen offline was the vSphere syncer that is shipped with the vSphere CSI driver. We were looking into shipping the vSphere CSI driver with OpenShift, and we are thinking about what components are required.
E: What components could we, you know, skip or not ship? So the vSphere syncer is something that, as Divyen explained offline, pushes the metadata of PVs and PVCs to CNS, and it allows the user to use static provisioning. But if a user is not using static provisioning, or doesn't necessarily care about CNS seeing the metadata of PVs and PVCs, they can skip it.
C: Yes, but we are also adding some of the features into the syncer, so the syncer is kind of a container for us where we are running a lot of other services. So it's better to add the syncer as part of the CSI driver for end-to-end functionality.
E: Okay, and one last question, sorry, and thank you for adding this. Are we running any e2e tests based on Storage vMotion for the CSI driver? We never supported Storage vMotion with the in-tree vSphere cloud provider; we are supporting it now with this one, and there are just so many failure scenarios with Storage vMotion. We wanted to evaluate, test, and validate that properly before telling our customers that, okay, this driver supports Storage vMotion, now you can go ahead and use it.
A: It seems fair that we should have an end-to-end test case. I mean, if users can trigger it, and if it's going to blow something up, we'd sure like to know, and either document it, lock it out, or whatever; or maybe it just works fine. But it doesn't seem reasonable to expect the user to be the first one to ever try it.
E: Yeah, so with the in-tree driver, what happens, and it's an issue with a lot of OpenShift customers, is that customers use a datastore which is part of a datastore cluster, and they configure it in OpenShift or Kubernetes or whatever, and a vMotion event happens, and then their VMDKs are migrated. Now their PV and PVC references are all invalid, so pods cannot start, because the volume cannot be attached or detached; nothing works, and their VMDKs are missing.
E: So that's a pain with the in-tree driver we've had for years, even though we kept telling customers, don't do that, don't do that; it just kept happening. With the CSI driver we have cases where it works, but I think, to properly support this, we should have some kind of end-to-end tests where we run the test in a Storage vMotion-enabled cluster, trigger Storage vMotion, and make sure that pods are able to come up properly.
B: I think Sandeep is probably the best one, yeah.
F: Yeah, I mean, I just listened to him now. I think Storage vMotion is something that was definitely planned for CNS, but we do have some tests, and we also observed that in corner cases it doesn't work. So right now I think it's better we actually don't claim full support for it, because we don't want to end up having an enterprise customer...
F
Finding
a
like
a
red
description,
saying
like
I
migrated
now,
my
volume
is
gone
right,
so
I
think
for
now
let's
be
safe
and
let's
say
it's:
when
we're
trying
to
work
on
the
corner
cases
and
we'll
fix
it.
E: So you think that when we ship it, we should not claim support yet? Then we might have to update the vSphere CSI driver documentation as well. Yes.
F: In fact, we already discussed it internally. We will be updating it, because some of these we discovered recently, so we will be updating our doc. I think you even had plans to update it, or...
C: Yeah, only in the VCP-to-CSI migration documentation have we listed it as supported, but it looks like there are those corner cases when the volume is not attached, right? So those things we want to call out there. In general we are not declaring support anywhere, except for the VCP-to-CSI migration, where we are showing the difference between what we supported in VCP and what we are supporting in the CSI driver.
C: So it looks like that table needs to be updated to say that we don't support Storage vMotion or block volumes.
F: We don't support them yet; I mean, that's the right term. It looks like we are making some progress on the vSphere side of things, so hopefully in the next major version of vSphere we'll probably be claiming support. I just hope so; I'm not sure.
B: I threw a link into the agenda there, for anyone that wants to have a look, for issues tagged Storage vMotion. There are a couple of issues on the vSphere CSI driver there that are either bugs or asking for it as a feature or whatever.
F: Sorry, I have a very basic question: how do I get an invite to this meeting? Even I got this meeting invite forwarded, so...
A: Well, the official process should be that you first join this group. It's a Google group, like any of the other groups inside the Kubernetes project. And a note on joining that group: if you want, you can opt out of the mailing list and you won't get any email spam, although our group sends very few emails. The group membership is what's used to gate access to our documents.
A: So as soon as you join the group, if you're logged in under that same Google account you used to join the group, you'll have edit rights to the agenda notes document and any other docs we care to share with the group. The reason we have to gate it like that, as pretty much every Kubernetes SIG, working group, and user group does, is that there are trolls out there who will go there and, just for entertainment, deface or delete documents.
A: As for the actual calendar, let me cut and paste that so you can get this in your personal calendar. We don't want to maintain our own shadow calendar; Kubernetes itself has an official Kubernetes project calendar that has all the SIG meetings. Let me just see; I'm going to post that in the chat anyway.
B: Yeah, I threw the group link in there as well, so for anyone that wants to join the group, if you maybe just joined this meeting from the Slack channel or otherwise, go in and join it in there as well. And then there's all the information on calendars and everything in one of the posts that Steve made.
A: Yeah, and so far we've been lucky: this meeting has never been occluded by a holiday where we had to cancel. But the advantage of using that official Kubernetes calendar entry is that there's a means for this group to update you. In the event that an upcoming meeting were to get cancelled, that's how you'd get notice. If you manually create an entry for the first Thursday of every month, you don't really have any live channel to the group where we could communicate any temporary change.
H: Okay, so this is a topic. We use this with persistent volumes: we're trying to, say, mount them only on specific nodes that are in a specific region; basically, the vSAN is in that zone, because if it's on a different part of our clusters, on a different cluster, then it won't be able to utilize that storage. Does that make sense, Miles? You look... yeah.
H: Okay, so say we have a cluster, a VMware cluster.
H: We will have VMs in our Kubernetes cluster that span those two zones, okay? And so we have been setting up tags at the VMware datacenter level to be able to specify which zone it's in, and then Kubernetes looks at that. We're still using the cloud config (we're on 1.17), and based on that it determines, hey, where can I schedule you? We've been having some issues, during updates and various things, with those tags disappearing.
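[Editor's note] For reference, the in-tree cloud provider's zone support is wired up through the `[Labels]` section of vsphere.conf, which names the tag categories whose attached tag values become the node's region/zone labels (the category names below are placeholders):

```ini
; vsphere.conf (in-tree vSphere cloud provider, Kubernetes 1.17 era)
[Labels]
region = k8s-region   ; tag category whose attached tag gives the region
zone = k8s-zone       ; tag category whose attached tag gives the zone
```

If those tags vanish from a host, nodes on it lose their topology labels, which is the scheduling breakage H describes.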
H: Well, we've got some different maintenance things going on. Jesse's on from our team that handles the VMware stuff; they're doing some migrations, like from a cluster that maybe has two ESXi hosts to multiple hosts. In that case I think they're actually essentially recreating the clusters, so that's probably just something like that.
H: And I would assume that happens in those cases, but I'm seeing it in cases where I'm not exactly sure when it's happening. I know it's not happening from hardware migrations; we're not doing any in these cases. There are cases where possibly they've been doing some ESXi upgrades, and besides that I don't know what else would possibly be causing it. But it impacts us: the PV stays running until the pod gets rescheduled, and then it says, oh, I can't find it.
B: It can't put it anywhere. Jesse, I don't know, do you have any more details on how you guys are doing vSphere upgrades, and whether you're just ripping hosts in and out or actually upgrading existing hosts in place?
B: Bryson, I would say I've never seen tags just disappear like that, so I would say there's probably something in the upgrade process. Removing the host from the inventory and re-adding it to the inventory again will cause it to get a new ID, so any operation that removes it from the vCenter inventory, where it's then added back in again, will break that ID, and then you would have to replace those tags. So any vSphere lifecycle operation that involves messing with the inventory would cause stuff like that.
A: Yeah, if you want to queue up a follow-up question in Slack or whatever channel, we'd like to know what update mechanism you're using. I think the modern one would be the Lifecycle Manager built into vCenter itself; the prior generation used something called Update Manager.
B: A reconnect is okay; the ID would stay consistent there. But it would break if it was removed from inventory and re-added to inventory, or added as a new host, in any case.
B: So how do you track it down? It's weird, because tags are created at the vCenter level, as are categories, and then they're assigned to hosts, or any other kind of object. But the event log is attached to the object, so if the object gets deleted, the event log goes with it. You don't see it get deleted, because the object doesn't exist anymore. So whenever, say, for example, a host got taken out and a new host, or the same host, got re-added...
B
Then
it's
got
a
fresh
event
log,
because
it's
got
a
fresh
moid,
sorry
managed
object
id.
So
your
event
log
is
going
to
be
empty,
so
all
you
would
see
is
maybe
the
first.
The
very
first
event
in
the
log
would
be
was
added
to
vcenter
right.
So
you
could
take
that,
as
this
is
the
time
that
it
was
added
to
vcenter,
and
maybe
you
could
just
correlate
that
to
that's
when
these
labels
went
away.
A: I'm brainstorming here, and unfortunately, maybe it's a legit issue, Bryson, that you can charge us for writing this for you as a user. But I think it would be possible to write some scripting tools that would periodically crawl through the inventory of all hosts, and, if it's the case that there should be none with zero tags, you could probably flag that, hey, this anomaly has happened, where every host here should have tags. In my view.
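[Editor's note] That crawl could look something like this. The sketch below assumes the vSphere API calls have already produced an inventory snapshot (the category names and data shapes are made up for illustration) and shows only the audit logic:

```python
# Assumed tag-category names; the real ones come from your vsphere.conf Labels.
REQUIRED_CATEGORIES = {"k8s-region", "k8s-zone"}

def audit_host_tags(inventory):
    """Return hosts whose attached tags do not cover the required categories.

    `inventory` maps host name -> set of (category, tag) pairs, e.g. built
    from a govmomi/pyvmomi crawl of the vCenter inventory (not shown here).
    """
    missing = {}
    for host, tags in inventory.items():
        present = {category for category, _tag in tags}
        absent = REQUIRED_CATEGORIES - present
        if absent:
            missing[host] = sorted(absent)
    return missing

# Example: one host lost its zone tag after being re-added to inventory.
snapshot = {
    "esx-01": {("k8s-region", "us-east"), ("k8s-zone", "zone-a")},
    "esx-02": {("k8s-region", "us-east")},  # zone tag disappeared
}
print(audit_host_tags(snapshot))  # -> {'esx-02': ['k8s-zone']}
```

Run on a schedule, a report like this could either alert or, as H suggests next, feed an automatic re-tagging step.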
H: Yeah, we've been looking at that a little. But not just to alert on it; to take action and add it back.
A: Like I say, if you wanted to put that in as a feature request, that seems totally legit, because it's a crazy idea for every customer VMware has to have to write that themselves when it's potentially generically useful. That strikes me, anyway, as falling in the category of something that maybe VMware should take on. I'm not the PM that makes the decision, but still, I'd encourage you, if you come to that conclusion, to just formally submit it as a feature request, and I think we pay attention to those kinds of requests.
D: So that's a real easy way to do it. I mean, it can be done with Knative, OpenFaaS, any of the backends out there. It's just a VMware Fling, so it's another open source project that just runs on Kubernetes that can give you an easy solution to that.
D
So that one is behind closed doors, but there is a tagging-of-VMs example in the community samples. We can get the link here for the VEBA one, but there is a whole repo for the community samples of the VMware Event Broker that William put together, with all of the different examples in different languages.
D
For example, when a host enters maintenance mode, or a new host is added. I mean, it would just be piecing together two existing scripts: there's one in there for when a host is added (someone created one that adds it to a CMDB), and then someone created one for tagging a VM when it gets powered on. So it would just be taking the logic from one and the event from the other and mixing them together.
H
This is probably worth a topic for one of our upcoming meetings, because I think this could benefit anyone. So you're saying (I haven't looked at this) it's sitting in the cluster and paying attention to what's happening. Sorry, so it's sitting inside of Kubernetes and paying attention to what's happening on the vCenter? Is that what's happening?
D
So basically, VEBA can be deployed either as an OVA or as a Helm chart onto any Kubernetes cluster, and then you basically give it a config map pointing it at the vCenter, and it is constantly listening to all events that happen. Then you can create scripts in any language you want and tell it what to run.
D
I want: whenever a "VM powered on" event or a "host connected" event fires, run an automation. I think it's 35,000 events that it can work on, which is, I think, the number of VMware events in vSphere, so it can react to basically any event in vSphere. It's not just for this issue; it can be used for any type of issue that you may have. It simply runs on Kubernetes.
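The "event in, automation out" pattern described here can be sketched as a small routing function. This is only an illustrative Python sketch, not VEBA's actual function API; it assumes the rough shape of the CloudEvents the event router emits (a `subject` naming the vSphere event type, and a `data` body carrying the serialized event with fields like `Vm.Name` and `Host.Name`):

```python
def handle_event(cloud_event: dict) -> str:
    """Route a vSphere event payload to an action name.

    `cloud_event` is assumed to look like the JSON body a VEBA-style
    event router delivers: a CloudEvent whose `subject` is the vSphere
    event type and whose `data` carries the event details. The action
    names returned here are hypothetical hooks for your own automation.
    """
    subject = cloud_event.get("subject", "")
    data = cloud_event.get("data", {})
    vm = data.get("Vm", {}).get("Name", "<unknown>")
    host = data.get("Host", {}).get("Name", "<unknown>")

    if subject == "VmPoweredOnEvent":
        return f"tag-vm:{vm}"
    if subject == "HostConnectedEvent":
        return f"reapply-zone-tags:{host}"
    return "ignore"
```

A real function would perform the action (tag the VM, re-apply zone tags) instead of returning a string, but the dispatch logic is the same.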
D
I have it running in my management Kubernetes cluster, but it could run in any Kubernetes cluster, or in the OVA, which is just a single-node Kubernetes cluster and an easy way to get it up and running.
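A Helm install with a config map pointing at vCenter might look roughly like the values override below. This is an illustrative sketch only; the key names are hypothetical, and the real schema comes from the VEBA Helm chart's own documentation.

```yaml
# Hypothetical values.yaml override for a VEBA-style Helm install.
# Key names are illustrative; check the chart's actual values schema.
eventrouter:
  vcenter:
    address: https://vcenter.example.com   # your vCenter endpoint
    username: veba-ro@vsphere.local        # read-only service account
    password: "change-me"                  # better: reference a Secret
    insecure: false                        # true only for lab self-signed certs
  eventProcessor: knative                  # function backend to dispatch to
```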
H
Yeah, that sounds pretty cool. Do you know if it would be able to, like... if in Kubernetes you have Prometheus saying you have a node not ready, would it be able to query vCenter and say, hey, the node is shut down, it's down, so of course the node's not ready? Or look at vCenter and say, hey, vCenter says it's running and it's still not ready, so there's actually probably a problem on the Kubernetes side, versus the node just being shut down?
D
Basically, you could run an automation any time a VM is powered off to update it in, for example, your CMDB, or update it in Prometheus to suspend alerts from the Alertmanager, or something of that sort. You could do that because you can have any automation run on any event in vSphere through this project.

H
Yeah, that sounds...

D
Exactly, and it's actually immediate, versus polling on an interval, where you may still have some false positives during the five-minute polling (or whatever polling interval you set). Here, the event happens, the automation happens, and it's immediately afterwards. So literally your delta time there, between it happening and a fake alert coming out, is the time that your automation takes to run.
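As a concrete version of the "suspend alerts when a VM is powered off" idea: the function could create a silence through Alertmanager's v2 API (`POST /api/v2/silences`). A minimal sketch of building that payload, assuming node-level alerts carry an `instance` label matching the VM name (your label scheme may differ):

```python
from datetime import datetime, timedelta, timezone


def build_silence(vm_name: str, minutes: int = 30) -> dict:
    """Build an Alertmanager v2 silence payload for one node's alerts.

    Assumes alerts for the node carry an `instance` label equal to the
    VM name; adjust the matcher to your own labeling scheme. The caller
    would POST this dict as JSON to /api/v2/silences.
    """
    now = datetime.now(timezone.utc)
    return {
        "matchers": [
            {"name": "instance", "value": vm_name, "isRegex": False}
        ],
        "startsAt": now.isoformat(),
        "endsAt": (now + timedelta(minutes=minutes)).isoformat(),
        "createdBy": "veba-vm-powered-off-function",
        "comment": f"VM {vm_name} was powered off in vCenter",
    }
```

Because the function fires on the power-off event itself, the silence lands before the next scrape can raise a node-down alert, which is exactly the immediacy being described.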
D
I mean, I have 800 different automations that I run through VEBA, and they all scale to zero. So it's zero pods (well, it's one for the VEBA event router), and then everything else is at zero, and whenever an event happens, it just brings up the pod and runs my script. So it's a very easy way to still be resource-conscious, not consume too many resources, and get these capabilities.
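The scale-to-zero behavior comes from the Knative autoscaler: a function stays at zero replicas until an event arrives. A hypothetical Knative Service manifest for one of those functions might look like the sketch below (the name and image are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: vm-powered-on-fn            # hypothetical function name
spec:
  template:
    metadata:
      annotations:
        # Allow scaling down to zero pods between events
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: registry.example.com/vm-powered-on-fn:latest  # placeholder
```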
D
Yeah, I know. I originally built for OpenFaaS, which was the original backend (it's now Knative). I built the original PowerCLI language integration into OpenFaaS, and then William took that and put it into VEBA, which is awesome, and then put together the project. Michael built the event router, which is the key component there, which is pretty amazing, yeah.
A
Okay, a time check here: we're at the official end of the meeting, but before we go, I wanted to check something. Miles and I came up with the idea (mostly Miles) of having this "bring us your problems" session, and I want to clarify whether people found this to be useful. We were a little skeptical as to whether problems would come here and we'd have a bunch of dead air, but that certainly didn't happen. Can we hear from some users, and also from the other vendor?
H
In the future... I mean, just here at the end, it was useful to start a conversation to come up with more ideas of things that we might want to discuss in the future. So I don't know if you want to do it like... we only do this once a month, so maybe every six months, and kind of determine what we're going to do for the next five sessions or so, or at least come up with some of those ideas, and we can adjust those as we go.
D
I think it went great, and I love this idea. I think that if the idea comes up in advance in the future to do this, if we say every six months or whatever it is, we should post it out in more forums as well that this exists, because not everyone knows of the user group, but there's the Kubernetes channel in the VMware {code} Slack.
A
Okay, with that said, it is four minutes after, so we've gone over an hour. I'm going to call this to a close, but hey, great meeting. Thanks, everybody, for attending and contributing to this, and we'll see you again the first Thursday of next month. Bye.