From YouTube: Kubernetes UG VMware 20200702
Description
The July 2, 2020 meeting of the Kubernetes VMware user group covered upcoming features in the CSI storage plug-in and Windows container support when running Kubernetes on the vSphere platform.
A
Okay, welcome to the July meeting of the Kubernetes VMware user group. On the agenda today, we're planning to talk about what's upcoming with regard to the CSI storage plug-in with Kubernetes, but we're waiting for that speaker to arrive, and while we're doing that, Robert had asked that we give an update on the status of Windows containers.
A
Yeah, well, you know, that's sort of the job of a chair, I think: to drive these meetings and make them interesting. So I actually appreciate it. Now, can you see a desktop with PowerPoint running?
B
A
Okay, and I'm just going to leave it in this mode rather than presenter mode, just because it's a little easier to get to the Zoom moderation controls if I do it. But yeah, Robert, with regard to the short notice: I'd rather have some asks from actual users than have to invent stuff on my own. So even if you want to jump in there 15 minutes before, go ahead; a day before gives me a lot more headroom, but it's always appreciated.
A
You're welcome to interpret how stable that is; a lot of enterprises still aren't comfortable using a 1.0 release even when the features have been marked as stable.
A
There's a doc link on that slide, so they're documented in the official Kubernetes documentation, and I just asked some of the developers of that feature what its status is, and they said to me just within the last hour that they consider them quite stable. However, Kubernetes doesn't stand in isolation. You know that the whole CNCF landscape has a complete tool chain of other components, which includes CNI network plug-ins, storage, etc., and I think there may indeed be issues with some of those.
A
For
example,
you
before
you
make
a
commitment
to
go,
take
this
into
production
in
your
organization,
you
there's
more
than
just
kubernetes
involved,
so
you
might
want
to
take
a
look
at
you
know
your
choice
of
cni,
plug-in
storage,
plug-in
to
make
sure
that
those
are
capable
of
a
plug-and-play
in
a
windows
environment,
but
at
least
some
of
them
are
so
with
regard
to
kubernetes
itself,
I
think
it's
designated
as
ready
to
go.
Why
would
you
want
to
use
them?
A
I mean, there are people who have an issue where they have a big development staff that aren't ready to be retired or put out to pasture, and you know many organizations are faced with using the skill set of the people that they have now. If you've got an army of people who are already trained in .NET, then for obvious reasons you'd like to utilize these people, and Windows containers are the way you would go about doing that. So it gives you a way to containerize existing .NET apps.
A
I'll let the speakers we'll line up next month cover how that's done, but I believe that in some cases your choice could be to containerize them using a Linux version of .NET, while in others Windows containers are going to be a more attractive part of that story. And I understand there are a lot of people looking at this, just because some of these Windows platforms are either end of life already or going that way, so it's not a case of whether you need a new platform.
A
If you need a new platform, you do, and there are people looking at this now. David, you said that you've been looking at this very recently too; maybe you have a few remarks.
C
Yeah, so in an upcoming product release there's supposed to be some support for Azure, so naturally it lends itself to looking at Windows-related stuff. One of the things that I have to do is validate our components and stuff like that in CI/CD pipelines, and one of the line items is running stuff on Windows. We do all our CI/CD stuff in Jenkins right now (sorry, the phone's going off here), and we do Concourse and Runway too, but most of our stuff is in Jenkins, and it requires a Windows executor, basically just a node, in order to run natively on Windows. And based on my experience...
B
C
My experience: I initially thought, hey, let's not start with the latest Windows, because that's probably going to work (that's 2019), so I started by building a Windows executor node on Windows Server 2016, and I ran into a number of problems really quickly. So when they talk about Windows support...
C
I could get Docker and all the other components to install on Windows Server 2016, but I quickly found that the support for it was very lacking, and I mean very lacking: when you're running Docker and just doing a docker run of a hello-world, I immediately ran into issues where, I guess, whatever format they're using to pull containers from Docker registry or GCR was just no longer supported on that platform. It's pretty obvious now: I started pulling a bunch of random containers and random images, ran into that same problem, and ultimately came to the conclusion that I needed to go to Windows Server 2019 and only use that platform. I actually found some documentation in Kubernetes land which I'm going to drop in the chat here, so everyone can take a look at it.
C
So basically I found that 2019 was the only, quote-unquote, supported platform for running containers on Windows. After I switched to 2019, everything just started magically working: hello-world and all these other container runtimes. I'm not saying that you can't get it to run on 2016...
C
It's just that I think your uphill curve is going to be high to get that thing to actually work, and I don't know how long they're going to continue to support it. There were a number of things I found where it just didn't seem like they were treating 2016 as something they were going to continue to support with the latest stuff and the underlying pieces it required. So at least that's what my experience was with it.
D
I found the same sort of thing when I was trying to record demos: I had to use 2019. I was trying to use 2016 for ages and I just couldn't make it work properly. And if you want the newer stuff, like WSL 2 and all those other nice bits and pieces that are built into the new versions of Windows 10, then you have to go to 2019. That includes the single TCP/IP stack for the WSL stuff and the Windows stuff.
B
So yeah, what I was also interested in was understanding the level of community support beyond just Kubernetes itself. So, thinking in terms of things like observability, you know, Prometheus, or services that run as a sidecar: the various projects and integrations that we rely on to make a complete functioning stack. It's not clear to me how much those projects have to think specifically about the Windows container use case.
B
I think it kind of depends on the project and the tooling, but to make it operational in production requires, you know, quite broad community acceptance and representation of Windows as a runtime in those communities. So I'm kind of wondering whether, beyond Kubernetes, you've run into things, or if maybe I'm overstating the difficulty.
D
I think, well, from the conversations I've had with customers, I haven't met one that wants to run stuff on Windows seriously, like Windows Kubernetes seriously. I mean, I guess the argument is...
D
So
there's
you
know
just
fundamental
things
that
are
not
quite
there
because
of
the
differences
between
the
platforms,
and
I
think,
if
you're
taking
your
old
applications
and
just
sticking
them
in
a
container
and
saying
hey,
look,
it
runs
on
kids
now,
which
is
more
often
the
fact.
When
it
comes
to
those
old
net,
then
you're
gonna
have
a
bad
time.
D
If you take the time to take a .NET app and maybe just port it to .NET Core or to Mono or another framework, the bulk of the business logic would be untouched; you change some of the underlying libraries and let it run on Linux, and you're going to have a much better time in production than you would have trying to, you know, fit a square peg.
B
Yeah, so the use case I've been confronted with is the wish to port a pretty classical, non-cloud-native application into containers. The reason is mostly the deployment model: they like the fact that it's easier to pipeline, and that's it, all other considerations aside. That's why they want it. Now, you can argue whether that should be enough of a reason or not, and I totally agree with everything you just said, but that's not enough reason for this particular team to stop.
A
And yeah, I think there is a camp that really wants this to work. It's just that, I think, the people working on Windows with Kubernetes say that the core piece of it works, but this story of your storage drivers and CNI networking is something you'd better look at with a microscope at this stage. Yeah.
A
One other thing I want to throw out there: I have to confess that I'm less than a newbie on this, because I've never even tried to fire it up, but it sounds like a few of you on the call have. I'm even wondering about licensing and things. Let's just say that I'm geek enough that I want to try to get Windows containers running in my home lab.
D
You can evaluate all their products, but you need to reinstall every 180 days, which is kind of a pain. That's the way I've run labs in the past: I would just use the thing and then blow it away and rebuild it every half a year. So it's kind of a pain, but it can be done.
A
For example, I found that if you're going to go ARM versus x86, if you have mixed nodes in your Kubernetes cluster (some of which are ARM, some of which are x86), you might be kind of swimming upstream against the current. Is Windows like that? If I attempted to have a Kubernetes cluster that had a mix of worker nodes, some of which were Linux and some of which were Windows, is that viable?
D
I mean, I think that comes back to the ARM versus x86 conversation, because it's basically the same thing, and like David said, there's just random stuff that doesn't work. I run a Raspberry Pi ARM Kubernetes cluster here at home, and I would say for 50 to 60 percent of the containers I run on it I've had to build custom images, because they're not compiled for ARM. And you're going to have the exact same thing.
D
Whenever
it
comes
to
windows,
some
stuff
sure
is
going
to
work,
but
I
would
say
it's
going
to
be
a
very
on
type
experience
where
you're
going
to
have
to
set
up
your
mci
pipelines
for
every
container.
You
want
to
run.
B
I'm very curious about the Microsoft support for this, because, I mean, they're also doing a lot of work themselves for AKS, where they've built a lot of that automation and logic, and they've obviously enabled a lot of that stuff. I'm sure a lot of it is exposed through, you know, Azure DevOps, for example. I'm just wondering how much of that they're willing to open-source, as it were, to create frameworks and methods that are actually usable.
A
Yeah, I doubt anybody on this call can speak for Microsoft's plans, although, to my knowledge, maybe there is some sort of a group associated with Microsoft. There's a cloud provider working group; that's a little different, though. That's more addressed at running Kubernetes on the Azure cloud, which is potentially orthogonal to Windows containers, and to my knowledge there is no user group for running Kubernetes on the Windows platform.
A
This is pretty much segregated to open source, and I can't talk about future VMware products; even if I could, the Kubernetes SIG, working group, and user group rules would prevent me from talking about commercial products and turning this into a product pitch. But I think next month we may have some people who can address this a little better than I can. At this point, the seven people in this meeting right now are just speculators and not authoritative. Sorry.
C
And just to throw in a little bit more info on that link I gave out in the chat: the recommended way of supporting Windows containers in Kubernetes is having Windows-only worker nodes hooked up to Linux masters, and then you can have a mix, which I've set up. It hasn't been pretty, let's just put it that way, and you need to effectively use zones if you really care about how things run and the stability of everything.
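The topology being described here (Windows workers joined to a Linux control plane, with workloads steered by OS) is usually expressed with the built-in OS node label. A minimal sketch, assuming a cluster whose nodes carry the standard kubernetes.io/os label; the pod name and image are hypothetical:

```yaml
# Pin a Windows workload to Windows worker nodes via the built-in OS label.
apiVersion: v1
kind: Pod
metadata:
  name: iis-example            # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows  # keeps the scheduler off the Linux nodes
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
```

In practice the Windows nodes are often also tainted, so Linux-only DaemonSets and sidecars don't land on them by accident.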
A
Yeah, the other thing that comes to mind, if this is like x86 versus ARM (and I suspect it is), is your container image registry: whether you'd elect to try to commingle them in one Docker container image registry or separate them into two different ones.
A
I
think
there's
a
fair
number
of
moving
parts
that
would
come
into
consideration,
but
this
is
all
good
because
it
sounds
like
we'll
have
a
rich
topic
for
a
month
from
now
when
we
can
recruit
speakers
I'll,
I
will
take
the
challenge
and
go
out
and
find
people
to
give
a
a
deeper
dive.
So
it's
interesting
that
we've
had
we've
got
all
these
questions
because
I
think,
there's
and
then
also
just.
A
Well, I know that there is at least one CNI, which is Antrea, which I think has that at a minimum on the roadmap, if it isn't already supported. Obviously, if you don't have a CNI at all, it wouldn't run, so there's got to be some CNI out there: if people declared that this was stable, they had to have some platform to test it on. I don't know what that CNI is, but there has to be one out there.
A
It definitely does. It's Open vSwitch, and I believe Open vSwitch is supportable on Windows now. That said, I'm not sure if it's declared a stable release for Windows support there yet, but I definitely do know that if it isn't already stable, it's on the roadmap, and their tech is capable of supporting the Windows platform.
A
The other thing that I think Robert brought up was the sidecar, so the Istio service mesh would be another interesting story. I mean, if you've got sidecars... I honestly don't know; I'm too much of a noob here to know whether every sidecar you have also has to be a Windows container, but I suspect it does, and that would imply that if you were going to use the Istio service mesh, maybe that becomes an issue.
A
I don't suppose anybody here in the meeting happens to know. Once again, I'll see what I can get for the next meeting. We're kind of going off on a tangent, because this particular group is supposed to be constrained to the vSphere platform, but I'll accept this challenge, just because...
A
It should indeed run on the vSphere platform, and I think there's a huge intersection of people who are interested in Windows containers wanting to run on-prem. So we're not oblivious to the fact that if there's demand for this at all, there's going to be demand for doing it on the vSphere platform. On that basis, I'm going to declare that it's within scope for this group, especially since I'm not aware of another.
D
So Xing has kindly agreed to join us. She's one of the chairs of SIG Storage, I believe, so she's going to take us through what's new and what's coming in CSI in 1.19 and 1.20 as well: you know, what's alpha, what's beta, what's going stable, and just give us a general overview of where things are headed with that project.
A
And
I
just
made
you
a
co-host,
so
hopefully
you
saw
that
in
the
zoom
interface
we
can't
hear
your
needed.
E
Sorry. Thanks, Miles, for the introduction, and let me share my screen. Yeah, so my name is Xing Yang. I work on the cloud storage team at VMware, and I'm one of the co-chairs of SIG Storage, so I'll be talking about what's coming in the 1.19 and 1.20 releases for Kubernetes CSI.
E
Sure, yeah. So we do have quite a few alpha features planned for 1.19, so let me just go over them.
E
This is both on the controller plug-in side and on the node plug-in side. So in Kubernetes we will be having a volume health monitoring controller that communicates with the CSI controller plugin to get this information, and those will be reported on the PVC as events; on the node side, we have an agent on the node which communicates with the node plugin, and that will be reported on the pod.
E
So users can look at those events and decide what to do with those volumes, because right now, without these events, after a volume is provisioned, Kubernetes doesn't really communicate with the storage system anymore to see whether the volume is healthy or not. So this should be very helpful. We are working on the implementation; the KEP was actually merged in 1.18, so we are wrapping up the implementation for the controller and the agent.
E
I think this one is in good shape to get into 1.19 as an alpha feature. And the next one is CSI storage capacity tracking: this allows the CSI driver to report capacity, which is associated with the node topology and storage class.
E
The KEP for this one was merged in 1.19, and there are several PRs out there being reviewed. I think this one is also in good shape; it should be merged by the deadline.
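As a sketch of what opting in to capacity tracking looks like from the driver's side: the proposal adds a storageCapacity switch to the CSIDriver object, after which the external-provisioner publishes per-topology capacity objects the scheduler can consult. The field name and driver name below follow the KEP under review at the time and are assumptions, not the final released API:

```yaml
# Driver opts in to capacity tracking; the scheduler can then avoid
# placing a pod on a node whose topology segment lacks the capacity
# for a pending volume.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com   # hypothetical driver name
spec:
  storageCapacity: true
```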
E
And the next one is recovery from volume expansion failure. This one is actually one of the bug fixes that is required for the volume expansion feature to go GA, so it's also targeting 1.19.
This basically allows a user to recover from a volume expansion failure caused by insufficient quota. Without this fix, if you try to expand a volume and it fails because there's not enough quota, then there is no way to recover from it. This actually allows you to do that: you can retry with a smaller size, and then hopefully that will be smaller than the quota.
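The recovery being described amounts to lowering the PVC's requested size after a failed expansion and letting the controller retry. A rough sketch with a hypothetical claim:

```yaml
# After an expansion request (say to 100Gi) fails on quota, the fix
# lets you lower spec.resources.requests.storage and retry.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc               # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi            # retried size, below the failed request
```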
E
There's a PR out being reviewed for this one as well; I think it should get in. And the next one is the CSI generic inline ephemeral volume feature. Right now we actually already have a CSI inline volume feature, which is already in beta, that defines the inline volume...
E
...in the pod definition, but that one is not very flexible. So this new proposal basically wants to have a new type, which is also inline but is more like a PV/PVC style, so we can actually make some placement decisions to make sure that we actually have enough storage; but it is still ephemeral, and basically has the same life cycle as the pod.
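The "inline but PV/PVC style" shape being described puts a PVC template directly in the pod's volume list, so the volume is provisioned like a normal claim but deleted with the pod. This sketch follows the generic ephemeral volume proposal as written; the exact field names should be checked against the API that actually shipped:

```yaml
# A generic ephemeral volume: provisioned through the normal PVC path,
# with the same lifecycle as the pod.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```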
E
The KEP was merged targeting 1.19, but I think during reviews there were some concerns about this API. One is that this is another API that is about ephemeral volumes: is it possible to combine this one with the existing inline volume? So there are concerns like that, and there are also security concerns with this.
E
I think there are discussions on the reviews of those PRs, so I'm not sure this one can get in; it might not. And the next one is allowing a CSI driver to skip SELinux relabeling. This one basically just adds an option, a flag in the CSIDriver spec, for a driver to say whether it wants to support this or not, because if you don't, then basically by default it's going to do the relabeling for all the files in the file system.
E
The next one basically also has a new option added in the CSIDriver spec, to say whether we want to support fsGroup or not. But with this one, I think there are also some concerns, some questions raised in the review, to decide whether it should be combined with other options.
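The per-driver fsGroup switch mentioned here was proposed as an fsGroupPolicy field on the CSIDriver object; the field and values below are taken from that proposal and are an assumption against whatever finally merged:

```yaml
# Declare whether this driver's volumes support fsGroup-based ownership
# changes ("File"), never need them ("None"), or should be decided
# per volume ("ReadWriteOnceWithFSType").
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.com   # hypothetical driver name
spec:
  fsGroupPolicy: File
```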
E
So this one is also in question as to whether it will be merged or not in 1.19. And the next one is CSI migration. In general, CSI migration is already a beta feature; for the vSphere CSI driver we just started it in this release, so we want to target beta, but I think there are some requirements for CSI testing results to be published, and we are still working on that.
E
If we get that done before the merge deadline, which is actually next Thursday, then it's still possible for us to go beta; otherwise it's going to be alpha. We actually already have the in-tree changes merged, which include a translation library that allows us to migrate from the in-tree vSphere driver to the CSI vSphere driver.
E
There is one requirement for the CSI migration to work: vSphere needs to be at 7.0 U1.
E
So if you are at a version before that, then this solution will not work. So right now we are looking at whether there are some other migration solutions if customers cannot upgrade to 7.0 U1, because the plan is that in the 1.20 release the in-tree driver will be removed.
D
Just awaiting the migration paths for each of those cloud providers to be implemented before removing the core code, then?
E
Yeah, so all of those cloud provider drivers need to have this migration path before the in-tree driver can be removed, and all of them are already either alpha or beta. Some are already beta.
E
I think the problem right now is that we are trying to find out (maybe you also have some information on this one) how many customers are still at earlier versions, and whether it is possible for them to upgrade to 7.0 U1.
E
If not, then we need to figure out some other ways, because our CSI driver itself can actually support lower vSphere versions, 6.7 U3, I believe. So if a customer has been using that and they don't need a migration from in-tree, then that's fine; they can actually stay at the older vSphere version. But if they need to migrate from in-tree to CSI, then here's the requirement; otherwise we need to figure out some other ways.
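Migration itself is switched on per in-tree driver with feature gates on the control plane and kubelet. A kubeadm-style sketch, assuming the gate names used for the vSphere migration (CSIMigration plus CSIMigrationvSphere); check the release notes for the exact gate names in your version:

```yaml
# kubeadm ClusterConfiguration fragment enabling the vSphere CSI
# migration on the controller manager; kubelet needs the same gates.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    feature-gates: "CSIMigration=true,CSIMigrationvSphere=true"
```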
A
Wow, this is timely, because you arrived late, but the first half of the meeting was spent discussing Windows container support, so it's great that you're covering it.
E
Yeah, so there is a KEP; it's actually already alpha, and we're targeting beta in 1.19 for CSI Windows support, but I'm not sure whether it's going to make it. I know there is a group of people who actually have regular meetings and work on this.
E
But I'm not sure if it's going to be beta; it may not, because I have not got any update on that one. At least right now it is already alpha. But I don't know that we have done any testing in this area; of that I'm not sure.
D
I was asking because we were just talking earlier on about some of the external dependencies that CSI might have, for example the mount command and things like that. How do we implement a driver that can work on either Windows or Linux? There would have to be some sort of object that determines the platform and then changes how volumes get mounted and everything, based on what the platform is.
E
That's a good question. So yeah, I don't think we have done any support for this, but I know that for this particular implementation there's a component called csi-proxy, so I think for Windows support you do have to implement it that way. So I think it has to be different; I think the code will be very different, but I just have not really looked at the driver.
A
There's
an
effort
to
integrate
support
of
that
csi
spec
into
kubernetes
itself,
and
then
third,
there
are
individual
implementations
of
csi
compliant
storage,
plugins
and
some
of
those
may
or
may
not
have
issues.
When
running
on
windows.
Right, I mean, anything from some of those that interface to custom hardware, HBAs, to some of them that interface to storage over pretty generic networking, you know, iSCSI, TCP, whatever. But maybe you could tell us what all the potential issues are if somebody wanted to get storage support for running Windows containers.
E
The CSI spec, at least, is the same, but we do have a sub-project; it's actually under kubernetes-csi.
E
We
do
have
a
component
called
csr
proxy,
so
that
is
for
windows
support,
but
I'm
actually
really
did
not
look
into
it
to
see
how
that
works,
and
also
our
driver
has
not
really
tried
to
qualify
this
one.
Yet
so
is
that.
E
Yeah, definitely, it is challenging. I know there are a few people working on that, and they have a lot of issues, but at least now we are incorporating the Windows build into our sidecars: all of our sidecars now have builds for Windows as well.
E
Do
you
need
any?
Do
you
want
me
to
find
more
information
on
this?
I
can
look
at
some
existing
documentation
with
our
help.
B
Okay, so what you're saying here should definitely be covered; you should zoom in on that during that session, because that's exactly the information I was asking about earlier in the call. So that's good.
A
CSI... hopefully we can get a link to this deck, or if you can shoot it to me on Slack, I'll...
A
A few links on where people can get involved, if they want to get really hardcore on CSI storage and Kubernetes: because, you know, this is a user group for vSphere, but certainly within the Kubernetes project, storage itself is a very active group, with a lot of sessions that might be...
A
...worthwhile if you're a storage geek. And this goes beyond potentially CSI, too; I mean, the implication is that if you're using CSI storage, you're perhaps trying to run stateful apps, and if you're going into production, you need to be worried about more than just your storage interface. There's a whole realm of how you would do DR planning, backup, restore, and snapshotting of volumes, and a lot of that stuff is moving to a pretty stable condition.
A
But I think it might be helpful if we can drop some links about where people can go to get involved and learn more about all of what's going on, especially since this is a user-focused group. The Kubernetes project is always asking for feedback from users, because unfortunately, I think, some of these groups are just wall-to-wall developers, with very few users showing up, and it's always great to get a vigorous channel going from users, to make sure that what's getting built by the developers actually meets the needs of the users.
B
Okay, great. I have a question for you, Stephen, and for Miles. Currently a number of the VMware products use a customized version of the CSI driver, and I've added a link in the chat to Cormac Hogan's post describing the differences in support between the different storage abstraction layers in the vSphere products, which drivers are being used, and how those features differ. And it's fine if you're not able to speak to this.
D
So
I
had
a
conversation
earlier
on
with
product
managers
about
this,
and
obviously
we
can't
talk
about
futures.
You
know
anything
he
says
under
the
disclaimer
of
you
know,
there's
no
promises.
D
For the time being, we're going to maintain two separate CSI drivers. There's the one for vSphere with Kubernetes, which Cormac has listed in separate columns, and then there's the upstream CSI driver, and there is some feature lag on the one in vSphere with Kubernetes, because we've had to re-implement some of those features using something called the pvCSI driver; it's got two parts, and we need to re-implement some of that. There is a desire to try and coalesce that and make the re-implementation bit sort of go away over time.
D
What the time scale looks like on that, I don't even know, so I can't tell you, but ideally less code is better, right? If there's less code, there are fewer bugs. So if we have to implement it twice, there's going to be, you know, twice the scope for things to go wrong.
So yes, in an ideal world we would like to coalesce that into one over time. As for the feasibility of that, I guess we're just going to have to see, but there's the feature matrix that Cormac has here.
A
And I think you're talking about the Kubernetes bundled with vSphere; we do have other Kubernetes distros that maybe have a different story, and in addition to that we're fairly committed to supporting pure upstream Kubernetes, as well as Kubernetes distros by other vendors, like Red Hat OpenShift. So I wish it wasn't so complex a story; the fact that you even need a blog post with a matrix tells you that maybe it isn't a perfect world. But it is what it is, and we're trying to listen.
D
But as it stands, Robert, the upstream CSI driver has more features in it: it's got ReadWriteMany support and, you know, the offline volume grow that Xing was working on. So there's stuff in the upstream CSI driver that isn't in the vSphere with Kubernetes one, again because there's a lag: it was implemented afterwards. So it'll be playing catch-up for a little while, but it will get to feature parity at some point.
E
Yeah, so for the resize, at least, we actually already finished the code for the guest cluster support, so we do have resize for the guest cluster, at least, in the next release.
D
The upstream driver has more features than, you know, the custom one. So if you're running TKGI or, you know, native TKG, you're going to get that upstream experience, versus something like vSphere with Kubernetes, where you hit the custom one. And like Steve said, you know, it'll run with OpenShift and Rancher, Docker EE, whatever it is that you want. So it's not like if you're running Kubernetes on vSphere you only get this subset.
A
And you know, we're always trying to help here. The Slack channel for the Kubernetes user group gets a pretty regular stream of questions, so if you can help us with feedback on how well this is documented, I'm interested in that. You know, this Cormac blog post is a great resource, but tell us where things are confusing or where things could be done better, and I'll take it as a mission to make sure that the documentation story on the support is as simple as it can be.
A
Are there any other topics that any members want to bring up and put on the table? I think we've got a rich set of things queued up to discuss for next meeting, but if you've got any ideas for things you'd like at a future meeting, now's the time to mention it. You can challenge Miles and me to go recruit speakers on the topic of your desires.
D
Yeah, next time, actually, just for sort of a sneak peek (I put it in the doc there), we've got Dan Finneran again from VMware, and he's going to come talk to us about kube-vip, which is a new load balancer that they built internally (well, internally, but it's open source) to sort of act as a replacement for HAProxy. You can use it inside Kubernetes clusters or outside Kubernetes clusters.
D
It's
kind
of
like
metal
load
balancer,
but
it
lets
you
load
balance
the
control
planes
as
well,
not
just
the
services
which
is
kind
of
cool,
and
I
had
a
chat
with
him
earlier
on
today
about
adding
support
for
maybe
claiming
addresses
from
dhcp
so
that
you
don't
have
to
give
it
a
static
block
and
a
sign
from
a
static
block.
It
would
just
claim
release
and
then
use
it
for
vip.
A
That sounds great. I think you missed that one session, but we had Bryson from Walmart asking for this a month or two ago, and we covered MetalLB and some concerns people have. But that topic of running a load balancer on-prem is always a challenge. You know, when you host in one of the major cloud providers...
A
They of course provide a load balancer as a service for you, but as soon as you drop down on-prem, you own the load balancer and the ingress controllers. I think it's a mixed story, where Kubernetes allows maximal flexibility by not forcing one selection down your throat, but it means that you do have to do some research to find one, and they're not all the same. So it sounds like a great topic for next time. Anybody out there interested in that?
A
If you've got specific questions for the speaker, why don't you either drop them in the Slack channel or put them in the agenda doc? It's always nice if we can give a speaker a little runway to, you know, maybe do a slide or two on a question, rather than just dropping it on them with a few seconds' notice. I know when I'm speaking, I always prefer to get questions in advance if it's possible.
A
Okay, last call: does anybody have anything else to nominate for talking about today or for the next meeting? If not, I'll shut down the recording and declare this done. Going, going, gone? Okay, thanks, everyone, for attending. The next meeting is going to be Thursday, the first week of August, and we'll see you then. Bye.