From YouTube: Kubernetes SIG Windows 20190716
A: Alright, so looking at the agenda (and I'll paste this over here in the chat window), if anybody has other items they want to add, feel free to paste them on the list. If for some reason you don't have access, I just dropped that in the chat window, and I'll also check back again once we get through the first couple of items.
A: To talk more about where Windows containers is going with Kubernetes, we've got a couple of people from the Windows engineering team that'll be there. I'll be there, and hopefully I'll see some of you there as well. It's kind of a last-minute thing we set up, but we hope that it's helpful, and if it's good we'll do it again. So there's a sign-up link there, and that's going to be held at a building that's over on the Microsoft main campus.
A: A lot of people still had samples out there, and even some tests that we found were still using the v1beta1 API versions, and since those have been promoted to stable in either 1.14 or 1.15, they're actually removing the ability to serve those from the beta versions in 1.16. So make sure things are updated if you're seeing any weird failures while other tests pass. And then also, one of the alpha scheduler...
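A minimal sketch (assuming cluster access via a local kubeconfig; not something shown in the meeting) of one way to check whether a deprecated beta group/version, apps/v1beta1 here, is still served before moving to 1.16:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the beta group/version is still served.
	if _, err := dc.ServerResourcesForGroupVersion("apps/v1beta1"); err != nil {
		fmt.Println("apps/v1beta1 is not served; update manifests to apps/v1")
	} else {
		fmt.Println("apps/v1beta1 is still served")
	}
}
```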
B: Yeah, so basically it's pretty much what Patrick said: we're kind of behind with all these changes, and we see them in the periodic jobs, and obviously we fix AKS Engine fast, so the jobs are not down for a significant period of time, but still, it's not a pleasant situation. So the main problem that happens is that you have these small changes, like API changes, or certain flags being removed, or, like in the case of the node label...
B: You just had a minor change in what is allowed and not allowed, there's a node label, and AKS Engine, the tool we use to deploy the clusters for tests, obviously cannot keep up. It's not meant to run 1.16, the latest master; basically, its job is to validate AKS Engine PRs and run a subset of tests. The main issue is just that a certain PR will break them badly, in the sense that we can't deploy the cluster or something like that.
B: You know, for more refined issues, smaller issues and stuff like that, we probably do have the periodic jobs to figure that out. The main problem, as I said, is just to figure out if we're in a situation where we cannot deploy the cluster; it would be great to have this information before anything merges.
B: So basically there are two problems here. One, we know we can't really run on each and every possible PR on kubernetes; there are a lot of them, and the full conformance run that we have in staging takes around two to two and a half hours, depending, and there is a high chance that we have flakes that will invalidate the whole run.
B: Basically, what I thought is that maybe we can just run on a subset of changes, so changes that only affect, you know, certain packages (we'd have to decide exactly where), and only run a subset of conformance tests. Now, in that issue I linked a list that Claudiu compiled; it takes tests from each conformance subgroup, and there are around 45 tests there. That will, one, reduce the time, and these are tests that are known to pass consistently.
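A rough sketch of how such a curated subset could be invoked, assuming a locally built e2e.test binary; the test names below are placeholders, not the actual list from the linked issue:

```go
package main

import (
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	// Placeholder subset; the real ~45-test list lives in the linked issue.
	subset := []string{
		"[sig-network] DNS should provide DNS for the cluster",
		"[sig-storage] EmptyDir volumes should support (non-root,0644,default)",
	}
	// Escape each name so it matches literally inside the focus regex.
	escaped := make([]string, len(subset))
	for i, t := range subset {
		escaped[i] = regexp.QuoteMeta(t)
	}
	// Run e2e.test focused on just those tests.
	cmd := exec.Command("./e2e.test",
		"--ginkgo.focus="+strings.Join(escaped, "|"),
		"--provider=skeleton")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```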
B: So there's no flakiness that will just invalidate a run. Now, the reason I opened this issue is, one, so the whole group can discuss it and see if you think maybe we should have some other tests, and especially to figure out the areas and components where you think this job should run or not. And of course, second, as a reference for when we're looking to take this to SIG Testing, so they have something to reference, and there's a discussion within this group.
A: Okay, yes, that's it. All right, sounds good. Thank you very much for opening this, and then we'll track the feedback there. Just...
B: One second, sorry about it. I saw in the comments that Lumiere was asking about the duration of the tests. To come to that: yes, on Windows it takes a little bit more, so it's like two hours for a full conformance run with the tests in parallel. Not very much in parallel, I mean; I think we just have four parallel nodes. And, as Claudiu mentioned, most of the time is because there are the slow tests in conformance that would take around 20 to 30 minutes to complete.
B: Yeah, that's actually my intention: after we have some comments and some ideas on this issue, to go to SIG Testing to suggest things. So, as we mentioned, we already have a presubmit job, but that runs the full conformance. The intention for that one is mostly to be triggered on PRs that we know might actually matter; for example, if we're adding a new test, or changing features, or stuff like that, our developers should just run that on Windows. It's obviously non-blocking and doesn't run all the time.
A: Yeah, okay, sounds good. All right, thanks. A few notes on that for today. So I think I'd like to go ahead and move on to the discussion around storage with Deep. I've got the issue in the meeting notes, and I just pasted it here. Did you want to go ahead? Is Deep still on the call? Oh yeah, oh yeah. Hey, do you happen to have a KEP draft? Yes.
D: Alright, so this is a KEP to support CSI plugins on Windows nodes. It builds on some of the things we had discussed earlier. If some of you recall, probably about a month back, Patrick wrote up a set of different alternatives that we can use to support privileged operations on Windows nodes, and the support for CSI plugins kind of falls in that bucket.
D: So, let's see, we start off with a basic summary where we describe why CSI is important. It's a modern gRPC-based standard for implementing external storage plugins, and they're maintained out of tree, so it's much easier for individual storage vendors to maintain them. Then I go down into the motivations, enumerating some of the benefits of CSI, and, you know, the in-tree plugin removal finally coming to light.
D
That's
happening
in
six
storage
and
is
there
something
in
the
chat,
oh
yeah,
so
that's
happening
in
six
storage
and
that
aims
to
get
rid
of
some
of
pretty
much
like
all
the
cloud
provider.
Oriented
entry,
storage,
plug-ins
from
the
kubernetes
core,
and
in
order
to
make
that
initiative
successful,
we
windows
nodes
can
no
longer
depend
on
the
increased
storage
plugins
that
exist
for
image
gcpd
as
your
file
as
your
disk
and
so
on,
and
therefore
we
need
this
sort
of
now.
So
this
becomes
more
important.
D: Some of the goals for the KEP are to make sure all the CSI node plugin operations are supported on Windows nodes. The two critical ones there are stage volume and publish volume. Stage roughly maps to partitioning a disk and formatting it with NTFS, and publish, on the other hand, deals with the linking, or bind-mounting, step associated with making the volume available to a specific container. Expand volume would come down the line, and pretty much all the other operations do not require specific privileged operations on the host.
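For context, those two RPCs map onto the CSI Go bindings roughly like the stub below; this is a minimal sketch, not the KEP's code, and the full csi.NodeServer interface has more methods (NodeGetInfo, NodeGetCapabilities, and so on):

```go
package main

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer sketches only the two node RPCs discussed above.
type nodeServer struct{}

func (ns *nodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	// Stage: locate the attached disk, partition it (MBR), format it with
	// NTFS, and make it available at req.StagingTargetPath. On Windows this
	// is the privileged part a proxy would do on the plugin's behalf.
	return &csi.NodeStageVolumeResponse{}, nil
}

func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	// Publish: link the global staging path to req.TargetPath so the
	// volume becomes visible to one specific container.
	return &csi.NodePublishVolumeResponse{}, nil
}
```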
D: Another goal is to make sure that we can support CSI plugins in a few specific scenarios. One would be attaching to remote storage over iSCSI and SMB, and the other is the direct-attached storage scenarios that happen in cloud environments, like Azure Disk, for example. And the third one is, we want the ability to distribute the CSI node plugins as containers, as is the scenario today for Linux nodes.
D: If a plugin author does not want to go that route, they're free to, you know, come up with a different approach where they deploy the CSI node plugins directly as processes; nothing stops them. But the focus of this KEP is to enable the scenario where node plugins are distributed as containers. In the non-goals, I call out that support for the CSI controller plugin operations is not a priority at this point.
D: Basically, where this becomes a problem is if all the worker nodes in the cluster are Windows and the Linux master nodes do not have scheduling enabled; that's when we would need the CSI controller plugins to also work on Windows nodes. It seems like that should be possible. It's just a matter of recompiling the controller binaries with Windows as the target and basing them on Windows-based images, but we're keeping that as a low priority, and it's a non-goal for now.
D: Going down a bit, in some of the other sections of the KEP, we go over how we're going to version several aspects of the API, and in order to support multiple versions in the same binary, we want to make sure that the API exposed is pretty scoped down, specifically around privileged operations. So, in the context of this KEP, I'm proposing that we keep the privileged proxy process mainly scoped around the storage-based operations that support the stage and publish operations required by CSI node plugins.
D: If we do want to focus on CNI plugins, I propose that we do it in a separate KEP. It can follow a very similar pattern, but keeping the API exposed to a minimal set simplifies support for multiple versions and also reduces the scope for abuse and potential security issues that might arise. So I go into quite a few details; the main important bit is probably, you know, the sample YAML that points out a DaemonSet, where I have managed to get the GCP persistent disk CSI plugin working on Windows.
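The KEP's sample YAML isn't reproduced here, but a DaemonSet of that general shape looks roughly like this in client-go structs; the image, labels, and host paths are illustrative assumptions, not the KEP's exact values:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// csiNodeDaemonSet builds a Windows node-plugin DaemonSet that mounts the
// kubelet plugin directory and the CSI proxy's named pipe from the host.
func csiNodeDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "csi-pd-node-win"} // hypothetical
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-pd-node-win"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
					Containers: []corev1.Container{{
						Name:  "csi-driver",
						Image: "example.com/csi-pd-driver:win", // placeholder
						VolumeMounts: []corev1.VolumeMount{
							{Name: "plugin-dir", MountPath: `C:\var\lib\kubelet\plugins`},
							{Name: "csi-proxy-pipe", MountPath: `\\.\pipe\csi-proxy`},
						},
					}},
					Volumes: []corev1.Volume{
						{Name: "plugin-dir", VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{Path: `C:\var\lib\kubelet\plugins`},
						}},
						{Name: "csi-proxy-pipe", VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{Path: `\\.\pipe\csi-proxy`},
						}},
					},
				},
			},
		},
	}
}
```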
D: Next, I call out that you should be able to have a storage class. Again, this example kind of uses the PD driver, but that's just an example. You should be able to have a storage class referring to a CSI driver, create a PVC based on it, and be able to deploy a pod (SQL Server in this case) that mounts its data on the volume backed by the persistent volume claim created above, from the dynamically provisioned persistent volume from the GCE PD driver.
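A minimal sketch of that flow in client-go structs; the provisioner name is the GCE PD CSI driver's, while the fstype parameter, names, and size are assumptions. A pod (e.g. SQL Server) would then reference the claim in its volumes:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// objects builds a StorageClass pointing at a CSI driver and a PVC that
// dynamically provisions a volume from it.
func objects() (*storagev1.StorageClass, *corev1.PersistentVolumeClaim) {
	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "pd-ntfs"},
		Provisioner: "pd.csi.storage.gke.io",
		// Assumed parameter asking the driver for an NTFS filesystem.
		Parameters: map[string]string{"csi.storage.k8s.io/fstype": "ntfs"},
	}
	className := sc.Name
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "mssql-data"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("100Gi"),
				},
			},
		},
	}
	return sc, pvc
}
```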
D: Next, we go down into the implementation details. I provide a quick overview of what CSI node plugins are and how they interact with controller plugins: controller plugins come and do the first set of operations, which is provisioning a volume and attaching it to the node, and then the node plugins take over and perform the host-specific operations, which involves mapping the disk that just showed up to whatever it is that the CSI controller created and attached. So detecting the disk, based on the different forms of IDs used in different storage environments, is important.
D: That is one of the privileged operations. Next, other things involve partitioning the disk, formatting it, and performing bind mounts. I also call out things that already exist. So this is where I say that, you know, this depends heavily on domain socket support that was already there in Windows Server 2019, and golang supports it starting with version 1.12, so we're covered from the compiler and the base OS side.
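A tiny sketch of what that combination enables, dialing a Unix domain socket on Windows with Go 1.12+; the socket path is a placeholder:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Unix domain sockets work on Windows Server 2019 with Go >= 1.12.
	conn, err := net.Dial("unix", `C:\var\lib\kubelet\plugins\csi.sock`)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected over a Unix domain socket on Windows")
}
```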
D: The next few sections deal with minor enhancements that were necessary in a bunch of in-tree and CSI-related components. The first set deals with certain, you know, minor enhancements in the kubelet plugin watcher, mainly to deal with, again, socket and base paths for Windows; this was pretty simple, a couple of lines. A few changes were also needed in the CSI node-driver-registrar: we need to refactor it slightly in order to build for Windows, but once compiled, it's just a matter of basing it on Nano Server and publishing it.
D: The next section deals with the major new component, which is the CSI proxy process. I call out that this needs to be developed and maintained. It'll expose a named pipe; note that domain sockets cannot be used for this purpose, since Windows does not allow a containerized process to talk to a host process over a domain socket. And finally, I call out that a gRPC-based interface would be used to expose the API from the CSI proxy to the node plugin code.
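A sketch of that transport, assuming the github.com/Microsoft/go-winio library and a hypothetical pipe name, serving gRPC over a Windows named pipe on the host side and dialing it from the plugin side:

```go
package main

import (
	"context"
	"net"
	"time"

	winio "github.com/Microsoft/go-winio"
	"google.golang.org/grpc"
)

const pipePath = `\\.\pipe\csi-proxy` // hypothetical pipe name

// serve runs a gRPC server on the named pipe (host-side proxy process).
func serve(s *grpc.Server) error {
	l, err := winio.ListenPipe(pipePath, nil)
	if err != nil {
		return err
	}
	return s.Serve(l)
}

// dial connects to the proxy's pipe from the node plugin container.
func dial() (*grpc.ClientConn, error) {
	return grpc.Dial(pipePath,
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			timeout := 30 * time.Second
			return winio.DialPipe(addr, &timeout)
		}))
}
```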
D: Here's a rough description of some of the API calls that I am proposing we build. One is sort of a basic version request, to make sure that the CSI node plugin can indeed use the proxy and their versions are in sync. I call out a mkdir and a rmdir; these allow you to create directories on the host and remove directories from the host, mainly used for creating the global staging paths. Then stage disk is an overall call that takes care of, you know, scanning the disk, making sure that if it's not partitioned, partitioning it as MBR, and then formatting it with NTFS.
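Roughly, the privileged work behind that stage-disk call could look like the sketch below, shelling out to PowerShell from Go; the cmdlet sequence is one possible implementation under assumption, not the KEP's code, and error handling is trimmed:

```go
package main

import "os/exec"

// stageDisk initializes a raw disk as MBR, creates a max-size partition,
// and formats it with NTFS. diskNumber is assumed to be a Get-Disk number.
func stageDisk(diskNumber string) error {
	script := `Initialize-Disk -Number ` + diskNumber + ` -PartitionStyle MBR; ` +
		`New-Partition -DiskNumber ` + diskNumber + ` -UseMaximumSize -AssignDriveLetter | ` +
		`Format-Volume -FileSystem NTFS -Confirm:$false`
	return exec.Command("powershell", "-NoProfile", "-Command", script).Run()
}
```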
Similarly, there's also a stage SMB share; this is much simpler, all it does is create an SMB mount to the appropriate volume. Link volume, essentially, does the bind-mounting step using the mklink command: it links the global staging path of the volume to a specific path within a container.
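A minimal sketch of that link step; the paths are placeholders, and os.Symlink would work as well:

```go
package main

import "os/exec"

// linkVolume creates a directory symlink from the container-specific target
// path to the global staging path; mklink /D <link> <target>.
func linkVolume(targetPath, stagingPath string) error {
	return exec.Command("cmd", "/c", "mklink", "/D", targetPath, stagingPath).Run()
}
```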
D: And then there are a couple of scanning, or disk-detection, related API calls. The first one deals with detecting a disk based on bus, target, and LUN ID, and the second one deals with detecting a disk based on the SCSI page 83 ID, which is kind of more of a common standard.
D: So GCE PD uses the page 83 ID, and the other ones, like EBS and Azure Disk, seem like they can just use the disk number or LUN location to detect their disks.
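A hedged sketch of that detection via PowerShell invoked from Go; property names like UniqueId (which often carries the page 83 identifier) and Location are assumptions and vary by storage driver:

```go
package main

import "os/exec"

// findDiskByPage83ID returns Get-Disk output for the disk whose UniqueId
// matches the given page 83 ID.
func findDiskByPage83ID(id string) ([]byte, error) {
	script := `Get-Disk | Where-Object { $_.UniqueId -eq "` + id + `" } | ` +
		`Select-Object Number, Location`
	return exec.Command("powershell", "-NoProfile", "-Command", script).Output()
}
```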
D: I also call out that, in accordance with the standard Kubernetes versioning I mentioned, we will start off with a v1alpha1 of the API, then, you know, graduate to a v1beta1 and a v1, and then, as new enhancements are necessary, they'll be introduced through an alpha version and go up to beta and beyond. Certain enhancements are also necessary in the CSI node plugins.
D: You know, the biggest risk is that we're exposing a pipe that can perform privileged operations on behalf of containers, so any container would be able to use this, which is dangerous. To mitigate the risk, I pretty much call out what Patrick suggested earlier, which is: we will come up with an admission webhook that will reject all containers that mount this pipe as a hostPath volume mount and that do not have the privileged flag set in the pod security context.
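The check that webhook would apply can be sketched like this; the pipe path is hypothetical, and this is a minimal sketch of the logic described, not the KEP's implementation:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

const csiProxyPipePath = `\\.\pipe\csi-proxy` // hypothetical pipe path

// validatePod rejects any container that mounts the CSI proxy's hostPath
// pipe without privileged: true in its security context.
func validatePod(pod *corev1.Pod) error {
	// Find volumes that point at the proxy's named pipe on the host.
	pipeVols := map[string]bool{}
	for _, v := range pod.Spec.Volumes {
		if v.HostPath != nil && v.HostPath.Path == csiProxyPipePath {
			pipeVols[v.Name] = true
		}
	}
	for _, c := range pod.Spec.Containers {
		for _, m := range c.VolumeMounts {
			if !pipeVols[m.Name] {
				continue
			}
			sc := c.SecurityContext
			if sc == nil || sc.Privileged == nil || !*sc.Privileged {
				return fmt.Errorf("container %q mounts the CSI proxy pipe without privileged: true", c.Name)
			}
		}
	}
	return nil
}
```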
D: So this basically allows us to emulate the privileged setting in the security context for Windows: anything that has privileged set will be allowed to bind-mount the CSI proxy's hostPath volume mount to perform privileged operations. And, you know, if there are pod security policies and stuff that are configured to act on the privileged setting, that'll work as expected. Then, to wrap up, we go through, you know, the test plan, graduation criteria, and also just call out some drawbacks and alternatives. I guess we are kind of at time.
A: All right, yes, thank you very much. I posted a few comments in there as well, so if folks could take a look, that would be great. I did notice there was a PR related to that, that someone had filed relating to AKS, and so I'll go ahead and try to ping that thread with this proposal, because they'd probably be a good reviewer for this as well...