From YouTube: Kubernetes SIG Windows 20200811
A
There we go. All right, welcome everyone to the SIG Windows weekly meeting today. For the first topic we have, let me share my screen real quick. Again, it's exciting: privileged containers for Windows. We have a proposal, and Amber from the Windows kernel team is here to talk about it. Amber?
B
Yeah, as was mentioned, I am a PM on the Windows kernel team and I work a lot on Windows containers. I think this is a highly anticipated feature from many in the community: lighting up privileged containers for Windows containers.
B
If people have gotten a chance to dive into this: I know it was added to the agenda sometime yesterday, so you probably didn't have too much time, but we can walk through this briefly. We are planning on following up on this next week and the week after, trying to gather your feedback as we refine this KEP, and to take in scenarios that people think we should have prepped before we launch into the further review process.
B
So, as a summary: I think many people in this community are probably familiar with all of the different scenarios that Windows currently requires workarounds for, scenarios that are available on Linux via Linux privileged containers. Some of the main ones that have been highlighted to us are kube-proxy and different storage and networking scenarios with CSI and CNI. Those workarounds have worked so far, but they do have some drawbacks.
B
So a lot of the motivation for us to do this KEP is to help with those scenarios and bring some amount of experiential parity with Linux, so that Kubernetes can behave consistently between the two OSes. Specifically, we're trying to light up working with privileged daemon sets, and also working with privileged containers as one-offs.
B
We're also trying to enable access to host network resources for privileged containers and privileged pods specifically. The other side of that, though, is that we are not aiming to provide these host network resources for non-privileged containers and pods, and we're not trying to light up a scenario where we can run privileged containers inside of Hyper-V containers, which has different implications, or run those containers as a privileged Hyper-V container. That would require a lot of different thinking and processes that we're not looking at here.
B
This might become relevant as different community services start to adopt Hyper-V containers, or as people look into using them as another method of running Windows containers, but for now you do need to run privileged containers as they are, as processes.
B
So our proposal dives into a couple of use cases. As I mentioned before: privileged daemon sets, which are used to deploy a lot of the different scenarios that we highlighted previously (CSI proxy, HNS proxy, and so forth), and also node plug-in containers that can do other types of activities, like device enumeration and monitoring add-ons, among others.
B
Some notes, constraints, and caveats: as we mentioned before, it's a non-goal, something we don't want to enable in this KEP, to have host network mode for non-privileged pods and containers. It's only for privileged containers and privileged pods. Additionally, our privileged pods, just due to the way that we're implementing them, can only consist of privileged containers.
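As a rough sketch of what a privileged Windows daemon set could look like from the Kubernetes side, here is a manifest that simply reuses the existing Linux-style privileged and hostNetwork fields. This is only an illustration of the discussion above, not the finalized KEP API: whether Windows ends up reusing exactly these fields was still an open question at this point, and the object name and image are made-up placeholders.

```shell
# Hypothetical manifest sketch; the name, image, and the reuse of the
# Linux-style privileged/hostNetwork fields for Windows are assumptions here.
cat <<'EOF' > privileged-windows-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-privileged-agent
spec:
  selector:
    matchLabels:
      app: example-privileged-agent
  template:
    metadata:
      labels:
        app: example-privileged-agent
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      hostNetwork: true              # privileged pods may use host networking
      containers:
      - name: agent
        image: example.com/privileged-agent:latest
        securityContext:
          privileged: true           # every container in the pod must be privileged
EOF
# Show that both fields discussed above are set.
grep -c 'true' privileged-windows-ds.yaml
```

On Linux, this is how privileged daemon sets such as kube-proxy are deployed today; the KEP discussion is about what the equivalent should look like on Windows nodes.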
B
So
this
is
due
to
restriction
of
the
way
that
we
the
way
we're
implementing
this
works
with
different
ip
ranges
and,
as
a
result,
we
can't
have
unprivileged
windows
server,
containers
sharing
in
the
same
pod
as
a
privileged
containers,
some
risks
and
mitigations
as
we're
jumping
into
this.
B
So
we
do
anticipate
several
changes
in
several
layers
of
different
open
source
kind
of
components,
and
this
is
an
area
which
might
be
kind
of
one
of
the
comments
here
by
james
that
we
do
need
to
dive
into
a
little
bit
more
in
detail.
So
we're
aware
of
this.
But
if
you
have
specifics
that
you
know,
people
in
the
community
can
call
out
and
identify
for
us
as
well
our
concerns
this
is
another
place
for
for
them
to
be
mentioned.
B
So this area is something that we would like people to chime in on if they have thoughts, but we are going to continue working on it. Additionally, as mentioned, along with the kubelet, part of the changes are going to be with PSPs. We've been doing some investigation on the Windows side into the different PSP fields and how they apply to Windows containers, specifically for the privileged scenarios.
B
So we've only really identified a couple of PSP fields that are really essential for our privileged scenario. One of them is, of course, the privileged flag, and another is host networking and how that interacts with different host networking scenarios for privileged containers. Additionally, there's GMSA, all the way at the bottom of this table. There's a lot of reasoning in the scenario column about why we think these may or may not apply to privileged containers, at least in this iteration of what we're trying to implement here. If there are thoughts on why this may or may not be the case, or on whether we should think more deeply about some of these, those are also very open to feedback.
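For reference, the two PodSecurityPolicy fields called out above look like this in a policy object. This is only a sketch: the policy name is made up, and the remaining rule fields are just the permissive values the v1beta1 schema requires, not anything decided in this meeting.

```shell
# Sketch only: a PodSecurityPolicy allowing the two fields discussed above.
# The name and the idea of gating Windows privileged pods this way are
# illustrative assumptions.
cat <<'EOF' > windows-privileged-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: windows-privileged
spec:
  privileged: true        # allow privileged containers
  hostNetwork: true       # allow host networking
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ['*']
EOF
# Confirm both fields discussed above are enabled.
grep -c 'true' windows-privileged-psp.yaml
```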
So, to dive into a little bit of the design details of this implementation: we are going to diverge from what we've done traditionally with Windows containers, using the server silo, and instead use privileged job objects to implement privileged containers.
B
This
is
because
the
silo
you
know,
despite
different
ways
for
us
to
kind
of
work
with
it
or
some
flexibility
as
it
does
offers
and
benefits
it
does
offer
in
isolation
for
privileged
containers.
We
do
need
to
have
greater
access
to
the
host
and
and
be
able
to
work
with
different
resources
on
the
host.
So
we've
chosen
to
work
with
the
job
objects.
B
There's
a
lot
of
kind
of
details
on
how
this
works.
If
you
scroll
down
as
well
so
the
different
implications
for
networking
for
resource
limits,
you
would
think
of
a
lot
of
the
constraints
that
exist
with
job
objects
and
windows
in
general
and
and
how
that
kind
of
impacts.
The
way
this
privileged
container
will
work
a
lot
of
those
will
kind
of
translate
over
into
this
privileged
container
implementation.
B
So
resource
limits,
for
example,
is
something
that
you
know
is
useful
for
containers,
especially
in
production,
and
that
is
something
that
is
available
to
us
via
the
job
object
implementation.
So
some
things
are
still
kind
of
being
thought
through
with
this
specifically,
and
we
would
really
really
love
some
feedback
on
is
the
different
requirements
that
we
might
have
around
container
images
and
the
different
experiences
that
may
or
may
not
be.
B
You
know,
usable
in
terms
of
container
image,
building
and
definition,
because
these
will
differ
significantly
from
the
way
that
we
have
them
into
implemented
for
things
like
the
server
silo
based
containers,
container
images.
Specifically,
there
are
some
questions
about
what
type
of
base
images
different
people
would
want
to
use
with
these
privileged
containers.
B
There
are
some
investigations
that
we're
doing
in
terms
of
given
the
job
object.
What
is
the
minimum
type
of
base
image
we
might
be
able
to
use,
and
you
know
whether
or
not
it
warrants
creating
a
new
image
for
it?
Would
that
be
desirable
to
customers?
What
other
experiences
people
might
hope
for
we
understand
that
our
privileged
containers
and
linux
often
do
building
from
scratch,
and
that
is
something
that
we're
kind
of
looking
at.
B
But,
of
course,
you
know,
there's
some
requirements
in
terms
of
the
job
object
needing
certain
hierarchy
and
so
on
and
so
forth.
So
we
know
we
have
identified
that
something
can
be
done
slimmer,
but
we're
trying
to
understand
the
different
use
cases
that
people
imagine
building
their
containers
with
and
ideas
there
so
kind
of
rolling
into
the
test
plan
and
how
we
anticipate
working
with
this
in
alpha
and
beta.
B
We
do
try
we're
trying
to
do
a
preliminary
analysis
and
testing
of
no
chromatic
scenarios
in
alpha
we're
working
specifically
with
james
on
a
lot
of
this
and
identifying
these
scenarios
in
beta
we're
hoping
to
kind
of
broaden
that
to
work
with
the
test
grids
and,
of
course,
as
you
can
see
here,
we're
kind
of
actively
recording
that
here,
it's
pretty
empty.
But
if
people
have
ideas,
do
please
add
to
this
list
as
well,
so
diving
into
graduation
criteria.
B
So
one
thing
to
note
about
the
job
object.
Implementation
is
that
it
doesn't
necessarily
have
dependencies
on
specific
versions
of
ros,
but
we
are
trying
to
look
at
what
is
the
most
reasonable
os
for
us
to
start
kind
of
support
from
in
terms
of
what
versions
of
kubernetes
we
might
be
targeting
for
so
our
most
aggressive
timeline
is
that
we
might
try
to
get
this.
You
know
launched
into
alpha
in
120,
but
depending
on
kind
of
the
development
and
internal
work
that
we
have
on
our
side.
B
The
ga,
I
think,
is
kind
of
dependent
on
a
lot
of
feedback
that
we're
anticipating
coming
from
the
community
kind
of
across
this
time,
but
you
know
we'll
continue
providing
refining
that
criteria
as
we
go
forward
and
again,
as
I
said
multiple
times
now,
probably
if
there's
any
feedback,
please
do
add
it
here:
the
upgrade
and
downgrade
strategy
so
for
windows.
The
benefit
of
the
job
objects
is
that
they
do
not
require
any
back
ports
for
os
components.
B
If
you
want
to
call
out
things
for
us
to
continue
to
review,
but
these
are
things
that
we're
continuing
to
think
about
as
we
further
refine
a
lot
of
these
areas,
so
15
minutes
later
any
questions,
I
don't
want
to
take
up
all
the
time
we
also
are.
We
will
be
looking
at
this
again
in
a
future
meeting,
so
no
rush.
If
you
want
to
take
more
time
to
review
as
well.
B
So we do want to do a review of this again next week, to have some active discussion around it, but please do feel free to actively add, throughout the next couple of weeks, different thoughts that come up, or things that come up if you're discussing with different folks as well. We'll try to do some fast iteration on it in a couple of weeks and see if we can get this moving faster. Thanks, everyone.
A
Two weeks for any feedback. Again, I mean, if we have more feedback we can push it back, right, Amber? It's just that we try to get it in within two weeks as much as we can, and then we'll try to make it an official KEP with the enhancement proposal and go to the next step, which is also another interesting thing that we want to ask the community about: how we want to approach it once we get there.
A
Okay, so the next topic we have is: add NFS support for Windows. I don't know who added this; I think I forgot the name. Are they on the call?
E
Yeah, so I am trying to add the NFS support for Windows. There are some changes; currently I put them in the mount utils.
E
That is the first PR, at least in the table. Currently I use the net use command to make the connection to the NFS share and then create a symlink, and then the container can use that symlink to access the NFS share. It's working, but I just want to see whether there's an expert here on NFS support for Windows who can give me some feedback on whether there are some potential issues.
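The approach described, mounting the share with net use and exposing it through a symlink, might look roughly like the following on a Windows node. The server name, drive letter, and link path are illustrative placeholders, not details from the meeting, and the commands are only printed here rather than executed, since they need a real NFS share and a Windows host to run against.

```shell
# Sketch of the described approach: connect the node to the NFS share with
# "net use", then expose it to the container through a directory symlink.
# All names and paths below are placeholder assumptions.
NFS_SHARE='\\nfs-server.example.com\export'
DRIVE='Z:'
LINK='C:\kubelet\volumes\nfs-vol'

# 1. Mount the remote share on the node.
printf '%s\n' "net use $DRIVE $NFS_SHARE"
# 2. Create a directory symlink (cmd.exe "mklink /D") at the pod's volume path.
printf '%s\n' "mklink /D $LINK $DRIVE"
```

The cleanup question raised next is the mirror image: at some point the symlink has to be removed and the connection released (net use with /delete), which is where the MountDevice/UnmountDevice discussion comes in.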
E
Currently we are discussing how to do, let's say, the cleanup in NFS. Right now the NFS driver is not what we call device-mountable; in Kubernetes we have MountDevice and UnmountDevice. Right now I put the commands in the Mount function, but instead we probably need to put them in the MountDevice interface, and we need to enable the NFS driver to become a device-mountable driver, that is, implement the MountDevice interface.
E
Basically,
there
are
some
discussion
we
are
trying
to
see
if
we
don't
clean
the
fs
connection
correctly.
What's
the
potential
issue,
and
also
I
see
there's
someone
somewhere
mentioned
windows
only
support
20
network
connection,
but
when
I
test
it,
I
can
like
easily
support
hundreds
of
fs
network
connection.
I
don't
know,
what's
that
information
come
from
if
there's
someone
here
have
more
details
about
the
fs
support,
I
would
like
to
hear
some
feedback.
E
Yeah, when I tested it, I tested that I can set up hundreds of connections.
F
It could be that that's a limitation based on what Windows Server SKU or Windows SKU you're using, but if David's on the call he might have more info.
A
Okay, I can put the link here. I don't know if you want to follow up on this, David. I haven't heard of anything like that; that should not be the case for any connection. I don't know where that is coming from.
H
Is that a limitation in the NFS stack, or is that a limitation in the TCP stack? It shouldn't be a limitation in the TCP stack, right? I mean, I don't know of any TCP stack limitations, so I'm guessing it might be more of an NFS limitation.
G
The other thing is that there's...
E
Okay, okay, yeah. It's mostly those one or two points, because this is the first time we're trying to enable NFS on Windows. So I just want to get some feedback, and if someone knows some issues with NFS support on Windows, we can anticipate those issues.
A
Okay, is there anyone here with NFS experience? Or we can do a follow-up on this.
A
Sounds good, cool. Thanks, Jing. The next one we have is: add end-to-end tests. I think last time we had this; I don't know if they weren't here last time. I forgot the name again, sorry.
C
I'm here. I'm Folker; I'm thomas on GitHub. Hi, can you hear me? Yeah. As a status from our side: my colleague implemented a first version of the end-to-end test, and it compiles, but it is a dry version. As I saw, we couldn't run the end-to-end test yet. Adelina already was a great help, and we would probably need additional help to get the end-to-end tests themselves running.
J
Hey, I can't say that I can help you with that, with setting up a cluster and whatever else you need for running the tests, but I'll take a look on my own and run it on something that I already have set up. I think I can find the time this week; last week was really busy.
J
I mean, there are two ways to run it. Once the PR merges, the test will be picked up and run in the jobs that we already have set up. Of course, if there are some necessary setup steps for the target cluster, and I assume there are, then probably changes to the existing jobs will need to be made. The other way we run tests is that we have a job that can be triggered on each PR.
J
So
you
you
create
a
pr
and
then
there's
a
command
that
the
bot
picks
up,
but
again,
if,
if
there
is
some
necessary
setup
to
be
done
on
the
cluster,
then
obviously
those
those
clusters
are
not
set
up
in
that
way.
So
but
I'll
I'll
talk
about
the
the
issue
once
I
I
get
a
chance
to
look
at
it
more
in
depth,.
I
A
Yeah, let us know, Thomas, if you're not making progress; I think Adelina will be able to help you there.
A
Yeah, cool. The last one is from Marvin and Ravi. Are you guys there? Want to talk about this?
K
Yeah, I'm here. So yeah, we did discuss this, I think it must have been a couple of weeks ago, and this was about basically ignoring, at the kubelet level, some of these labels that are very Linux-specific on Windows pods. We did get an LGTM from Deep, and so we're wondering what needs to be done for approval here. There are some open conversations, which I think have been resolved, so we're just wondering if we need to do something more, or basically what the next steps are.
L
I want to go ahead and look at how, or if, any of these changes are affected by the privileged containers KEP as well. They might be able to go in as they are now, but we should just make sure that there's nothing that would maybe be reconsidered if we were to go down the route with privileged containers.
A
So, James, are you going to look at it from that point of view? Yeah, I can look at it again. Okay.
K
Okay, thanks, James. Let us know if you want any changes based on that.
A
Okay, I guess we are out of time. Thank you, everyone. As we said, please review the KEP for privileged containers; that's a big change and it will affect a lot of things, as we can already see happening. Thank you for attending; we'll talk to you next week, and Michael will be back. I have scheduled a bi-weekly backlog review meeting, and I encourage everyone to join it. You should have an invite.
A
I will post the link here as well, right here in the meeting notes, and I will create a separate doc for that. It will be starting this Thursday. So, with that, thank you, everyone.