From YouTube: Kubernetes SIG Windows 20201006
A: All right, hello everybody, and welcome to another SIG Windows meetup. It's the 6th of October and, as always, it's a recorded meeting, so please adhere to the CNCF code of conduct. We don't have a huge agenda today, but let's dive in. The first issue: Amber, privileged containers.
B: So we have a couple of updates. Since the last meeting we let you guys know of a couple of networking challenges that we discovered out of conversations and discussions. To reiterate what those were: in our current implementation, there is a scenario in which, if a privileged container is added to a non-privileged pod, the privileged container will have the host network namespace, whereas the non-privileged pod will have its own pod namespace, which breaks the same-namespace assumption in Kubernetes.
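For reference, here is a minimal sketch (using client-go's corev1 types, with illustrative image and container names) of the scenario described above: a pod that does not request host networking, with one container asking for privileged mode. The privileged flag mirrors the Linux API shape; the Windows API surface was still under discussion at this point.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	priv := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "mixed-pod"},
		Spec: corev1.PodSpec{
			// The pod itself does NOT ask for host networking.
			HostNetwork: false,
			Containers: []corev1.Container{
				{Name: "app", Image: "mcr.microsoft.com/windows/nanoserver:1809"},
				{
					Name:  "admin-agent",                // hypothetical sidecar
					Image: "example.com/admin-agent:v1", // hypothetical image
					// This container asks for privileged mode. In the Windows
					// implementation described above, it would land in the HOST
					// network compartment, unlike its unprivileged sibling.
					SecurityContext: &corev1.SecurityContext{Privileged: &priv},
				},
			},
		},
	}
	fmt.Println("pod with mixed privilege:", pod.Name)
}
```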
B: This was something we originally thought would not necessarily have too much impact on alpha, but after some discussions we realized it was necessary for us to figure out how we can align the namespaces of a privileged container within a non-privileged pod. From that, another couple of scenarios came up: I think Deep mentioned the CSI scenario, as well as the init container scenario for service meshes.
B: In this exact scenario, where a privileged container is deployed to a non-privileged pod, it needs net-admin access to be able to configure certain networking things, or CSI might require different access but not necessarily need host networking. There are some nuances as to how we can enable that access within Windows given this networking situation, and how we align those networking namespaces.
B: The reason why this is blocking a lot of us from potentially moving forward at this point is that we anticipate, as a result of this investigation, some significant changes in either the CRI API or perhaps other layers that we've previously identified as things that may or may not need significant changes.
B: However, we are looking into other ways to provide what we have currently in 1.20 for people to try, to get additional feedback and figure out if there are other nuanced scenarios that we need to take a closer look at, similar to this namespace one. We're thinking about using runtime classes or some other way to expose this to people to work with and test out.
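A minimal sketch of the RuntimeClass idea: a cluster operator registers a handler that the CRI runtime maps to the experimental Windows privileged-container support, and individual pods opt in via runtimeClassName rather than a kubelet-wide switch. The handler name here is hypothetical, not a shipped one.

```go
package main

import (
	"fmt"

	nodev1beta1 "k8s.io/api/node/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := &nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "windows-privileged"},
		// Illustrative handler name; the CRI runtime on the node would have
		// to recognize it and enable the experimental behavior.
		Handler: "runhcs-wcow-privileged",
	}
	fmt.Printf("opt-in pods would set spec.runtimeClassName: %s\n", rc.Name)
}
```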
A: Absolutely. Having some experimental support, or actually some experimental release without any support, could work, and then obviously we can validate certain scenarios around CSI with Deep, and Cluster API that Ben, Kalia, and others are looking into, and potentially identify more issues that we could fix in 1.21. Yeah, I was a little bit surprised the other day when I saw the GitHub ticket changes, but your explanation and this data make a lot of sense. Thank you.
C: Yeah, so I was just wondering, because we are at the enhancement freeze: what did we actually land for 1.20? Do we have a list or something?
A: Let me see if I can enumerate them, and you guys can correct me. Deep, CSI: we're taking it to stable, that was updated, right? The KEP.
D: So I'm having second thoughts on that, mainly because we can do that, but the main thing is we wanted to see how privileged containers is doing, and we should combine that. As in: if privileged containers is coming around, should we keep it beta for one more release, and then once privileged containers alphas, we do all the changes in-tree, if anything is needed to support CSI, and then release a stable that supports both?
A: That could delay things by almost a few more months, right? Because privileged containers might not be in alpha or beta stage for quite a while. I don't know, I'm looking to you to determine. I mean, if CSI is stable using the current model, and if we're thinking that potentially you might be able to use privileged containers to run the CSI proxy...
A: Regardless, we want to get folks to utilize it to make forward progress, because even if privileged containers ends up being our final plan, privileged containers being production-grade and ready might not be until Kubernetes 1.23, right?
A: I mean, so if you're thinking, Deep, that we should just keep it at beta one more release and wait until some experimental support around privileged containers is out, so you can see how it maps to CSI proxy: I think, you know, it's not an unreasonable ask. I think we can leave it that way for one more release and see what happens. Yeah, I think that's...
F: Yeah, I'm kind of leaning that way too. Since it's not really enabled or disabled via a feature flag, it's not quite as jarring for potential users to still use it in the beta state, and if we don't have really significant changes to make now (I'm not sure, but we're planning on them in the next, you know, six months), it might be better to keep it that way. But also, if there are folks who would want to see this, like...
D: Yeah, I think Peter posted a question: does the CSI team anticipate any problems transitioning a Kubernetes cluster with the current CSI proxy approach to a future approach using privileged containers? So yeah, that's a good question. I think there were some thoughts, and I think Maz had a really good idea: you know, we have the existing client API.
D: We can probably just, instead of talking to this proxy, make it talk to privileged containers. So instead of having the named pipe and the proxy binary on the host, all of that can shift within the container itself. But yeah, well...
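As a sketch of the hop being discussed, assuming csi-proxy's versioned named pipes (the exact pipe name here is illustrative): today the CSI node plugin dials a Windows named pipe served by the host's csi-proxy binary; with privileged containers, the same client API could instead be backed by code running in the plugin's own privileged process, with no pipe or host binary needed.

```go
package main

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	timeout := 5 * time.Second
	// Current model: dial the host's csi-proxy pipe and issue calls over it.
	// Pipe name is illustrative of csi-proxy's per-API-group, versioned pipes.
	conn, err := winio.DialPipe(`\\.\pipe\csi-proxy-volume-v1beta1`, &timeout)
	if err != nil {
		fmt.Println("dial failed (expected when not on a Windows node):", err)
		return
	}
	defer conn.Close()
	// Privileged-container model: the same volume/disk operations would run
	// in-process, so this dial step disappears entirely.
	fmt.Println("connected to csi-proxy")
}
```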
D: My preference would be: if it's just a matter of a single release, and by that time in 1.21 we have a pretty good idea of how privileged containers is shaping up, we can probably provide a much clearer pathway to other CSI authors as part of the move to stable.
A: Makes sense. All right, let's call it then. The next thing is obviously containerd. I think all the PRs for that have been merged, so it will go to stable in 1.20.
F: Yeah, all of the TestGrid results, at least for Windows Server 2019, are looking really good, and I think most of the work here now is documentation. I've started to take a look at updating what we have for kubeadm in sig-windows-tools to have directions for using that with containerd nodes as well, and I'm making some progress on that.
A: It's already a feature that exists. Yeah, so since it's a Linux feature, you're just mapping it. Okay. Is there anything else from networking besides local traffic policy? I know there were some DSR enhancements that your team was looking at; I don't know if there's anything that will be called out for 1.20.
A: Yeah, so I remember that we were trying to do some DSR enhancements. Okay, so anyway, they don't need a KEP, so basically I'm going to mark the networking work as "no KEP needed."
A: All right. And then, I'm sorry, I haven't followed as closely on that and on Cluster API, so, Kalia and others, you're working on that: are we going to be able to claim any alpha support for Cluster API for Windows, for Azure, in this release?
H: Yeah, so I've got it mostly working. I've got a PR for the image builder (I can drop in the link here), and then we also have a Cluster API PR that I can drop a link in as well; both of those are in progress. As far as the CAEP goes, I think we've got mostly agreement; I just have to do some final updates. I was out at the end of last week, but yeah, I'm working through one last thing with inter-node connectivity using Flannel.
H: It looks like, with our docs upstream, you're able to deploy Windows containers, but when you try to communicate across nodes, it's not working. So I'm looking into that, and once that's resolved, I should have those PRs ready to go.
A: Cool, thanks James. So should we be comfortable, once the CAEP is merged, that it will go to alpha?
H: Yeah, well, it doesn't follow the same lifecycle as Kubernetes, so yeah, I think we have general agreement that it's in pretty good shape and should merge in the next week or two.
A: Well, I don't know if we should be tracking the test changes, at least not through the KEP process, but definitely track the deliverables that we're planning to do. Okay, sounds good. All right, so can you talk a little bit about privileged containers and volume access?
I: Sure. So this is a question related to privileged containers. For CSI proxy, right, we have the disk API and volume API; the main task for them is to access those volumes or disks on the host. For example, we get a volume with a certain volume ID, and then we can format it, get stats from it, mount it, and so on.
I: So for a privileged container: inside the container, right, even though it is privileged, can it access those volumes and disks on the host? Let's say, get a list of volumes on the host, get a list of disks on the host?
I: Okay, so initially I was focusing more on the filesystem side, but actually that's not a problem: even if it is not a privileged container, with host paths a container can see the host directories. But getting access to the disks or volumes on the host is actually different.
B: Yeah, the only thing that I would say now is: if job objects can do that, then this would be possible, since the only thing that's really happening in the privileged container scenario is us enabling job objects as a type of container, essentially. If that's within the bounds of job objects, which is something I'd probably have to go back and check, then I think this would be addressed too.
C: Yeah, that's correct, Michael. And as far as that goes, to Amber's point: in the goals we have explicitly called out that privileged containers will provide a method for a privileged container to access host resources, including host network, services, and devices, and under disk we have host paths and volumes. So I think that's one of the goals for privileged containers.
I: Great. Well, does it need, like, a special API, or...? Let's say you get inside a container right now and list volumes and disks: you won't see the host ones. If I want to get the host ones, do you need a separate API or separate commands to do that? And I also need to, like, format, etc.
F: For all intents and purposes, the privileged containers will act just like any other process running on the host, and should be able to see the disks on the host. I think we can try and confirm that with some of the sample prototype code that Danny Canter had, though. So I think what we're thinking is yes, we expect it to work that way; we just need to confirm.
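A quick way to check that expectation, as a sketch: enumerate logical drives via the Windows API, run it both on the host and inside the privileged-container prototype, and compare the output.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

func main() {
	// GetLogicalDrives returns a bitmask of drive letters A: through Z:.
	// Run as a normal host process and inside the privileged-container
	// prototype; if privileged containers see host disks, the lists match.
	mask, err := windows.GetLogicalDrives()
	if err != nil {
		panic(err)
	}
	for i := 0; i < 26; i++ {
		if mask&(1<<uint(i)) != 0 {
			fmt.Printf("%c:\\\n", 'A'+i)
		}
	}
}
```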
A: So let's take this offline, and, Amber and Mark, if we can have someone try it out in the next couple of weeks, that'll be great.
E: So this issue is specific to Windows Server 2019; you can continue using the latest patches on Windows Server 19H1 and above, 1903 and higher. This issue will be resolved in the 10C release that is coming later this month, and if you suffer from this issue, you can work around it, though it's not really an optimal workaround.
E: Probably the best solution would be to avoid installing the 9B update altogether on Windows Server Kubernetes nodes, but the workaround would be to basically pause kube-proxy, remove the HNS policy lists, and then restart.
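A hedged sketch of that workaround, assuming hcsshim's legacy HNS helpers (HNSListPolicyListRequest): with kube-proxy paused, clear the HNS policy lists (the load-balancer state), then restart kube-proxy so it recreates them.

```go
package main

import (
	"fmt"

	"github.com/Microsoft/hcsshim"
)

func main() {
	// Assumes kube-proxy has already been stopped; it will rebuild these
	// policy lists from Service state when it is restarted.
	lists, err := hcsshim.HNSListPolicyListRequest()
	if err != nil {
		panic(err)
	}
	for _, pl := range lists {
		fmt.Println("removing HNS policy list", pl.ID)
		if _, err := pl.Delete(); err != nil {
			panic(err)
		}
	}
}
```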
A: So if we have someone that hits it, we possibly might work on having a KB article. If nobody hits it and nobody talks about it, then we can just silently wait till the end of October when it gets fixed. But if people hit it, very likely you might need some form of KB article or something to point people to.
A: So we can, you know, articulate it there, like on the wiki or something; it doesn't have to be... yeah.
E: I can file an issue and link it in the SIG Windows notes.
C: That's a good idea. Yeah, thanks David.
A: Okay, so yeah. David, let's file an issue and then link something here. Yes, thank you. All right, last item on the agenda: graceful node shutdown. Mark?
F: Hi, so I linked the KEP that is going through SIG Node right now; I believe it got merged on Friday.
F: This KEP was kind of specifically focused on Linux scenarios, but after reviewing it, I think this could be applicable to Windows as well, and I commented in the PR about the Windows APIs needed to light this up. It was suggested that I just make a follow-on PR to this KEP, which I should be doing hopefully this week.
F: But the point of this KEP is that for Linux nodes they're enlightening the kubelet, via systemd inhibitors currently, to be able to detect when the process is being requested to terminate, delay that for some time, and then gracefully terminate the pods that are running on that node, to just have better continuity when nodes go up and down. Windows has similar APIs for delaying shutdown.
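For context, the Windows analogues are shutdown-block reasons for interactive apps (the Notepad example below) and, for services like the kubelet, the preshutdown notification. A minimal sketch, assuming the preshutdown support in golang.org/x/sys/windows/svc, of a service that holds a shutdown while it drains work; this illustrates the API, not the kubelet's actual implementation.

```go
package main

import (
	"time"

	"golang.org/x/sys/windows/svc"
)

type drainer struct{}

func (d *drainer) Execute(args []string, r <-chan svc.ChangeRequest, s chan<- svc.Status) (bool, uint32) {
	// Ask the service control manager to deliver SERVICE_CONTROL_PRESHUTDOWN
	// in addition to plain stop requests.
	s <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptPreShutdown}
	for c := range r {
		switch c.Cmd {
		case svc.PreShutdown, svc.Stop:
			// Shutdown requested: a kubelet would gracefully terminate pods
			// here; this stand-in just delays before acknowledging.
			time.Sleep(30 * time.Second)
			s <- svc.Status{State: svc.StopPending}
			return false, 0
		default:
			s <- c.CurrentStatus
		}
	}
	return false, 0
}

func main() {
	// Only meaningful when installed and started as a Windows service.
	_ = svc.Run("drain-demo", &drainer{})
}
```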
F: You've probably seen that if you leave, say, Notepad open with an unsaved document; it's the easiest way to trigger it. So, as part of the discussion for the KEP, they've agreed to split the OS-specific implementations out when the kubelet changes are being done, and I just wanted to raise awareness, because I think it'd be pretty cool if we could get Windows support for this. And also I was just...
F: This might be a good opportunity for anybody who is interested in making some changes to the kubelet with some Windows-specific knowledge. So I'm just raising this for everyone's awareness.
F: I'm not exactly sure about that; most of the discussions were focused around once a shutdown is actually initiated. I'll have to double-check those parts of the KEP, but there were talks in there about automatically triggering cordon and drain of the nodes once the kubelet is aware that the shutdown's coming.
A: I think that's kind of reversed, though, right? Because we expect the cloud provider, like, you know, whatever you're running underneath, to drive that execution of basically cordoning the nodes and draining them, and that will instruct the kubelet to start executing the action around the graceful node shutdown. So not the other way around; I don't expect the kubelet to start that process.
F: Yeah, I'll add those, if it's not already addressed, when I update with the Windows APIs. Got it, thank you.
A: Yeah, if anybody's interested, please reach out to Mark, I guess.
C: Michael, just a quick question, and maybe Amber: the PR just adding the Windows KEP is still under review. I know we have the enhancement freeze; does this PR need to be merged before today? Which PR? The one just adding the Windows KEP to the KEP directory; I just pasted the link to the pull request.
F: Yeah, I think there are some comments with the enhancement leads about this, and we can always reach out, but I think the deadline is largely so that the enhancement leads can start knowing what they need to track and what they don't. I don't think there's anything stopping us from checking in a PR, or merging the PR, just to get the doc in there. No, it...
F: ...was: we need to add a kep.yaml there. Recently there were some changes to the KEP format, and that should be explicit about what release we're targeting for alpha.
A: Sorry. Cool, yeah, just tag us once you're done and we'll see about getting it merged. But since we don't have a specific milestone for 1.20, if you can merge tomorrow or the day after, it should be fine, I think. Very cool. All right, everybody, we're out of time. Thank you all for attending; see you all next time.