From YouTube: Kubernetes SIG Windows 20210713
A: Hello, everybody, and welcome to the July 13th, 2021 iteration of the SIG Windows Kubernetes community meeting. As always, these meetings are recorded and uploaded to YouTube, so please adhere to all of the CNCF code of conduct and standards. Let's get into things.

A: I see a lot of people on the call, and I haven't been here for many months, so I don't know if most of the people here are new or just returning. If anybody wants to say hi, or would like to know who else is on the call, either raise your hand or just speak up and say hi. I see a couple of new people. Marguerite, have you been joining? Do you want to introduce yourself?
B: Good morning, folks. Yeah, this is actually my first time joining these meetings. My name is Marguerite.
A: Okay, continuing with the agenda. The first thing is announcements. We're nearing the endgame of the Kubernetes 1.22 release; code freeze was last Thursday. I think that all of the PRs we were really hoping to land landed.

A: Anybody can correct me if I'm wrong, but if not, the best we can do is get them in early in 1.23 and try to backport. The next important milestone is test freeze, coming up this Thursday, July 15th. James or Jay, are we tracking any critical test coverage improvements, or do the tests look okay?
A: Yeah, we were just reviewing those in the 15 minutes prior to this meeting, for anybody who joined in the middle of that. So if you're interested in helping to keep our test signal green, feel free to join next week. The next important date is docs freeze: July 27th is when the docs team would like all the docs PRs for 1.22 to be in.
A: If there are any other docs, just make sure the PR has the SIG Windows label on it and we'll pick it up and get that done. All right, Jay's adding a couple of agenda items. Do you want to take over for a little bit, Jay?
D: Yeah. I don't know if anyone else has anything; feel free to interrupt me. I just added these because they were on my mind, and we were going to go through them during the pairing sessions anyway. But I do see the CSI proxy stability item.
C: Yeah, so they've been doing a lot of work to get this to stable. The KEP got approved for stable, and they've updated a bunch of docs and got a review done on the API that they've created to make this work. Tomorrow they're actually cutting the release of the CSI proxy binary, so that should become available either tomorrow or Thursday, but sometime this week.
C: So if you're using Windows and you're using CSI, go ahead and try this out and make sure you give feedback. So, thank you.
A: And my information may be a little bit outdated, but if I remember, the plan for this was to go to stable without having these run as a DaemonSet in host process containers, because there is some criticality due to the in-tree storage providers getting removed. Is that this release or next release? I'll have to follow up on that. But either way, they wanted a stable way to run the out-of-tree storage plug-ins, and it's been really stable for a while now.
A: So now is the right time. There's still going to be some effort in this area to run the CSI proxy in a DaemonSet once host process containers are released, which will be 1.22.
D: Yeah, I was just telling some of our downstream friends here about that news about CSI proxy; they'll be excited to hear that. So thanks for that update. Cool, so yeah, I was wondering how's Hyper-V looking. I know there's just Friedrich and Mark. For those of you that are new: we have this SIG Windows dev tools environment that will spin up a working Windows cluster, with different CNI providers and stuff to play around with, from 1.22, from bleeding edge.
D: So you can really bang away at Kubernetes on Windows, including hacking the source code and stuff. But so far it only works on VirtualBox, and I've hit various CNI and CRI issues. Once you really drill into it, there are a couple of kinks we haven't worked out yet, but for the most part it works for me for testing kubelet, kube-proxy, and the rest of it. So, Mark, yeah, what's the latest with Hyper-V?
A: I spent a little bit of time trying to get this to work on a Windows machine that had Hyper-V as the virtualization layer. At first I was trying to just do it with pure Hyper-V, using the Hyper-V provider with Vagrant, which is what the dev environment is based off of, and I ran into some issues that I think a couple of folks, in particular Jamie Phillips, have also run into and were trying to work around.
A: One of the big issues that I ran into right off the bat is that the provisioning scripts assume there's a static IP address that gets passed into the kubeadm init call and all of the kubeadm join calls. It turns out that with Hyper-V on Vagrant you can have it create private networks, but you need to either rely on DHCP or some other way of assigning the IP addresses.
A: Maybe yesterday I found that apparently you can run VirtualBox on top of Hyper-V, and VirtualBox now supports what they call paravirtualization: if you're running VirtualBox on top of Hyper-V on a Windows machine, it will defer most of the calls to the Hyper-V APIs and run VMs with that. That has the added benefit of a lot of the niceties that VirtualBox has, like shared folders that don't rely on SMB shares, and assigning private IP addresses just works.
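As a rough sketch of the setup described above (this is an assumption based on VirtualBox's VBoxManage conventions, not the dev environment's actual Vagrantfile), the paravirtualization interface can be selected per VM from Vagrant roughly like this:

```ruby
# Hypothetical Vagrantfile fragment: ask VirtualBox to use its Hyper-V
# paravirtualization interface, so guest calls defer to the Hyper-V APIs
# when running on a Windows host with Hyper-V enabled.
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--paravirtprovider", "hyperv"]
  end
end
```

This is a config fragment, not a drop-in change; the dev environment's own Vagrantfile may already set provider options that interact with it.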
A: So after tweaking a lot of timeouts, I was able to get the dev environment, as defined in the master branch, to come up on VirtualBox on a Windows machine that was using the Hyper-V paravirtualization layer. That might be an easier path for people to go forward with. I opened up a PR with my changes; it's up for comments.
A: Okay, and I will clarify what I mean by "work": both of the nodes joined, or came up, and I did have a two-node cluster when I SSH'd into the control plane. I did not try to schedule any workloads to it, but I saw that all the provisioning came up, and when I was poking around in the cluster it said that CNI was up.
E: I had it working with pure Hyper-V like two or three minutes ago.
E: I finally got it working with pure Hyper-V. I'm basically at the same position as Mark: I've not run any containers on it, but I got two nodes and they're connected. One workaround was getting the IP address of the server with a Linux command, and then we sent the kubeadm join directly with the IP address, so the IP address was resolved. The other problem was with the shared, synchronized directories, because that was really messed up, and I temporarily fixed this.

E: So I guess it would no longer be compatible with the normal hypervisor that we use, but I guess it proves that it is possible, and maybe we have to do a little bit of work, some if statements, but it's obviously possible.
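The IP-address workaround described above, querying the guest's address with a Linux command and feeding the result to kubeadm join, might look roughly like the following. The interface name eth1 and the sample output line are illustrative assumptions; the actual dev environment scripts may differ.

```shell
# Hypothetical sketch: extract the control plane's IPv4 address from
# `ip -4 -o addr show eth1`-style output, then pass it to kubeadm join.
# SAMPLE_LINE stands in for the real command's output in this sketch.
SAMPLE_LINE="2: eth1    inet 192.168.56.10/24 brd 192.168.56.255 scope global eth1"
CP_IP=$(echo "$SAMPLE_LINE" | awk '{for (i = 1; i <= NF; i++) if ($i == "inet") { split($(i+1), a, "/"); print a[1] }}')
echo "$CP_IP"
# On a real worker node one would then run (token and hash elided):
# kubeadm join "$CP_IP:6443" --token <token> --discovery-token-ca-cert-hash <hash>
```

The awk loop finds the `inet` field and strips the CIDR suffix, which avoids depending on the column position varying between `ip` versions.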
D: Oh great, so we've got sort of a good problem, which is that there are multiple ways to get it working on Hyper-V. So, who wants to help Bart? Because Bart just joined in the middle of all this chaos, and he's like, how do I get this thing working on Hyper-V? So there are two paths forward: you could do Mark's thing, or you could do Friedrich's.
A: Great, yeah. Moving forward, we can use this for one of the pairing sessions to try to decide which route we want to take going forward: whether we want to keep everything on VirtualBox and rely on paravirtualization, or whether we want to edit the Vagrantfiles to be more Hyper-V native.
F: It feels like we need to go with the second one, but let's first start with the first one and then try to move over to full Hyper-V.
A: I ran into all sorts of issues with getting the synced folders working with SMB shares. Apparently that doesn't work if your computer accounts are Active Directory or AAD-joined accounts and not local accounts, so there may be a lot of gotchas there, whereas, at least for me, the synced folders in VirtualBox just worked.
A: So this dev environment is an effort that I guess Jay and a couple of other folks started, because it's very hard to spin up a new Kubernetes cluster when you don't have access to a cloud environment that is using bleeding-edge or tip-of-tree Kubernetes bits. This was one way of helping people, and hopefully making it easier to contribute to, you know, Kubernetes or SIG Windows, to spin up this environment.
G: I have done this kind of environment setup before; James helped me one time, like two years ago, and that machine crashed, so I still don't have an environment. I think it's a very important effort. What I'm trying to understand is: when you're using Hyper-V, are you expecting the developer to have Windows Server running on their machine? Is that it?
A: Or Windows. I think most people who are running on Windows are going to have the Hyper-V virtualization stack installed; that's why we're pursuing this. But on Linux I'm guessing it's going to be VirtualBox, or...
F: I also started on another part, because I'm using WSL a lot, and I thought, hey, maybe I can do a mixture there. That was my starting point, and I got very far with that. The only thing I need is a Hyper-V box for my Windows node, with the cross-network access possible, etcetera, etcetera.
F: I'm pretty far with that, but I'm going to switch over to the Hyper-V part now, take the bits from Friedrich and the other guys, and see if I can get that part up and running, and then jump back to WSL. I think it should be an awesome setup if we have WSL in combination with Hyper-V. But let's see.
G: Yeah, once again, that's a great point, Bart. One suggestion: I was looking at this SIG Windows dev environment, and it's pretty awesome, but I think it's not clearly called out what machine you are supposed to be doing this on. I believe a lot of these commands won't work on, you know, Windows, and even setting up Hyper-V or WSL could be a challenge for someone who's just starting, right? So I'm just thinking we should add those instructions, what you guys are going through, there.
D: ...and trying this stuff out. And of course, thanks to Friedrich for getting it working today on Hyper-V, and all the other stuff he's done to get us to this point. Everybody that's testing, thanks a lot. So yeah, I can share.
D: My update is, and I don't know if Danny's here: I added Calico support last weekend, and it was working, but I hit this wall where hcsshim would time out. I mean, hcsshim was not able to attach the network to start the container. It's really weird, because Calico was actually able to receive a CNI ADD command from containerd, but after it added it, it wasn't able to attach it, and I never really collected any logs on it or anything.
D: So I don't have any logs or anything to prove that, but we could try to look at that today. I wish I had an environment up and running, but I could try to set one up; it doesn't usually take that long.
D: Does anybody else have anything they want to do before we jump into the pairing stuff? And of course we don't have to do that for pairing; we could look at other stuff, if people have other ideas of things they want to look at.
C: I'm gonna paste the link in here, but that was resolved in one of the latest releases of Docker. So if you are running into that issue (I think Jeremy initially reported it and helped reproduce it), I'll link it in here; you'll need the latest release of Docker to resolve it. I just wanted to call that out; it was in our notes from a few weeks ago.
A: And I know that we're still in the 1.22 release too, but if anybody is planning on either progressing or introducing new enhancements, it helps to get those reviewed early. I know that Ravi still has the work-in-progress one open for using runtime classes to identify Windows nodes, which we're looking for comments on. But even if it's not an enhancement, it helps to get a head start on all of those.
I: All of them would perhaps be at the admission server layer. And the other thing is, say we go ahead with the other approach: instead of going with runtime classes, say we decide to go with having a field in the pod spec; that needs API reviews too. So in both cases we need people from other areas, the owners, to actually respond to us, and that would take some time. That's what I'm a bit concerned about.
A: Yeah, probably we'd want SIG Node to comment too, but Jordan Liggitt is probably the most up-to-date person who would be able to do any API reviews; he knows the scheduler quite well and is familiar with Windows.
A: So we should definitely get feedback from him.

I: Yeah, one thing they are waiting on is actually getting consensus within SIG Windows. Once we get that, I can actually push Jordan and other folks to review it immediately.
I: Yeah, the main thing is, once we get consensus internally, we need to get reviews from external SIGs. That will be the hardest part.
D: Yep. Do we have... is it simple enough that we could get a handshake consensus right now? Is it that complicated? It didn't seem like it was that controversial. I know it requires a full review, but is there anything specific that you think is going to be a long-tail debate, or is it just a matter of us going through the details?
H: I think the main thing that we could maybe get agreement on is the use of runtime classes, because that seems like the path of least resistance at this point.
A: Okay, yeah. One thing that came up with this earlier (I forgot why) was that somebody else floated the idea of using runtime classes to help identify this; I think it was for the pod security policies. One thing that came out of that discussion was that for dockershim there is a default runtime class named dockershim that you can use, but the issue is that that runtime class is named dockershim for both Linux and Windows.

A: So you can't differentiate. But given that dockershim is pretty much sunsetted in favor of containerd, and given the timeline of when this would actually go into effect, that may be a moot point.
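For readers new to the idea being discussed here: a RuntimeClass can be scoped to Windows nodes through its scheduling section, which is what lets admission logic infer a pod's OS from the class it requests. A minimal sketch follows; the class name and handler below are illustrative assumptions, not names taken from the KEP.

```yaml
# Hypothetical sketch: a RuntimeClass whose scheduling constraint pins it
# to Windows nodes, so the pod's OS can be inferred from the class.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows-2019          # illustrative name
handler: runhcs-wcow-process  # illustrative containerd handler
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
```

A pod would then opt in with `runtimeClassName: windows-2019` in its spec, and anything inspecting the pod at admission time could resolve the class to its node selector.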
C: Yeah, I think I had maybe some reservations on whether or not we should be adding a field to the pod spec versus just using runtime classes. But I guess I didn't realize that you're using the label inside the runtime class, which actually makes a lot of sense. I'd like to read through it in more detail, but that detail in particular has me leaning towards it a little bit more.
A: If we do want to get some feedback from some other people, it might be worthwhile to have them in some of these conversations too. So maybe we can try to schedule one at a time that would work for some of the other folks.
I: Yeah, I'm fine with having a dedicated time for me to go over this, and then we can discuss.
A: I think that SIG Security is also going to be interested in this, because it has implications for restricting different pod security policies that are OS-specific.
A: Like having them either apply or not apply. So it might be good to get them involved too; I can find some people.
I: Yeah, that's how we started this. If I remember correctly, what Tim suggested during the PSP replacement time frame was that in 1.24, when this goes GA, by default the policy that would come up is restricted, and the restricted policy is going to enforce certain constraints that may be Linux-specific. By that time they would want us, the Windows community, to come up with a way to uniquely identify the Windows pods during admission time, so that they can say, hey:
I: This is a Windows pod; I do not care about it. If you want to implement some security policies, you may have to implement your own admission webhook, because that's the extension mechanism they are providing. For the in-tree ones, they are going to provide certain security constraints, but not everything is going to be present.
A: And I think a follow-up to that conversation was that it may be acceptable for the kubelet to do some checks too, and not add all of the security context details based on what OS it is, to help with that. But I need to refresh my memory on that.
I: Yeah, you mean like stripping some unnecessary Linux-specific constraints on the Windows kubelet, let's say?
I: I mean, we do that, but the problem is, every distribution is going to have its own admission plugin; in the case of OpenShift, we have our own for some of those.
I: But in any case, if you want to go over this, I can perhaps set up a follow-up meeting sometime next week or the week after that, and we can go through this.
I: Yeah, I'll send that invite to the SIG Windows mailing list, and whoever is interested can join. I'll include SIG Security too, and Mark, in case they are interested.