From YouTube: Kubernetes SIG Windows 20220510
A: First announcement: we've recovered the Zoom account, so we were able to get the recordings for the past meetings and upload them. We'll work on that early, like in the next couple of days, and this will include this meeting's recording too. Next announcement: we're going to cancel next week's community meeting because of KubeCon. I think a lot of folks are going to be traveling, and I think people should make the most of KubeCon, whether they're attending virtually or in person.
A: Next announcement: I think the 1.25 milestones are still not totally in place, but right now the tentative enhancements freeze is Friday, June 17th. So that's about five weeks from now, but it always helps to have enhancements ready to review and merge sooner than that. That's all I have for announcements. If anybody has any announcements, feel free to add them or speak up now; otherwise we can move on.
A: Okay, we'll move on from that. As for the agenda, there are a couple of agenda items, and then, if we have time, I'd like to talk a little bit about 1.25 planning. The first item on the agenda, which I added: we finally have a pull request up for all of the docs work that some of us have been doing in 1.24.
A
I
commented
in
the
pull
request
that
I
was
hoping
to
have
those
land
with
124
docks,
but
the
release
team
just
said
they
couldn't
commit
to
bandwidth,
to
review,
review
that
and
make
sure
it
got
merged.
So
you
said:
we'd
merge
it
pretty
early
in
to
the
124
like
in
into
maine
into
the
live
view
site
pretty
soon
after
124
releases.
A
So
that's
up.
If
there's
a
link
in
the
the
agenda,
you
can
take
a
look
most
of
the
it's.
It's
a
pretty
substantial
review.
Most
of
what
we're
doing
is
we
were
breaking
up
that
big
windows
or
intro
to
windows,
page
and
moving
sections
out
to
more
relevant
areas
like
existing.
You
know
networking
storage
sections.
A: A lot of these have been reviewed in PRs that were already open before we decided not to merge those PRs. So take a look. If you're not familiar with how pull requests work in the website repository: whenever a pull request goes up, it will build and deploy a preview site, so you can use that to see how things are going to get laid out too. Please take a look and provide feedback.
A: The next agenda item is Windows Server 20H2 support. We were just talking about this in the CI triage meeting previously, if you came early. But James, Claudia, or Ravi, did anybody want to discuss this?
B: Quick question: which one is this about, the current tests, or...?
C: Sure. For the SAC releases, for our Windows TestGrid jobs, we typically run those against the master or main branch of Kubernetes, and we have trimmed out all of the SAC releases except for 20H2 at this point, because they've hit end of life.
C: Currently, this test is failing on the main branch of Kubernetes, because AKS Engine is not going to support the latest version of Kubernetes, and so we were just discussing this morning
C: what we should do with the job. And then it came up whether or not we want to have 20H2 support for 1.25. So we started looking into it a little bit: the end of life for 20H2 is in August, and the Kubernetes 1.25 release is also in August, slightly after, I think a week or two depending on things. So I think the proposal is to not support 20H2 for the 1.25 release, but we wanted to bring it up in the community to make sure there aren't any objections. We'd take the job that we have for AKS Engine on the 1.25 branch and just move it to the 1.24 branch.
C: I think just to keep it running. But is there anybody out there? I'm not sure how many folks are actually using the SAC releases at this point. Just for more information: SAC releases are not happening anymore. They announced, I think about eight months ago, that they're only going to be releasing Windows Server 2019 and 2022, and then the next LTS after that, whatever it is. That makes it easier for everybody to consume these. But yeah, that's what we've got.
A: So, to restate the proposal: right now the tests for the SAC releases were all on AKS Engine, and Microsoft has deprecated that project and won't be doing feature work in it to support 1.25-plus releases. Very early on, the 1.25 branch has already taken in some changes that broke compatibility with AKS Engine; specifically, they removed a bunch of feature gates that were referenced in some of the setup.
A: So what I heard James just mention is that the proposal is we'll still keep the 20H2 tests running until the end of its scheduled support, which is in August, but we'll only run those tests against the 1.24 branch, since AKS Engine has support for that.
E: Yeah, we plan to release support for 2022 as part of WMCO 6.0, which is part of OCP 4.11, which uses Kube 1.24. So I think we should be fine.
A: Okay, next agenda item: there's a new pull request job for the main branch. I think this is related: we've been working on switching most of the Windows jobs from AKS Engine to CAPZ, the Cluster API Provider for Azure, and we've done that for the pull request job that you can trigger on PRs into kubernetes/kubernetes.
A: Okay, does anybody have any other agenda items that they wanted to discuss? If not, we can talk a little bit about the 1.25 planning. I've tried to capture some of the work streams that I know about here, along with who I think is driving each of them.
G: Yeah, I just had one question regarding containerd installation. I think there is a script; I can try to post the link here. I've also seen, in one of these SIG calls, a recommendation to install Hyper-V and the Hyper-V PowerShell module, and I just wanted to make sure whether that is required. We're trying to do this as part of moving to Windows Server 2022, since the Containers feature is not installed in that one.
A: Oh, sorry. I'm not sure if we're still recommending that. Does anybody on the call know whether we're still recommending installing the Hyper-V and Hyper-V PowerShell cmdlet features for containerd specifically?
C: To get the Hyper-V VMSwitch cmdlets, you need to install the Hyper-V feature. You can disable it after the fact if you don't need the Hyper-V feature itself. I don't know if we have a recommendation on whether or not you do either of those, but I know the OVA needed to be able to turn off Hyper-V, because they didn't have that support in the OS.
H: Yeah, you need to turn it off as a service, because it will fail on start if you're in an environment that doesn't support nested virtualization. But you do need to install the feature, because there's something related to containers that depends on it: you can't run containers unless you install Hyper-V as a feature on the VM. When the VM starts, though, if the environment doesn't support nested virtualization, you'd better also make sure you keep the Hyper-V service from starting. That's something we learned the hard way at VMware when we were doing our CI jobs.
H: Perry learned that, because we realized we do all of our CI on nested virtualization, and we needed to make sure we disabled that in our image-builder stuff.
C: I dropped a link into the chat that shows the work Perry did to turn that off after the fact.
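For reference, the pattern described above (install the feature to get the cmdlets, then keep the hypervisor itself from starting) could look roughly like the following on Windows Server. This is only a sketch; the script Perry linked in chat is the authoritative version for the image-builder setup:

```powershell
# Install the Hyper-V feature (which brings the VMSwitch cmdlets)
# plus the Hyper-V PowerShell module:
Install-WindowsFeature -Name Hyper-V, Hyper-V-PowerShell

# On hosts without nested virtualization, keep the feature installed
# (so the cmdlets remain available) but stop the hypervisor and the
# Hyper-V management service from starting at boot:
bcdedit /set hypervisorlaunchtype off
Set-Service -Name vmms -StartupType Disabled

# A reboot is needed for the bcdedit change to take effect:
Restart-Computer
```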
A: Also, recently, as part of the docs effort I mentioned, there's been a push on the Kubernetes website to remove content on behalf of third-party or non-Kubernetes components. We used to have some sample installation scripts and setup steps for how to install containerd on Windows; those have been removed and replaced with a hyperlink to the docs in the containerd repository. We should probably add a little bit of detail about this there too.
F: Okay, yeah, thanks. I did see it. One more question, back to Hyper-V: the CNI we are using, the containernetworking plugins, I'm assuming we don't need Hyper-V for that, right? Just to confirm.
A
Yeah
you
sh,
my
understanding
here
is:
you:
shouldn't
need
hyper
v
to
the
hyper-v
feature
and
able
to
run
this
you.
It
sounds
like
that
there
is
some
if
you
need
to
do
some
advanced
setup
for
virtual
switches
to
get
your
container
networking
to
work.
You
may
need
to
enable
these
features
in
order
to
get
like
powershell
commandlets.
In
order
to
do
that
configuration,
but
hyper-v
itself
is
not
required.
A: And if you have any other questions, I think this would be a good candidate to bring to folks in the SIG Windows Slack channel too, and we can help if you run into specific issues after trying this.
A
Okay,
does
anybody
have
any
other
agenda
items
nope,
look
at
them,
some
planning
for
a.
A
Bit:
okay,
I'll
do
a
quick
rundown
here
of
what
I
have
captured.
If
I
missed
anything,
feel
free
to
either
add
it
to
the
agenda
or
mention
it
in
the
chat
we'll
get
it
added.
A
I
think
these
are
some
of
the.
The
features
are
big
buckets
of
work.
That
enhancements
are
big
buckets
of
work
that
I'm
aware
of
that.
We'd
like
to
work
on
the
first
one
is
graduating
host
process
containers
to
stable.
We
did
a
demo
a
little
while
ago,
a
couple
meetings
ago
about
some
of
the
behavior
changes,
and
I
just
mentioned
here
that
when
I
was
doing
the
demo
I
had
mentioned
that
we
were
possibly
have
a
difference
in
behavior
on
windows,
server,
2019
and
2022..
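For anyone following along who hasn't seen the feature, a minimal HostProcess pod spec, per the upstream Kubernetes docs, looks roughly like this (the pod name, image, and command here are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example   # illustrative name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true            # required for HostProcess pods
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: hostprocess
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022  # illustrative
      command: ["powershell.exe", "-Command", "Get-Process"]
```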
A
I
believe
that
we've
worked
around
some
of
those
issues,
so
we
should
have
the
same
experience
on
all
on
all
windows,
supported
operating
versions,
os
versions.
So
that's
good
news.
Next
is
the
the
pod
os
to
feature
to
stable.
A
I
believe
that
there's
not
much
work
needed
for
that
right.
We
that's
just
update
all
the
enhancements
and
just
put
the
feature
gate
correct.
D
Actually,
there
is
some
dependency
on
port
security
policies,
so
I'm
making
some
change
like.
If
you
remember,
we
had
to
make
some
change
to
cubelet
and
we
we
will
support
three
versions
of
cubelet
to
a
cube
api
server
or
three
releases
of
cubelet
to
a
particular
cube
api
servers.
D
So
that's
one
of
the
reasons
we
did
not
enable
some
of
the
changes
that
we
want
to
do
in
port
security,
admission
plugin,
so
it
it
is
a
bit
involved
this
time,
but
I
think
I
have
already
pr
up
an
implementation.
Pr,
jordan
has
reviewed
it
two
releases
ago.
I
need
to
revive
that.
D: The other thing I wanted to ask the community, and I have posted this in Slack as well: if anyone is using the pod OS field, let us know, because the graduation criteria from beta to stable include having two use cases, or two users, who are actually using it. We have started using it in OpenShift in 4.11.
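Concretely, "using the pod OS field" means setting `spec.os.name` on the pod. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-pod-example   # illustrative name
spec:
  os:
    name: windows             # the pod OS field under discussion
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022  # illustrative
```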
A: The SIG Windows KubeCon talk does cover that feature, so hopefully, once people see it, there will be more adoption.
A: Cool, okay. Next was operational readiness. Is Amir or Genji on the call, or Jay? Is this still something folks are working on? Just checking in.
I: Yeah, we are working on that. We have...

E: Like Chinchi.
H: Oh, it looks like you just joined. I was just pinging Chinchi to discuss it. I don't know if there are any other updates; I don't have any myself, but yeah.
A: Yep, so we're just confirming that folks are working on this for 1.25. The next one was some of the perf and soak tests. Mario's on the call, and I think he's going to be driving this. There's been some ongoing work, so hopefully we'll continue with it.
A: Great. Next is Hyper-V isolated container support in containerd. I know some folks at Microsoft were working to try and enable this in containerd; Danny Canter is going to be working on that. If anybody is interested in this, please reach out to Danny. Right now I think the focus is getting that support into containerd, not much work in the Kubernetes repository, but there may be some usability enhancements to come for how these get scheduled, via RuntimeClasses and such, in the future.
F
This
is
a
slack
handle
in
case.
You
want
to
know
more
about
that.
A
It's
I
don't
know
what
I
don't
remember.
The
slack
candle
is
it's,
his
github
is
decanter.
Okay,
usually
you
can
just
if,
if
you
post,
I
can
add
him
in
in
slack
I
pick
and
redirect.
E
Mark,
I
think
wanda
did
a
question.
I
think
there
was
some.
I
think
it
was
maz.
You
know
a
couple
of
weeks
back
a
couple
of
months
back.
Actually
he
was
giving
a
really
good
explanation
or
enumerating
the
differences
between
hyper-v
process,
isolation
and
host
process
containers.
If
you
have
any
slides
or
any
like
docs
around
like
when
to
use
what
pros
and
cons
of
each
option.
If
you
could
point
you
know
into
that,
that
would
be
awesome.
A
Yeah,
I
think
that
that
part
part
of
the
work
is
figuring
is
it
is
doing
that
too.
So,
like
hyper-v,
isolated
containers
are
a
lot
of
that's
captured
in
here.
I
think
that
james
just
posted,
I.
A
Hyper-V
isolated
containers,
let
me
try
to
figure
out
how
to
say
this,
but
there's
going.
I
think
right
now
we're
focused
danny
and
is
focusing
on
enabling
some
very
specific
use
cases
of
hyper-v,
isolated
containers
in
continuity
and
and
in
kubernetes,
and
that
may
so
general
documentation
may
not
capture
all
of
those
use
cases
and
then
we're
hoping
to
like
online
more
functionality
in
the
future.
A
We'll
try
and
get.
I
think
we're
still
trying
to
nail
down
what
those
use
cases
are
and
like
and
provide
that
guidance,
because
there
is
like
we've
almost
always
seen.
There
is
a
pretty
big.
You
know
performance
overhead
with
hyperbaricity
containers
and
it
definitely
impacts
density
on
the
nodes.
So
part
part
of
this
workstream
is
is
also
having
all
that
guidance
for
users
available.
A
A
B
So
I've
been
looking
into
adding
the
stuff
necessary
for
building
the
proxy
image
directly
in
kk.
B: I still have a couple of things to figure out, because of how the build process works: it won't actually publish to any registry directly.
B
It
seems
that
it's
just
basically
exporting
the
images
as
oc
images
as
star
files
and
then
in
some
scenarios
they
are
actually
getting
consumed
by
things
like
kind,
for
example,
while
the
release
process
itself
is
seems
to
be
done
by
the
release
management
of
kubernetes
manually.
B
So
there's
no
automation
regarding
this
there,
but
I
am
assuming
that
they
are
going
to
use,
make
release
images
which
basically
builds
and
generates
the
images
for
all
for
all
binaries
and
architectures,
and
so
on
now
being
tar
files,
I'm
not
exactly
sure
how
they
are
uploading
them
to
the
registry
itself.
A: It's been open since, like, the 1.14 or 1.15 release, and that KEP is wildly out of date right now. I think it's all focused on having things set up with Docker, and that has a lot of workarounds that use things like wins and other tools that we don't recommend, or that we're not using in production environments. Part of that is figuring out what we want to do with the KEP; kubeadm for Windows support mostly works.
A: I think cleaning that up too, because while it may be working today, if users stumble on that enhancement, I think they're going to be quite confused. The other facet of that work is that kubeadm likes to try and deploy a kube-proxy image for you, and figuring out how to standardize on that is part of this work. I know there are a couple of different flavors of kube-proxy images getting built in the sig-windows-tools repository. CAPZ is using images out of that repository, but I think, long term, that's not the right place for those images. So figuring out both of those stories is what I had in mind for this work, and I'll try to capture that in the meeting notes after the meeting.
H: I forget whether we brought this up last time when we talked about it, but did we ever talk about just not installing kube-proxy as part of kubeadm?
C: It works today; it just requires quite a bit of knowledge, expertise, and understanding of how to wire everything together. I think that's fine for a beta feature, but if we really wanted to call it GA, in my opinion we would make that easier for folks. I think that's the core of it, and it may just be a lower-priority thing since there's quite a bit of other work happening.
H
Kind
of
yeah-
I
wouldn't
say
we
shouldn't
do
this,
but
I
would
just
say
that,
like
there's
a
way
out
because
we
can
say,
cni
providers
nowadays
are
tightly
coupled
to
the
coop
proxy
implementation,
you're
running
and
may
even
provide
that
functionality
for
you
and
blah
blah
blah
blah
blah.
There's
that
whole
argument
we
could
make
if
we
want
to
go
ga
and
not
do
this,
just
just
a
thought
right
like
I,
don't
have
a
strong
opinion
either
way.
Yeah.
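For context on the "not installing kube-proxy" option: kubeadm already exposes a way to skip the kube-proxy add-on at init time, with the CNI provider (or a separately deployed kube-proxy DaemonSet) then responsible for service proxying. A sketch, with cluster-specific flags omitted:

```shell
# Initialize the control plane without deploying the kube-proxy add-on;
# the CNI provider or a separately managed proxy must cover services:
kubeadm init --skip-phases=addon/kube-proxy
```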
A
Part
of
part
of
this
work
could
just
be
saying
things.
Work
here
are
some
examples.
Here's
more
information
on
how
to
set
this
up
yourself,
you
know,
have
have
fun,
but
I
think
we
like,
I.
I
think
we
should
update
the
event
the
documentation,
at
least
the
enhancement
proposal
at
some
point,
if
we
are
going
to
leave
it
at
the
current
state.
C
And
I
mean
some
of
that
work
could
be
as
part
of
k,
p
and
g.
We
don't
necessarily
have
to
build
a
q
proxy
into
there.
It
could
be
you
know.
When
kpng
has
support
for
q
proxy
out
a
tree,
then
we
can.
We
could
build
the
image
in
that
repository
and
you
know
have
docs
on
how
to
install
it
with
after
using
cube
adm.
So
I
think
yeah,
that's
the
other
path
we
can
take.
H: Yeah, I think Dash, Scott, and some friends are going to join today, and I was going to help see if we could start getting that work handed over. I think Matt Fenwick also joined, who's been pairing with me on some of that stuff lately.