From YouTube: Kubernetes SIG-Windows 20220913
A
Hello, everybody, and welcome to the September 13th, 2022 iteration of the Kubernetes SIG-Windows community meeting. As always, these meetings are uploaded to YouTube, and be sure to adhere to the CNCF code of conduct. For anybody new, that pretty much just boils down to: be nice to everybody.
A
Let's get started with the announcements. The only announcement is that the 1.26 release schedule has been posted. These dates are subject to change sometimes, but here's what we're looking at. The first deadline is the production readiness review freeze, which is Thursday the 29th; that's a little over two and a half weeks from now.
A
For anybody new here, production readiness review is one of the approval processes needed for Kubernetes enhancement proposals. The goal is to make sure that features are ready for production: that things like metrics are added, and that people have thought about what happens when you enable or disable a feature in a cluster, and so on. The next important deadline is the enhancements freeze.
A
That's the following Thursday, October 6th, 2022, about three and a half weeks away, and then code freeze is at the end of November. Does anybody have any questions or anything else to announce here? Oh, one other thing I'll announce: the Contributor Summit is still open for sign-up. Anybody who's a Kubernetes org member and is planning on attending KubeCon in person, please sign up for that. It's a great way to meet fellow contributors, and there's usually some nice swag and some sessions specific to developers.
A
Okay, if there are no announcements, we can move on. Next, let's see if there are any new contributors on the call who would like to say hi or introduce themselves and say what they're working on. This is optional, but if anybody wants to, feel free to either raise your hand or just say hi.
A
All right, next is the agenda. The first agenda item I had is that they are taking a call for proposals to speak at the Contributor Summit, and I was wondering if anybody had any interesting ideas about what to submit for SIG-Windows. I don't know if you all would like to go over the developer environment; that's something that might be good and, I think, really geared towards the spirit of the Contributor Summit.
A
I think they're also encouraging each SIG to submit a proposal for a face-to-face meet for the developers who are in the area. I'll confirm that and submit it, and I have another one I might submit about using GitHub Projects for triaging things, since I've done a lot of work automating that. But yeah, I think submissions are open for everybody who's attending, so I'll probably mention it again next week.
A
Next on the agenda, I was going to go over some of the enhancements that SIG-Windows is putting forward. If anybody has anything else they'd like to talk about before that, feel free to suggest a topic, because this might take the rest of the meeting.
A
One other thing I'll mention here: I think there's been a lot of talk about it on mailing lists like kubernetes-dev, but the release team is changing the way that enhancements are being tracked for this release. Instead of using a Google Sheet to track everything, they're using a GitHub Projects board. I can do a quick demo of that too.
A
There's a lot tighter integration into GitHub now, which is really nice. So, for example, I'll open up this one, since it's actually filled out: you can go right to the GitHub issue, and then you can see all of the fields that are here.
A
So you can actually check who the PRR assignee is, whether the PRR is done, who to reach out to for docs issues, and verify that the stage and everything is correct. The enhancements part of the release team will be working on populating all of these and managing this GitHub Projects board going forward, so things are no longer being added to those old spreadsheets.
A
If anybody has any questions about that, you can ask me, or I think you can post in the release management channel in Slack.
A
Okay, talking about enhancements, I'll go over a couple of the ones that I think are probably going to be pretty short discussions. The HostProcess containers one: I think we're still planning on pushing for GA for this release.
A
We
have
been
running
ede
tests
using
pre-release,
builds
of
container
D
and
do
have
Mo
pretty
much
all
of
the
backwards
compatibility
for
volume
mounting
working
exactly
as
expected,
which
was
one
of
the
big
concerns
for
the
for
going
to
GA,
so
I
think
we've
got
that
covered
and
other
than
that.
I
think
we're
just
going
to
continue
making
progress
on
that
Mike
Zappa
said:
yep
I
posted
in
the
chat
post
process,
containers
GA
for
126.
yep.
That's
the
that's!
The
plan.
A
The next one is the kubectl log viewer. I'm hoping that we can get an alpha for this.
B
Yeah, that's my hope too. I have pinged Jordan and Tim, and I'll ping them again today on the PR, and hopefully they'll respond.
A
Okay, sounds good. Did anybody update the kep.yaml with the new milestones for it?
A
And
I
did
add.
I
did
add
the
label
that
the
leads
were
asked
to
add
to
have
things
show
up
on
that
board
and
it
did
get
sucked
up
so
okay
release
theme
is
tracking.
That
and
I'll
do
that
for
the
rest
of
these
next
set
of
oh
no
or
caps
I
think
James
added
this
this
year,
I
pod
stats.
A
This
is
something
that
we're
still
pursuing
so
Sig
node
is
working
on
some
changes
so
that
this
you
can
get
all
of
the
stats
for
pods
from
the
container
runtime
over
the
CRI
API,
and
there
is
a
kind
of
naive
or
partial
implementation
for
Linux
and
we're
hoping
to
have
Windows
implementation
of
that
too,
and
there's
just
been
a
lot
of
back
and
forth
between
what
the
shapes
of
those
structures
look
like
in
there
is
there
anything
you'd
like
to
add
for
that
James.
B
Yeah, we just need to get that support merged into the CRI. I have a prototype API open in containerd that does the work. I think we're bringing it up at SIG Node this week, in 20 minutes actually, and hopefully we'll get some resolution on that and be able to move forward.
A
Thank you, yeah. If anybody has any questions, please reach out to James or myself about that.
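As context for the "shape of those structures" discussion, here is a minimal sketch of what aggregating container stats into pod-level stats over the CRI might look like. These are simplified, hypothetical stand-in types, not the actual CRI definitions (which are exactly what is still being debated):

```go
package main

import "fmt"

// ContainerStats is a hypothetical, simplified stand-in for per-container
// stats returned by the runtime; the real CRI types are richer.
type ContainerStats struct {
	Name                  string
	CPUUsageNanos         uint64
	MemoryWorkingSetBytes uint64
}

// PodSandboxStats is a hypothetical stand-in for pod-level stats.
type PodSandboxStats struct {
	PodName    string
	Containers []ContainerStats
}

// TotalCPU naively sums container CPU usage into a pod-level figure,
// mirroring the "partial implementation" approach mentioned above.
func (p PodSandboxStats) TotalCPU() uint64 {
	var total uint64
	for _, c := range p.Containers {
		total += c.CPUUsageNanos
	}
	return total
}

func main() {
	pod := PodSandboxStats{
		PodName: "example",
		Containers: []ContainerStats{
			{Name: "app", CPUUsageNanos: 500},
			{Name: "sidecar", CPUUsageNanos: 250},
		},
	}
	fmt.Println(pod.TotalCPU()) // prints 750
}
```

The open question for Windows is which of these fields the runtime can populate natively, which is part of the back and forth described above.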
Okay, the next ones are going to be a little bit more interesting.
A
It's
for
host
Network
support
for
Windows
I
have
a
PR
open
for
the
draft
here,
but
basically
the
Windows
like
we
have
the
Windows
operating
system
has
all
the
functionality
we
need
to
support
the
host
network
mode
or
to
join
the
containers
to
the
host's
networking
namespace,
just
like
Linux,
and
you
can
set
that
in
the
Pod
spec
and
it
doesn't
do
any
validation
so
I.
This
is
a
pretty
straightforward
cup,
but
my
thoughts
are.
We
should
just
try
and
support
this
properly.
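A minimal sketch of the validation gap described above. The types and function here are hypothetical illustrations, not the actual API server validation code:

```go
package main

import (
	"errors"
	"fmt"
)

// PodSpec is a hypothetical, simplified stand-in for the relevant pod
// spec fields; real validation lives in the API server.
type PodSpec struct {
	OS          string // "linux" or "windows"
	HostNetwork bool
}

// validateHostNetwork models the gap: today hostNetwork is silently
// ignored for Windows pods. While unsupported, setting it should be
// rejected rather than accepted and ignored.
func validateHostNetwork(spec PodSpec, windowsHostNetworkSupported bool) error {
	if spec.HostNetwork && spec.OS == "windows" && !windowsHostNetworkSupported {
		return errors.New("hostNetwork is not supported for Windows pods")
	}
	return nil
}

func main() {
	err := validateHostNetwork(PodSpec{OS: "windows", HostNetwork: true}, false)
	fmt.Println(err != nil) // prints true
}
```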
A
For two main reasons. One is consistency, because it's confusing today: if you set hostNetwork to true, the Windows containers still join the pod network, not the host network. The other is that we have seen some cases where we think it can help with port exhaustion in specific situations.
A
People can set up workloads to use the node's own port space and not create a lot of services in the cluster, which has been shown to lead to port exhaustion on Windows nodes. It's a pretty straightforward change: just adding a couple of CRI fields, populating them in the kubelet, and then having containerd do the rest of the work.
A
This is just a KEP for that. The kubelet, when it's filling out the RunPodSandbox config (I think I linked to that here), will populate a bunch of these namespace options, including the network namespace mode, and it will set it to either pod or node. That then gets passed over the CRI to CRI-O, containerd, or whatever. This only happens for... and then, oops, let me find where I linked to the spot.
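A minimal sketch of the namespace-mode decision being described, using a local stand-in for the CRI's NamespaceMode values (the real kubelet and CRI code is more involved; this only illustrates the pod-versus-node choice):

```go
package main

import "fmt"

// NamespaceMode is a local stand-in for the CRI enum of the same name.
type NamespaceMode int

const (
	NamespaceModePod  NamespaceMode = iota // per-pod network namespace
	NamespaceModeNode                      // host (node) network namespace
)

// networkNamespaceMode mirrors the decision the kubelet makes when
// filling out the RunPodSandbox config: hostNetwork pods get the node's
// network namespace, everything else gets a per-pod namespace.
func networkNamespaceMode(hostNetwork bool) NamespaceMode {
	if hostNetwork {
		return NamespaceModeNode
	}
	return NamespaceModePod
}

func main() {
	fmt.Println(networkNamespaceMode(true) == NamespaceModeNode)  // prints true
	fmt.Println(networkNamespaceMode(false) == NamespaceModePod) // prints true
}
```

The proposal is essentially to have containerd honor this mode on Windows as well, instead of only on Linux.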
A
The containerd code checks for that, but only if the GOOS that containerd is running as is Linux, so we have some room to play around there.
A
It requires a containerd change to look out for those settings on Windows, and then, as part of the KEP, I'm just saying we should actually set the settings correctly for Windows as well. But we could experiment with the containerd changes before all of these other changes are done. So, did I not link to it? I thought I linked to the code where that gets set.
C
I'm interested in this; I think this is great. I'm just wondering, if we wanted to play with it, it sounds like it wouldn't be that hard to start prototyping it out: someone makes a containerd change, somebody puts some glue in the kubelet, and then we could start playing with this and see what we can do with it. So I'm just interested in it from that perspective.
A
The use cases: one is consistency, and the other one I kind of mentioned already. David, did you want to comment on that too? Basically, on Windows, if you have a cluster with lots and lots of services in it, you can very quickly exhaust the available ephemeral port ranges, and we've seen that happen.
A
David, Mike, do you want to comment? I think some of it is kind of related to CNI, but some of it is very, very large clusters.
D
Well, for some of them, you only have about 30,000 actually available ports, and David can correct me if I'm wrong, but without the host network namespace, say you have a pod that needs to expose 30 or 40 ports: that is cluster-wide usage.
C
So this makes sense to me for the situation where you have an app that's binding to a whole bunch of different ports, and for that reason you want to run it on the host network. And maybe you don't even want to define ahead of time what those ports are, because that would just be a disaster.
C
I think you might want to clarify that in the KEP, because I didn't understand it until you explained it to me: there's one app, like a video streaming app or an FTP server or whatever kind of thing it is, that's doing this. I don't know who uses a thousand ports at once, but whatever it is, right.
D
You did a much better job explaining that use case. I have it open right now, so I can add it to the queue.
C
Yeah, it's not relevant to Windows nodes, but it was a similar argument: when we tried to say there were pods that needed access to a million ports, people were like, what the hell are you talking about? And then somebody brought up Uber, and suddenly it all made sense.
B
The way I would look at this is that, in my mind, this is a bug that Mark is trying to fix, because today I can actually set hostNetwork and it just doesn't do anything. In some ways Mark is fixing a bug: he's doing the implementation to actually get this all working, rather than having it just not work without letting people know.
D
This is fixing a validation bug and then filling in a feature gap.
A
At
least
they
don't
really
have
caps
over
there
I
think
that
I,
don't
we
could
I
kind
of
called
out
and
said
that
those
changes
were
out
of
scope.
Where
did
I?
Do
that
update
continuity?
A
I
I
did
find
exactly
where
it
needs
to
happen.
I
think
that
just
having
a
PR
that
references,
this
and
and
actually
does
all
the
wire
up
and
then
Ed
tests
is
going
to
be
sufficient
for
continuity.
Okay,.
D
Yeah
Danny
and
myself
can
help
drive
that
effort
and
who
else
is
on
the
container
D
side
so
yeah,
because
it's
really
just
modifying
the
Run
pod
unpod
sandbox
call
effectively
to
take
the
windows
whatever
type
you
just
named
it.
E
There are some users that are really relying on this feature called hostPort, kind of a CNI feature which creates a NAT mapping from your pod to a port on the node itself, and some users are doing that for a lot of different pods. Instead of relying on that NAT mapping, they would like to use host network mode for their workloads, and they were worried that the only way to do that today is to launch HostProcess containers, which they didn't necessarily want, since the full functionality of HostProcess containers removes the boundary of process isolation.
E
And
yeah
regarding
the
netting
I
think
Zappa
covered
that
we
we
have.
A
lot
of
ports
are
reserved
for
outbound
connections,
the
workloads
that
have
many
outbound
connections.
They
will
consume
ports
as
well
ephemeral,
ports
and
actually,
we
have
by
default
for
every
pod,
there's
outbound,
not
policy,
so
that
will
Reserve
first
64
ports
as
well
from
the
node
for
outbound
connectivity
purposes.
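To make the port arithmetic concrete, a small sketch using the rough figures mentioned above (about 30,000 usable ports per node, 64 reserved per pod by the outbound NAT policy). The numbers and helper function are illustrative only:

```go
package main

import "fmt"

// remainingEphemeralPorts estimates how many ephemeral ports are left
// on a node after per-pod outbound NAT reservations, using the rough
// figures from the discussion: ~30,000 usable ports, 64 per pod.
func remainingEphemeralPorts(totalPorts, podsOnNode, reservedPerPod int) int {
	remaining := totalPorts - podsOnNode*reservedPerPod
	if remaining < 0 {
		return 0
	}
	return remaining
}

func main() {
	// 100 pods each reserving 64 ports consume 6,400 ports before any
	// workload traffic is counted.
	fmt.Println(remainingEphemeralPorts(30000, 100, 64)) // prints 23600
}
```

Services, hostPort NAT mappings, and outbound connections all draw from the same pool, which is why large clusters hit exhaustion quickly.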
E
There is actually a setting being proposed, at least in AKS, to remove that outbound NATing for the pods. That's a little bit specific to AKS, and I don't know if other providers are working on it, but it will reduce the consumption of ephemeral ports drastically, I think for Windows nodes as well.
A
I would say yes, because it changes the CRI API, and usually how that works is that we have to put the changes into the CRI API, then wait for the next release of Kubernetes for containerd to vendor in the changes to the CRI API, and then do the actual implementation.
A
Usually
it's
just
oh
like,
even
though
it's
it's
it's
more
drawn
out
anytime,
there's
a
CR,
IP
I
changed
it
and
we
need
to
have
containerdy
consume.
Those
changes.
I
think
it's
automatically
kind
of
I,
think
yeah
at
the
CRI
changes
and
do
the
wire
up
in
Alpha.
Well,
it's
off
by
default
and
then
and
then
after
there's
a
container
D
like
after
there's
some
support
and
container
D4.
It
then
go
to
Beta.
A
I
need
to
drop
First
Signal
to
discuss
this
in
some
of
the
other
caps,
but
I
wanted
to
give
a
quick
update
on
the
Wind
ESR
and
wind
overlay
feature
Gates.
So
I
was
doing
some
spelunking
and
we
have
a
windiest
star
feature
gate,
which
is
still
an
alpha
that
was
added
in
114
and
it
went
overlay
feature
gate
which
was
added
in
114
as
Alpha
and
went
to
Beta
in
120.,
and
neither
of
these
feature
gates
are
tracked
by
a
cap,
the
initial
PR
kind
of
snuck
in,
and
we
like.
A
There's work that needs to be done there, plus docs updates. Since these are both Windows-specific, I'll probably try to draft a KEP for that, unless somebody else wants to. I was also going to bring this up at the SIG Node meeting this Thursday, or the SIG Network meeting on Thursday, to give an FYI that this is what we want to do, because I actually didn't even think that either of these sets of functionality was behind a feature gate until I went looking, and it is.
A
Okay,
that
yeah,
that
would
be
great
but
yeah,
I
think
as
far
as
I'm
concerned.
Both
of
these
are
pretty
stable.
I
know
like
every
kepsi
cluster
that
we
make
sets
both
of
these
feature
gates
for
cube
proxy
and
I'm
guessing
any
other
cni.
That's
using
overlay
networking
is
setting
these
automatically
and
has
been
for
a
while.
So.
C
Mike,
if
you're
doing
the
are
you
gonna
help
own
that
DSR
kept?
If
so,
maybe
if
there's
anything,
we
need
to
add
to
the
cape.