From YouTube: Kubernetes SIG Windows 20220405
A: All right, hello everybody, and welcome to the April 5th, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. We'll start with some announcements; let's continue with the 1.24 release schedule. The important deadlines are: test freeze is tonight, and all of the docs PRs for features that are supposed to go into the 1.24 branch should be ready for review or ready to merge by today. I know that we're planning on filing an exception, or possibly filing an exception depending on how reviews go, for the node service log viewer enhancement. Has there been any movement on docs updates for that set of features? I wonder if that might be a question the release team asks before they let it into the release: are the docs updates ready? I just thought of that now.
A: How complete should that be? That one was so that they knew who would be on the hook for doing the docs updates. They want the PRs to be in a reviewable or mergeable state by today as well. Okay, let me double check that. Yeah, so they have a couple of deadlines: they want the PRs ready for review by the end of today.
A: ...content that might be helpful. Yeah, okay, I'll write something up. Okay, and while we're here, I guess we can just go over the next set of milestones. I don't think we're planning on doing any blogs, SIG Windows isn't, but those should be ready for review by tomorrow if there are any. The next set... so yeah, they want all the docs PRs reviewed (that's confusing) by the 12th, so there still is some time. And then the RC release, RC0, should be out on Monday. So that's good!
A: That's all I had for announcements. If anybody else has any announcements, feel free to add them to the doc, or the chat, or the comments here. If not, we can go on. We'll welcome new contributors again, if there are any; let me check to see who's on the call. Yeah, if there's anybody who wants to introduce themselves and, you know, say hi and figure out how to get involved, feel free to introduce yourselves now and we can try to help you out.
A: Okay, we've got a couple of agenda topics today; it looks like both are networking related. So let's jump in. David, did you want to talk about this a little bit?
C: Sure, I can go over this. We've been doing some testing with kube-proxy in environments with, you know, a lot of endpoints, or pods, as well as services running in a given cluster. We noticed that the time taken to plumb all the HNS rules was taking a long time, and we realized that kube-proxy is doing a lot of expensive calls into HNS when trying to sync the rules. So we wanted to optimize that slightly; it's pretty short before the release milestones.
C: So we tried to make the changes as close to the previous logic as possible, so that we avoid regressions and can focus more on validation, and we hope to follow up with more optimizations in the coming months. But basically this brings down the time on Windows Server 2022 dramatically: over ten times less time taken to plumb the rules in HNS for 2,000 services.
C: There are also HNS changes coming on Windows Server 2019, so the kube-proxy changes alone won't really make a big difference on Windows Server 2019 today, because there's a different bottleneck in the operating system and the way the policies are created. So we have kind of another hotfix coming to Windows Server 2019 that, together with this kube-proxy binary, would reduce the time taken to plumb services there as well, but ultimately the best results occur on Windows Server 2022.
A: Okay, that's good. So for this meeting, it looks like there's maybe a little bit of discussion in the chat too. What do we want to do with this? Do we want to try to have this merged into 1.24, or do we want to hold and wait until 1.25 and then backport?
A: I think the release team is willing to accept this into the 1.24 release, provided that we can have somebody from the winkernel proxier side approve it.
A: I think that if we want to get it into 1.24... let me double check, but we probably want it to happen today. Yes.
A: Yeah, I mean, I think these are good improvements. I think we should try to just merge this, especially because I don't know when the Kubernetes main branch would be open again to put it into the 1.25 branch so that we can backport it to the other releases. It'd just be easier and quicker to get it in now, provided that it passes the bar and doesn't cause any other regressions.
A: Oh yeah, and if we get it into the 1.24 release, we can backport it to 1.23 and 1.22. Earlier on the call you did mention, you said that, because it's a bug fix (I didn't hear the exact wording), you tried to match the exact logic. Is this the correct fix, or would there be a reason to go and re-architect this once 1.25 opens?
C: Yeah, we want to re-architect kube-proxy more and have more optimizations, but those are too risky. So for now we kept the core logic pretty much the same and just cached as much information as possible, instead of repeatedly calling into HNS.
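The caching approach David describes can be sketched roughly as follows: instead of issuing one expensive HNS call per service during a sync loop, query HNS once and answer the remaining lookups from a local map. This is an illustrative Go sketch with invented stand-in types (`fakeHNS`, `endpoint` are not real hcsshim or kube-proxy APIs):

```go
package main

import "fmt"

// fakeHNS stands in for the Windows Host Networking Service; every query is
// treated as an expensive cross-process call, which is what we count here.
type fakeHNS struct{ calls int }

type endpoint struct{ id, ip string }

func (h *fakeHNS) QueryEndpoints() []endpoint {
	h.calls++ // each call crosses into HNS, which is slow
	return []endpoint{{"ep1", "10.0.0.1"}, {"ep2", "10.0.0.2"}}
}

// syncUncached queries HNS once per service: O(services) expensive calls.
func syncUncached(h *fakeHNS, services []string) {
	for range services {
		_ = h.QueryEndpoints()
	}
}

// syncCached queries HNS once per sync loop, builds a local cache, and serves
// every per-service lookup from memory.
func syncCached(h *fakeHNS, services []string) {
	cache := map[string]endpoint{}
	for _, ep := range h.QueryEndpoints() { // single expensive call
		cache[ep.ip] = ep
	}
	for range services {
		_ = cache["10.0.0.1"] // cheap in-memory lookup per service
	}
}

func main() {
	services := make([]string, 2000)
	a, b := &fakeHNS{}, &fakeHNS{}
	syncUncached(a, services)
	syncCached(b, services)
	fmt.Println(a.calls, b.calls)
}
```

With 2,000 services this drops the number of HNS round trips from 2,000 to 1 per sync, which is the shape of the win being reported, even if the real kube-proxy change is more involved.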
A: Okay, yeah. So I think the next steps are to try and see if Sravanth can review, to get the winkernel proxier approval. We should also ask Jay to take a look.
A: Okay, we can go into the next item. James, did you want to discuss this, or I can provide some background?
E: Yeah, yeah, I can go ahead then. So, last night there was a pull request that was merged, which is... that one, exactly.
E: Basically, there are a bunch of tests in conformance which are excluded from Windows runs. They are typically tagged with a LinuxOnly tag, but most of them were also limitations from Docker, for example. Since we've switched to containerd, we could basically enable more and more tests, especially since we've also introduced HostProcess containers, which can also support host networking.
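For context, Windows e2e jobs exclude tests by matching spec names against a skip regular expression; `[LinuxOnly]` is the tag used upstream, while everything else in this sketch (function names, sample spec names) is invented for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// Windows CI jobs pass a skip pattern to the e2e runner; any spec whose full
// name matches is never run on Windows nodes.
var windowsSkip = regexp.MustCompile(`\[LinuxOnly\]`)

// runnableOnWindows filters out specs carrying the LinuxOnly tag.
func runnableOnWindows(specs []string) []string {
	var out []string
	for _, s := range specs {
		if !windowsSkip.MatchString(s) {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	specs := []string{
		"[sig-network] example spec one [LinuxOnly]",
		"[sig-network] example spec two",
	}
	fmt.Println(runnableOnWindows(specs))
}
```

Removing the `[LinuxOnly]` tag from a spec, as the merged pull request did, is what makes that spec start running in the Windows jobs.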
E: A few tests that have been enabled for Windows by removing that LinuxOnly skip tag are currently failing in the capz job, which is using Calico, but only for overlay networks, for Calico at the very least. But we did see that for containerd with Flannel as an overlay, it does work. This is an example of where this test is working; I'm going to paste the test name there. So that passed at least once, and it's probably going to run another one today.
E: Now, the plan for now is to exclude those two tests from the capz master jobs with overlay networks. But ideally they should be addressed, and we'd have them working for other CNIs as well, for overlay networks.
E: Technically it should work; we've already seen Flannel do it and pass, and other CNIs should also be able to do it. We're talking about conformance tests here, so it would be a good idea to conform to them, so to say.
A: Yeah, so I actually see Jay just got online; he might be interested. A couple of comments: I'm actually a little bit surprised that this PR merged yesterday, just given where we are in the release, especially given that it was open for many, many months, but that's neither here nor there. You might be interested in this, Jay, and anybody who's interested in Antrea, too.
A: That test is passing with an overlay-and-Flannel configuration, which leads us to believe that it's not an issue with Windows Server, or with overlay networking on Windows; it's an issue with Calico. And I think, as Claudiu just mentioned, since these are conformance tests, it would make sense for them to have the same behavior across all of the different networking configurations.
E: You're right. Well, it's also worth noting that those two tests, which are failing on Calico overlay, pass on all l2bridge or sdnbridge networks.
E: So this is another of those things that doesn't have matching behavior across CNIs and network types.
C: So they changed the hcsshim version. You go to the commit, the specific commit where the changes are, go to files... yeah, changed.
C: Yeah, yeah, and then... yeah, "changes from all commits".
C: You see the if on AddHostRoute: adding this network policy ensures, for overlay networking, that node-to-pod communication will work, at least for local pods that are running on that node.
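A rough model of what's being discussed, with invented type and policy names (this is not the real HNS/hcsshim schema): on an overlay network the CNI needs to attach an extra host-route policy to each local endpoint, otherwise traffic from the node to pods on that same node doesn't work.

```go
package main

import "fmt"

// policy is a simplified stand-in for an HNS endpoint policy.
type policy struct{ Type string }

// endpointPolicies sketches the branch under discussion: overlay networks get
// an additional host-route policy on each local endpoint.
func endpointPolicies(networkType string) []policy {
	ps := []policy{{Type: "OutBoundNAT"}}
	if networkType == "Overlay" {
		// without this, node -> local-pod traffic fails on overlay networks
		ps = append(ps, policy{Type: "HostRoute"})
	}
	return ps
}

func main() {
	fmt.Println(endpointPolicies("Overlay"))
	fmt.Println(endpointPolicies("L2Bridge"))
}
```

This matches the observation later in the meeting that l2bridge configurations pass without the extra policy while overlay configurations need it.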
E: One quick question, David. You just said that basically it fixes the node-to-pod communication for the same node. Same-node communication: basically it ensures the communication between the node and a pod on it, but not necessarily cross-node?
C: No, no, not necessarily, but the pods can still connect to other node ports, for example; those tests would still be succeeding.
C: To reach out to a pod, you need this change for it to work locally, at least. I'm just thinking of a possible difference between Flannel and Calico, because it seems that one test was passing on Flannel, and this might be something that's unique to Flannel, that I know we have worked on in the past and added to Flannel. But I don't remember us adding something similar to Calico.
D: So it does have AddHostRoute; I just looked it up.
A: Okay, I guess. Okay, and...
D: Yes. So we did see, David, we also did see some issues with Calico's DSR support; I put in a PR to start to fix that. Is it possible that this is causing problems because they don't have the right DSR or loopback configuration?
A: Yeah, pretty similar tests, yeah. Okay, anybody who's working on Antrea may want to double check the behavior on Antrea, I think.
F: Antrea passes this, I think, for sure, right? Amim? I mean, we run this test in our CI now, right? Well...
A: Okay, I think we could finish up discussing this offline, since we're almost out of time.
F: And I also... I never thought about the fact that we're going to be enabling new Windows conformance tests. So maybe, Claudiu, can you file an issue or something for what we need to add to the operational readiness filters and KEP, so that we get all of that in there?
E: There aren't too many of them. I don't know how many; like ten-ish tests, I'm not exactly sure. Okay, I didn't count, though.
F: Yeah, that's fine. We can count them up and figure it out. If you just file an issue about what things we should count, how often we should count them, and what we should track, I'm sure we can do the rest; or you can file a PR, or whatever. I'd file the issue myself, but I'm not quite sure exactly what the workflow is for adding Windows conformance stuff, so I just want to make sure I do the right thing, you know.
E: Okay, where do you want me to open this?
F: Oh, anywhere. I mean, maybe it's in enhancements, right, because we're still working on the operational readiness KEP. So you could either add a PR to the operational readiness KEP describing what these new tests are that you're adding, or you could just file an issue as a reminder. I don't know who's working on it... Chinchi's working on it now, right? So just assign an issue to Chinchi, or something, or me, or Amim, to talk to you about the new tests we're adding, or whatever, you know; it could be anything, right.
E: Typically, you cannot enable host networking for Windows pods, but if you're using HostProcess containers, you can. So basically, for those cases, we are creating privileged containers, or HostProcess containers, and enabling host networking on them in order to pass the tests for whatever CNIs are passing right now. So some of the changes are to the tests themselves, but not necessarily to how kubelet or containerd works.
E: No, I don't think so. We would have to typically add a label for every single pod spawned by the test itself.
E: Jay, typically when you say that you're spawning a different type of pod, that's not really the case so much anymore, because most of the e2e test images support both Linux and Windows, so we're basically using the same test images for both OSes. But there are quite a lot of places...
E: Yeah, it's just a label, but there are all sorts of different checks in the tests themselves which rely on that node OS distro setting, for all sorts of cases and scenarios. This is one of them: for those tests, we're basically looking at the node OS distro, and if it is Windows, we are basically setting hostProcess to true, yeah.
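The check Claudiu describes might look roughly like this sketch, with simplified stand-in types (not the real `k8s.io/api` structs): when the node OS distro is Windows, the test spawns a HostProcess pod, which is what makes host networking possible there.

```go
package main

import "fmt"

// podOptions is an invented, simplified stand-in for the pod security and
// networking settings an e2e test would apply.
type podOptions struct {
	HostNetwork bool
	HostProcess bool
	RunAsUser   string
}

// optionsForDistro sketches the branch: ordinary Windows pods cannot use
// hostNetwork, so on Windows the test asks for a HostProcess container, which
// runs in the host's network namespace.
func optionsForDistro(nodeOSDistro string) podOptions {
	if nodeOSDistro == "windows" {
		return podOptions{
			HostNetwork: true,
			HostProcess: true,
			RunAsUser:   `NT AUTHORITY\SYSTEM`, // HostProcess pods run as a host identity
		}
	}
	// On Linux, hostNetwork works for a regular pod.
	return podOptions{HostNetwork: true}
}

func main() {
	fmt.Printf("%+v\n", optionsForDistro("windows"))
	fmt.Printf("%+v\n", optionsForDistro("gci"))
}
```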
E: But there's at least one test which basically runs on a hybrid cluster. Basically, it spawns pods on both Linux and Windows and makes sure that they can talk to each other, which is an extremely useful use case to have, you know, just to make sure that you can properly work in a hybrid environment.
F: I think I might ping you in Slack tomorrow morning, Claudiu... my morning or your morning? Your morning, my morning.
F: I have a little update to put in the doc. I am talking to Sravanth about handing the kube-proxy kernel-space KPNG port over to Microsoft. I was really excited that Microsoft is officially investing now in moving kube-proxy over to KPNG, so if anybody wants to help, or be a part of that conversation as we hand that code over, let me know.
D: All right, so are we done with the official meeting, or is there anything else we wanted to go through?