From YouTube: Kubernetes SIG-Windows 20221101
A: All right, hello everybody, and welcome to the November 1st iteration of the Kubernetes SIG-Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. I will start with a couple of announcements.

The first announcement is that there is a feature blog freeze tomorrow. Anybody who wants to author a feature blog post to go out with the 1.26 release should open at least a placeholder PR by tomorrow. I've opened one for HostProcess containers going to stable, and I need to confirm about the node service log viewer one. The actual content for the blog post doesn't need to be completed until much later in the release cycle.

The next freeze coming up is code freeze, which is next week, so we'll just need to make sure everything is merged before then. I believe that's Wednesday, yeah, that's Wednesday UTC. And then the next one is a docs PR placeholder freeze for any docs updates related to feature enhancements, and that is the following day.
B: Yes, this is Anthony. I'm new here. I haven't contributed anything yet; I'm just looking around to see where I can help, but I'm from Microsoft. I missed your meeting, but I watched the recording. I'm from the Nairobi team. So I hope to be joining most of these meetings and also to raise my hand to help with anything that I can. Currently I'm leveling up on Go; I'm new to Go, but it's quite interesting. And I joined the other meeting, for triage.

If you think there is anything I can look at that is good for first-timers, I will be happy to look at it.
A: Yeah, we can sync up and see if there's anything for the tests, like looking at some of the test issues, on Slack. That would probably be a good place to do that.
A: Okay, now we can get into the agenda. There is one announcement. Michael, did you want to take this, or should I, about the recent OS updates for HNS policy syncing?
D: Do you want to take it? Sorry, I didn't know if there was another Michael here, yeah.
A: Yeah, sure, I can take it. I was hoping David would also be here, but a while ago there were some issues that came up on Windows nodes where, if a new node was joined to the cluster, or for whatever reason a node joined the cluster, it needed to sync all the network policies.

If there was a large number of services in the cluster, it could take up to an hour to sync all of those network policies, which means any workload that got scheduled to the node, or any workload that was interacting with the node, would basically see nondeterministic behavior depending on whether the network routes were programmed in there.
A: David Schott made some changes to kube-proxy to help... okay, good, I was looking for that, thank you, James. David Schott made some updates to kube-proxy to help alleviate some of that by caching some of the things that were going on, but there were also some OS-level fixes to help ease some of that too. So those fixes are now available: in optional patches, I believe, if you have the latest kube-proxy, and in the OS updates on Windows Server 2022.

The OS fixes are available, and I believe they're on by default if you install the optional updates on Windows Server 2022, the 10C patches. If you're running on Windows Server 2019, there are a couple of extra steps to enable it for this patch release, which are documented here. But I know Julian from Relativity, who gave a talk about Windows containers, said that this went live the day before his talk, and said this does seem to address most of the issues he was having with HNS policy syncing, so pretty high confidence. He was also recommending everybody switch to Windows Server 2022; in addition to this fix, there's a lot more improved stability. So if anybody is running Windows nodes with a lot of services in the cluster, I highly recommend investigating this.
E: Sorry, I joined late. That slowness is caused by having a bunch of network policies, yeah?
A: Yeah, there's more here. My understanding is that there were multiple bottlenecks in kube-proxy, you know, just processing everything and getting all the network routes set up. Some of them were identified in kube-proxy, and then a lot more were identified in HNS itself, and so there are two sets of fixes to help address this.
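To make the kube-proxy side of that concrete, here is a minimal sketch of the caching pattern being described: do the expensive HNS query once per sync pass and answer every per-rule lookup from memory, instead of making an HNS round trip per rule. This is an illustration only; the hnsQueryAllEndpoints helper and the cache shape are assumptions, not the actual kube-proxy winkernel code.

```go
// Package hnscache sketches the caching idea discussed above (not the
// real kube-proxy code): one expensive HNS query per sync pass, with
// all subsequent lookups served from an in-memory map.
package hnscache

// endpointInfo is a stand-in for whatever HNS returns per endpoint.
type endpointInfo struct {
	ID string
	IP string
}

// hnsQueryAllEndpoints is a hypothetical stand-in for the expensive
// HNS call that used to be made repeatedly during a sync.
func hnsQueryAllEndpoints() ([]endpointInfo, error) {
	// ... call into HNS here ...
	return nil, nil
}

type hnsCache struct {
	byIP map[string]endpointInfo
}

// refresh runs once at the start of a sync pass.
func (c *hnsCache) refresh() error {
	eps, err := hnsQueryAllEndpoints()
	if err != nil {
		return err
	}
	c.byIP = make(map[string]endpointInfo, len(eps))
	for _, ep := range eps {
		c.byIP[ep.IP] = ep
	}
	return nil
}

// lookup replaces what used to be a per-rule HNS round trip.
func (c *hnsCache) lookup(ip string) (endpointInfo, bool) {
	ep, ok := c.byIP[ip]
	return ep, ok
}
```

With thousands of services, turning one HNS query per rule into one query per sync pass is where most of the speedup in this kind of change comes from.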
A: The kube-proxy updates have been out for a while, and they helped the issues a lot, but as you can see in the chart, it still took at least ten minutes to sync your policies, or to sync all of your routes. That could still cause pretty bad issues in your cluster.
E: 2,000 cluster IPs, 5,000 endpoints, okay, and 15 local endpoints, pods, 15. This is great information, because we're trying to do this scale-testing stuff for kpng and we're not sure when we'll see things fall down. Yeah, thanks, cool, yeah.
E: I have an interesting update, which is that Ricardo and Amim and Mikhail are all at my house right now, and they're living here for a week, and so we've been going through all the code and hanging out in my living room. We split up the data model so that the global data model on the server side is separate from the local data model.

On the back-end side we did some stuff there, and that's part of what we want to try to fulfill for SIG-Network: generally making the code more understandable. So we're just going through every line of the code and commenting it and stuff like that. On the Windows side, Dimitri was hacking around on it and he found an issue, and I think we are now back in a situation where we had a development-environment issue that I think I need to look at again. And one thing happened, which is that I broke sig-windows-dev-tools, but I fixed that. And then, maybe hopefully today or sometime this week, I'll be able to spin things up and start being able to test, because we haven't tested the Windows implementation for a while, ever since you did that last set of patches on it. That's kind of all I've got.

But overall it seems like it's going forward, and the goal right now is to say: let's try to take kpng's separate directories, break them up, and move them all in-tree, sort of, so that there's a staging repository entry which represents what's in kpng right now. So we're trying to stabilize and shore up the code as much as possible, so it's understandable and something that has a chance in hell of making it through a SIG-Network code review.

That's the overall idea: right now we have this gRPC interface, and we want to make that gRPC interface explicit.
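As a rough sketch of what making that boundary explicit means in practice: the server side watches the API server and computes a node-local data model, and each backend only consumes a stream of changes to that model. The type and method names below are hypothetical illustrations, not kpng's actual generated gRPC API.

```go
// Package backendapi sketches an explicit boundary between a kpng-style
// server (watching the API server) and a backend (programming the node,
// e.g. via HNS on Windows). Illustrative names only.
package backendapi

import "context"

// ServiceEndpoints is a simplified node-local view of one service.
type ServiceEndpoints struct {
	ServiceName string
	ClusterIP   string
	EndpointIPs []string
}

// Op is one change event streamed across the (gRPC) boundary.
type Op struct {
	Delete bool             // true = remove, false = add/update
	Change ServiceEndpoints // the affected entry
}

// Backend is what each proxier implementation would satisfy; the server
// streams Ops to it and never exposes raw API-server objects directly.
type Backend interface {
	// Sync applies a batch of ops, then commits them to the dataplane.
	Sync(ctx context.Context, ops []Op) error
}
```

The point of an interface like this is that the API-server-facing half and the kernel-facing half can evolve and be maintained separately, which is what the out-of-tree discussion below hinges on.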
E: But we're kind of pretending right now that the easiest thing to do is to have two kube-proxies in-tree at once. I totally don't know why we're doing that, though, so okay, we'll see. But exactly, the hard part is figuring out how to get it all into staging and getting the SIGs to sign off on that.
But it's not a time sink for us, because either way we need to clean up the code and comment it. So, you know what I mean, it's like a trojan horse to just keep getting the work done; that's obvious work that we need to do no matter what.
F: A question: if we start working on kpng, can we also move in with you?
E: Yeah, so yeah, I still have this outstanding proposal. I mean, you're joking, but you bring up this thing of: what if other people in SIG-Windows were interested in it? Well, I mean, we're still SIG-Windows, and I don't really see why we couldn't just have an out-of-tree kube-proxy, you know, I mean, that we had complete control over. So I put this on the mailing list, and nobody seemed to have a strong... I don't know.

Like, why not? Why do we even need to be an in-tree kube-proxy? I mean, if we can do a better job maintaining our own kube-proxy out of tree... I don't even see why we need an in-tree kube-proxy, to be honest. This is my strong opinion here, but nobody else... yeah, I think that...
F: Yeah, there's a tendency to split apart bits from the monorepo. For example, even the e2e testing framework itself: there was recently just a new release for it as well, which basically solidifies the separation of the e2e framework itself. So, yep, we shouldn't really rush to just merge it back into k/k again.
E: I agree, but if you agree with that, you should respond to the thing I posted on the mailing list, because, you know, it would be heresy for somebody to just go and start taking kube-proxy out of the tree without some kind of community consensus. So if you have opinions on that... I know, Mark, you had some concerns, and concerns are great too, you know what I mean, but I think we should talk about this.
A: Yeah, so I think that we want to move the winkernel proxier out of tree and maintain that, but we don't want to maintain the rest of the parts of kube-proxy that interact with the API server, keeping that up to date with all the enhancements that are evolving in SIG-Network too. And my understanding is that with the in-tree kube-proxy today we can't do that, those are coupled, but with kpng part of the design goal was to be able to do that, right? So...
E: Yeah, I mean, with kpng now we could, essentially. Yeah, I mean, the SIG-Network API is pretty... I mean, the amount of stuff that's changing in the API, it's not like we're adding new fields every week, you know what I mean? I think we could do it; I think we could kind of do it now. I think, you know, there are things like topology hints or whatever, I guess, that might change. You know what I mean...
D: Jay, if you move it... I'm just kind of curious. So in this model, you know, we deploy a Kubernetes cluster, we're not deploying kube-proxy, so we just delete that DaemonSet effectively, or, I don't know if you can just say don't deploy it, and then you're running kpng. What changes have to be made? Like, how do you actually deploy it using this approach?
A: So I think it goes back to... James did a paper, like a document, that laid out the changes we probably need to make so that we don't need to have cluster-specific information in a PowerShell script that's used to, you know, start kube-proxy, so I...
E: This is great, okay, yeah, so you already have that. So this thing is a DaemonSet, and you have the container that's talking to the API, and then you have the separate container that's talking to the Windows kernel. Yeah, that's great, and you could even make that one process if we wanted to, or two, or you could do the in-memory thing as one, but that's good enough, I mean, that's, you know, cool. Yeah, so it's basically the same thing, Mike; it's not functionally different from an end user's perspective.
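For reference, the deployment shape being described would look roughly like the sketch below: a single DaemonSet whose pod runs one container that talks to the API server and a second container that programs the Windows kernel. The image names and arguments here are placeholders for illustration, not a published kpng manifest.

```yaml
# Illustrative only: one DaemonSet, two containers, per the discussion above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kpng-windows
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kpng-windows
  template:
    metadata:
      labels:
        app: kpng-windows
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        # Watches the API server and serves the node-local data model.
        - name: kpng-server
          image: example.com/kpng:latest   # placeholder image
          args: ["kube", "to-api"]         # placeholder args
        # Consumes the local model and programs the Windows kernel (HNS).
        - name: kpng-backend-winkernel
          image: example.com/kpng:latest   # placeholder image
          args: ["local", "to-winkernel"]  # placeholder args
```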
E: It's only that, as a SIG, it's a strategic decision for us to say we're going to have our own Windows kube-proxy that we maintain completely on our own, and we just push YOLO to it.
E: Exactly, which means we also take on a few more responsibilities. But I mean, right now... I mean, the last Windows CVE was in the Windows userspace proxy, and we did that, right? That was the last one that I remember, and, you know, Antonio showed it to us, and I think we just deleted the code. He showed me the patch, and I took the patch and I deleted the code, and we submitted a PR and we merged it.

So the process wouldn't be any different, you know, I don't think, at least.
E: We could decide... I mean, if we did that, I could sort of say: okay, well, the KEP is still there, and if folks want to continue working on the KEP, that's fine, but we've solved the problem we have from a Windows perspective, which is that we now have a lighter-weight kube-proxy that's more configurable, and we could focus on just maintaining that and let SIG-Network finish the KEP. I mean...
I, of course, would keep working on the KEP if SIG-Network wanted us to, but I think at that point we've probably added more value than writing a KEP, because we have a working implementation that we're maintaining, and we'll find bugs faster than anybody at that point, right? So I think for me it might change where I focus, I don't know, but, you know, if we could think about that...
D: That's what I was curious about. I was like: where are we dedicating energy in regards to kpng? Are we finishing kpng? Are we trying to take the KEP to the end zone and get it approved, merged, and have sign-off from SIG-Network?
E: Just like us, you know. For us it's a lot easier if that KEP was merged, because we can say: look, we know this thing's going to be around forever, it's got sign-off from SIG-Network, and it's a lot easier for us to pull the trigger at that point, right? So any organization in the world has the same little existential crisis to decide about, right?
Yeah, so yeah, it's not really a technology thing, it's more like a strategic thing: where do we want to be... how YOLO do we want to be? And so, you know, but yeah, there's a thread, folks, so yeah, definitely, if y'all have opinions, let me know, positive or negative, you know what I mean, either way, because either way it's fine.

You know what I mean? But I think it could be a lot of fun if we just... I don't think the overhead cost would be that high; I think it would be easier for us to maintain, and I think we can get some people involved who will do releases for us. You know, we can create a pretty good testing and release workflow and stuff, and I think, yeah, I think the overhead would be surmountable. I don't think it would be that high, but it would be non-zero; of course, you know, it would be work, right?

So maybe that's the question, right? Maybe the question is: hey, can you find two people to maintain this who are committed to doing that for the next two years or whatever? If so, yeah, maybe this is not a bad idea. If not, we're better off just staying in-tree. I mean, maybe that's the decision, and then maybe we can come back and post that to the SIG, right, and we can say: hey, we've got this new project, does anybody want to maintain it?
A: James, you might be interested in that too: that's the user guide and the setup. Yeah, we should do that, and maybe try and link to that with the 1.26 website update; like, I think we were planning on doing that, but...