From YouTube: Kubernetes SIG Windows 20220621
A: All right, welcome to SIG Windows. This is a recorded session. This is a CNCF project, and so we follow the CNCF code of conduct. If you have any questions or concerns about that, you can reach out to any of the leads here, or to anybody else involved in the Kubernetes project.
A
So
welcome
today
is
june,
21st
2022
and
we're
going
to
get
started
with
sig
windows
meeting.
So
mark
is
out
this
week
and
next.
So
you
have
any
questions
for
him.
You
can,
you
know
direct
them
to
me
or
or
claudio,
and
then
a
couple
things
I
wanted
to
announce
before
we
get
started
with
the
meeting.
Is
we
have
host
process
containers?
That's
gonna
stay
in
beta
for
125.
A: After quite a bit of discussion, we decided that the best thing to do here was to get the changes required for the new volume mounting behavior into containerd 1.7, which isn't scheduled to release until after 1.25. So, once that's out in 1.7...
A
We'll
then
have
the
ability
to
have
a
little
bit
more
feedback
from
customers,
and
it
just
makes
a
smoother
upgrade
path
and
those
types
of
things,
and
so
it's
going
to
stay
in
beta
for
125
and
then
and
126
we'll
plan
on
going
too
stable.
So
the
enhancements
open,
I'm
gonna.
I
think
it
just
needs
an
lgtm
at
this
point.
So
if
you
want
to
read
it
over,
ask
any
questions
otherwise
end
of
the
day,
I'll
probably
give
that
lgtm
for
hospice
containers.
A: Okay, enhancements freeze is this week. It got pushed back to this Thursday from last Thursday. I know we have a couple of enhancements going in, so we'll do a check-in a little bit later, but I just wanted to get that out to everybody.
A: All right. We typically leave a little bit of room for new contributors, if anybody wants to say hello. I see a couple of new names, I think.
B: Hello. I've got the wrong camera going, but that's okay. I know James on this call from a co-engineering thing a while back. I work at Psycor, and I didn't realize this group existed, so I just wanted to introduce myself. I'll periodically pop in and see what's new and what things are being worked on.
A
All
right
so
for
the
agenda
we,
this
was
this
one.
I
added,
I
think,
david's,
on
the
call
as
well.
Last
week
you
may
have
seen
the
email
go
out
to
kubernetes
dev
that
the
122
and
123
patch
releases
got
delayed
by
a
day.
This
was
due
to
a
back
port
that
we
put
in
and
we
didn't
catch
it
in
ci
until
kind
of
the
last
day
or
so.
A: The patch we were working on was an improvement to kube-proxy for when there are a lot of services and a new node is coming online; the sync time for that can be significant, in the hundreds of minutes. So we, or David, patched this and has some fixes. When we were backporting it to 1.23 and 1.22, which still included dockershim, it missed.
A
When
we
ran
the
pr
sub
the
pre-submits,
we
ran
them
against
container
d,
but
we
missed
them
against
docker
shim
in
our
ci.
We
caught
them
in
docker
shim
and
we
ended
up
reverting
those
tests
and
david
is
going
to
reopen
those
back
ports
early
in
this
cycle
for
the
next
release
and
get
them
out.
A: Yeah, we caught it before the release went out, so nobody should be affected by it. But it's a good reminder to upgrade your CNIs to use the latest versions of the HNS API.
D: Yeah, I did double-check. It seems like pretty much every CNI, in moving to containerd, had to move to the v2 flow, and as part of that they were using the HCN APIs.
D
I
think
the
this
issue
should
only
occur
on
container
d.
I
did
double
check
calico
and
when
overlay
also
was
using
the
v2
apis
already,
so
I
I
don't
think
there
should
be
any
impact
to
anyone.
D
The
message
is:
if,
if
anyone
is
using
v1,
if
anyone
is
a
cni
provider
or
uses
cni
plugins
make
sure
you
use
the
v2
apis,
which
you
should
should
already
be
using.
A
Cool,
I
think
openshift
was
on
a
different
scene.
I
I
see
many
on,
don't
know
why
maybe
that's
not
the
same
person,
so
I
I
think,
but
I
think
they
just
upgraded
to
the
the
newest
versions
of
the
cmi,
so
I
think
we're
good.
There.
D
And
the
andrea
guys,
I
think
you
guys
are
using
the
win
users
base
proxy
anyway
right
so
that
there
would
be
no
impact
here
as
well.
A: Okay, yeah, so they did upgrade to v2. Cool. All right, any other questions on that?
A
Okay,
great,
so
I
don't
know
if
we
have
everybody
that
we
need
for
the
check-in,
but
I
just
wanted
to
add
this
since
the
enhancements
coming
up
the
end
of
the
week
here.
If
anybody
had
anything
or
any
updates
on
the
the
enhancements,
I
think
pod
os.
I
think
ravi
made
the
changes
and.
A: Yeah, I think this one is set; it looks like we are probably going to be going to stable for Pod OS, which is pretty exciting.
A: Okay, cool. Is there anything else anybody wanted to discuss today? It's kind of a quiet meeting.
F: Hi, may I give a short update about the upstream Windows tests?
F
Yeah,
so
I
have
a
read
a
lot
of
the
tests
currently
upstream
in
the
linux
conformance
test,
and
I
find
that
most
of
tests
don't
have
the
node
selector.
So
even
if
the
tests
that
cover
the
loss
of
expat
of
the
networking
part.
Well,
it's
still
like
hard
to
reduce
that
for
windows
because
we
need
to
add
an
also
no
selector
in
each
of
the
case
when
they
create
the
power,
create
their
resources.
So
I'm
thinking
that
maybe
it's
even
easier
to
add
the
new
test
directly
to
the
windows
folder.
F
So
I
started
to
do
that
and
I
have
like
a
draft
pr
in
the
in
in
a
windows
e3
test-
and
I
haven't
finished
that
so
what
I
added
is
is
a
cluster
ip
service
test
to
the
service
doctor
file
and
I
also
create
another
file
to
test
the
stateful
set,
which
is
the
which
is
part
of
the
networking
test
in
the
cap
that
jay
previously
I
mean
previously
merged
into
label.
F
So
that
is
what
I'm
doing
and
also
after
the
networking
part,
I
will
go
ahead
to
the
storage
and
network
policy
and
other
part.
So,
if
anyone
that
are
interested
in
contributing
to
the
windows
e3
test,
please
let
me
know-
and
probably
we
can
work
together.
F
So
this
is
about
the
uprightness
test
and
also
a
quick
update
about
refactoring
the
gmsc
that
this,
and
that
is
the
other
pr
that
I
opened
a
few
weeks
ago.
So
I
test
so
I
test
it
in
the.
F: And the other one that I have is the GMSA PR. I already checked the DNS and the connectivity between the pod and the node, and between the node and the Active Directory. We actually found that the resources are accessible using the IP address, but not the domain name. But the test itself looks good to me, so I'm thinking that maybe there's something wrong in my environment.
F
So
if
that
is
the
case,
I'm
wondering
that
if
you
have
the
time
to
test
it
in
the
in
the
other
environment,
so
if
that's
it,
if
it
can
pass
the
edge
environment,
that
means
that
the
test
is
itself
is
correct
and
we
can
still
go
ahead
and
merge
the
pr.
F
So
I
don't
know
if
that
if
that
sounds
good
to
you,
but
we
can
like
work
on
this.
If
you
like,
have
time
on
that.
A: It's on my list to test that PR against the Azure environment; I'll probably get to it this week. When I was doing the GMSA work previously, I was able to get it to work with the DNS name.
A
I
think,
if
I
remember
correctly,
you
have
to
use
the
fully
qualified
like
dns
name,
so
I
don't
know
if,
if
you're
using
that
or
not
but
and
then
you
need
to
have
like
the
dns
on
the
node
setup
and
then
the
dns
and
the
pod
set
up
there's
just
quite
a
like
moving
parts
there.
So
it
could
be
any
little
thing
going
wrong.
F
You're
right,
I
have
done
dns
on
the
node
and
pause
setup
and
for
the
full
domain
name.
I
I
actually
think
I'm
using
the
full
domain
name,
so
I
can't
really
see
what's
probably
wrong
in
my
environment,
from
something
else
when
I
starting
node
or
registration
or
whatever
other
things
yeah.
F: So actually, yeah, whether we add the toleration or the node selector: I was thinking about whether I should modify the existing Linux tests or add new Windows tests. Now I'm thinking maybe it's easier to add new Windows tests, and that is what I'm doing right now. So do you mean that you want me to modify the existing Linux tests and make them work for Windows?
E: Yeah, that's how we typically do our tests. There's only one exception: we do have an e2e test for Windows for hybrid networking, in which we spawn both a Linux container and a Windows container. Of course, for the Linux container we also added a toleration for that taint that we previously mentioned.
E
I
think,
ideally,
we
wouldn't
copy
or
duplicate
other
tests,
because
that
would
basically
mean
that
we
would
have
to
maintain
them
as
well.
If
there's
only
one
test
and
it
actually
gets
updated
in
time,
it
will
also
be
updated
for
windows
as
well
without
having
to
have
any
worries
about
it.
E: Yeah, I think that's the main approach we had with testing as well. Whenever there was a new Linux-only test that we were considering enabling for Windows, we would rather have the same test, with some changes to make it work for Windows. I think that's a reasonable approach.
F: But if we add the toleration, which means that we allow the pod to be scheduled on that node, it doesn't really guarantee that the pod lands on the Windows node, right? If there are other Linux nodes...
F
So
if
we
add
the
tent
to
the
to
to
to
to
to
the
part
or
the
node,
which
means
that
we
allow
this
part
to
be
scheduled
on
the
windows
mode,
oh.
E: I see. Oh yeah, that makes sense. You can actually have a node selector on the pod, which basically specifies, "hey, I want this pod to land on a Linux node," and if you also add the toleration for that, it will guarantee that you'll have a Linux pod on Linux, basically...
E: ...fully scheduled there.
F
Is
exactly
what
I'm
thinking
so
so,
for
example,
if
I
want
to
test,
let
me
see
if
I,
if
I
want
to
test
the
a
service
that
should
should
have
the
end
point
on
the
windows
pod
on
the
on
the
windows.
Note,
if
I
only
haven't
had
the
tent,
I
can't
guarantee
that
I
need
to
have
the
node
selector,
so
I
can
guarantee
that
I
schedule
on
the
windows
node.
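The combination being discussed, a node selector that forces placement on a Windows node plus a toleration that permits scheduling despite a taint, can be sketched as a pod manifest. This is a minimal illustration, not taken from any PR in the meeting; the taint key `os=windows:NoSchedule` and the pod name are assumed examples, while `kubernetes.io/os` is the standard node label.

```python
import json

# Minimal pod manifest sketch: nodeSelector guarantees placement on a Windows
# node, and the toleration allows scheduling despite an assumed
# "os=windows:NoSchedule" taint on those nodes (the taint key is hypothetical).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "win-test-pod"},
    "spec": {
        "nodeSelector": {"kubernetes.io/os": "windows"},
        "tolerations": [
            {
                "key": "os",  # assumed taint key, adjust to your cluster
                "operator": "Equal",
                "value": "windows",
                "effect": "NoSchedule",
            }
        ],
        "containers": [
            {
                "name": "agnhost",
                "image": "registry.k8s.io/e2e-test-images/agnhost:2.39",
            }
        ],
    },
}

print(json.dumps(pod, indent=2))
```

With only the toleration, the scheduler may still place the pod on a Linux node; the selector is what makes the placement deterministic.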
A: Did you say you had a PR, like a draft PR?
A: Cool. Maybe you can link that here, and we can discuss a little further by looking at that PR.
E: And also, by the way, here's the example I mentioned: a test in which we basically spawn a Linux pod and a Windows pod in the same test, and how we're handling the node selector and the toleration for the node taint. At the bottom there's a function called createTestPod, which handles that part, yeah.
F
I
will
use
some
functions
in
my
pl
to
do
that
test
because
I,
but
my
main
point
here,
is
that
I
I
don't
think
it's
a
good
idea
to
modify
the
existing
linux
test.
So
I
add
new
test
cases
to
the
windows.
E3
folder
directly.
E
Yeah
that
works,
if
you
would
have
to
make
a
huge
amount
of
changes
to
make
a
linux
test
to
pass
on
windows.
I
think
in
that
case
it
makes
more
sense
to
just
have
it
in
the
windows.
Folder
just
have
it
for
us.
E
Especially
since
you
also
have
the
skip,
unless
not
always
this
throw
is
windows
yeah
that
makes
perfect
sense
to
be
in
the
windows
folder
then
yeah,
yeah.
C
G: Yeah, well, this is really on behalf of Dimitri, so huge thanks to Dimitri. He's taken over the original KPNG kernel-space PR, and I just pinged him this weekend and heard that he now has it writing out load balancers and endpoints and so on and so forth.
G
So
pretty
pretty
happy
to
hear
that
I'm
gonna
steal
his
fork
and
do
some
kind
of
a
thing
where
we
can
get
it
all
the
code
all
together,
and
so
I
can
start
sort
of
testing
against
it,
and
so
it's
probably
late
for
him.
So
he's
not
here
today,
but
I'm
just
updating
on
his
behalf.
So
big
thanks
to
dimitri
and
the
folks
at
microsoft.
For
for
helping
on
that
front.
A: Cool. For anybody not familiar with what KPNG is: it's kube-proxy next-gen, an effort to bring kube-proxy out of tree and make some much-needed improvements. I guess Jay might be able to explain a little bit more.
G
Yeah,
that's
pretty
much
it
we're
we're
moving
the
kpg
out
a
tree
and
so
from
a
windows
perspective,
it'll
yeah!
It's
not
that
from
a
windows
perspective
it'll
give
us
the
ability
to
run
the
coup
proxy
in
a
manner
where
the
thing
that
talks
to
the
api
server
is
separate
from
the
windows
process
that
writes
the
rules.
So
it'll
give
us
a
much
lighter
weight
way
of
doing
proxing
and
also
it
will
give
us
the
ability
to
add
whatever
configuration
options
we
want
to
the
windows
proxy
without
having
to
be
coupled
or
rely
on.
G: ...the overall lifecycle of the Linux kube-proxy. Folks have seen issues in the past where, for example, we wanted to add a priority to the Windows kube-proxy backend, and it turned into a long debate about how that would be one global parameter, but there are other ones, right? Yeah, cool. That's all I got, and again, sorry for being late; I have a critical meeting that happens at the exact same time as this one. But I'm always happy to stay afterwards if folks want to talk about things.
E: Not this time, you know. I mean, I think it might be worth mentioning that Kubernetes 1.21 is approaching end of life next week, so we'll probably have to do some cleanup in the test jobs for that.
B: I'm wondering, since it's my first time here: obviously I joined because I was following an issue that we have, and then I ended up on Slack, and then found out about this group through James.
B
How
does
it,
how
does
how
do
these
issues
get
triaged
and
how
does
it
get
assigned
to
someone
on
github?
I
just
I
don't
want
to
circumvent
anything,
I'm
just
curious
if,
if
they're,
it's
an
issue
reported,
maybe
some
steps
to
reproduce
like
what
happens
at
that
point
for
issues
to
to
get
looked
at
and
I'm
being
no
sarcasm
here.
I
really
wanted
to
follow
the
process
here.
A
Yeah,
so
so
we
triage
the
issues
we
do
that
bi-weekly
and
then
you
know
people
do
it
periodically
throughout
the
week
and
once
we
determine
that
it
is
an
actual
issue,
we
try
to
get
somebody
assigned
to
it
to
fix
it.
The
clearer
we
have
for
steps
the
easier
it
is
to
like
reproduce
the
easier
the
faster
this
process
goes,
but
it's
it's
mostly
a
volunteer
product
thing.
Okay,
it's
a
matter
of
kind
of
you
know
someone
has
the
bandwidth
to
pick
one
up
and
fix
it.
A
If
it's
a
critical
bug-
and
you
happen
to
have
somebody
that
is
really
passionate
about
it-
we
encourage
them
to
give
it
a
try
and
try
to
reproduce
it.
Try
to
fix
it.
We're
here
we're
here
to
help
for
that
in
sig
windows
as
well.
You
can
always
ask
questions
on
doing
dev.
B: Digging deeper into the networking, you know, the internetworking of pods on Kubernetes, is just not our thing; we're at the app level, but...
A
So
the
the
issue
itself
is
looks
like
q
proxy
is
writing
the
load,
balancer
policies,
but
then
on
an
update
if
they
refresh
the
pod
multiple
times
it
somehow
drops
that
policy.
So
I
think
we
got
to
the
we
got
to
that
reproduction.
A
Just
yesterday.
I
think,
and
so.
B: We added a workaround: if we flip the services to LoadBalancer and then back to ClusterIP, then everything kind of magically fixes itself.
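The workaround described, flipping a Service's type to LoadBalancer and back to ClusterIP so the policies get rewritten, could be scripted roughly as below. This is a sketch of the idea, not the speakers' actual tooling; the service name is a placeholder, and the patch bodies are standard `kubectl patch` strategic-merge payloads.

```python
import json

# Hypothetical service name; substitute the affected Linux service.
SVC = "linux-backend"

# Strategic-merge patches that flip spec.type out and back, which (per the
# discussion) nudges kube-proxy into rewriting the dropped policy.
patch_lb = json.dumps({"spec": {"type": "LoadBalancer"}})
patch_cip = json.dumps({"spec": {"type": "ClusterIP"}})

for patch in (patch_lb, patch_cip):
    # Printed rather than executed; run the emitted commands against a
    # live cluster where the connectivity loss is occurring.
    print(f"kubectl patch svc {SVC} -p '{patch}'")
```

This only papers over the symptom; the underlying bug is what the reproduction steps mentioned later are meant to pin down.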
B: The issue is, we have, on another demo, eight or nine services that spin up at the same time; some are Windows, some are Linux, and the Windows pods, pardon me, lose connectivity, or cannot connect to the Linux pods or the Linux services. So just by flipping those two Linux services from one type to the other... somehow, James was saying, we must rewrite something, because it seems to work around our issue.
B: You're Eastern? Perfect, okay. I'll give the 30-second version of what we do; James knows, he was part of helping us get here. We host dozens and dozens, actually hundreds, of AKS instances or deployments, which are the demos that our sales engineers use. For the past multiple months it's been daily, multiple times a day, and of course we're global.
B
So
I'm
eastern
by
the
time
europe
starts
spinning
up
instances
I'll
wake
up,
and
I
have
three
o'clock
in
the
morning
notifications
that
you
know
and
they're
custom
notifications.
But
every
five
minutes
we
get
a
nag
and
team
saying
the
demo
is
not
working,
so
we
go
in
and
we
just
literally
kill
our
linux.
B: ...pod. Just kill it, and it comes back up and everything magically appears, and that's been our life for the past four to six months. Alexander is working on this workaround, doing a job to monitor, and when the thing doesn't work it'll just kill the pod automatically. But we'd like to help, so...
B: Our contribution will be the steps to reproduce, in an environment in which it happens fairly consistently. If it happens on an instance that's not critical, we can reach out to the end user and say, "leave that one there, spin up a new one." We'll get nagged for a while, but I'll reach out to you guys and...
G: ...you know, whether it's a kube-proxy issue, right? Because if it was just a kube-proxy issue, it would be really, really easy to get somebody to work on this, because we have a developer environment. It's called sig-windows-dev-tools; you can clone it down and it'll spin up a Vagrant instance with...
B: Okay, sure, let me... I'll just check.
A: So the issue was initially created against AKS, but at this point I believe it's actually a kube-proxy issue. Jay, that's what I was trying to do: triage it down. And we've gotten to the point where I think we're close to being able to reproduce it in any environment, so...
G: Okay, all right, I'm going to ping you all in Slack.
A: So I guess I'll end it, and...