From YouTube: Kubernetes SIG Windows 20210119
A
Hi everybody, and welcome to the January 19th edition of the SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so please make sure you adhere to the CNCF code of conduct and standards. All right, let's get started today.
A
I thought we'd do something a little bit different and start off with some intros. We've had a fair amount of new folks show up, and I thought it'd be good to have anybody who wants to introduce themselves and maybe give a brief sentence about why you're interested and what you hope to get out of this SIG. I can start, then we can go to the tech leads, and then anybody else who wants to introduce themselves.
A
So I guess, feel free to raise your hands, or comment in Slack, or if you've commented in the chat we can get to you. I'll go ahead and start. My name is Mark Rossetti. I work at Microsoft on the Azure team and I'm the co-chair of SIG Windows right now (the only co-chair, since Michael Michael just recently left). I'm part of a team that focuses on open source container workloads, and we're under the Azure Kubernetes Service org. James?
B
Sure, so I'm James. I also work at Microsoft on the Azure team with Mark, and I guess I'm one of the tech leads. I help maintain some of the tests and do a lot of the Cluster API work.
C
Hey everybody, yeah, so I've been working a lot on the Windows stuff over here at VMware, getting it working with Antrea. Really the most important thing that I'm working on right now is making sure we have good parity and good signal across different CNIs for the e2e tests. Perry and I are working on a team together.
C
So if anybody's interested in the CNI story, especially with Antrea or Windows on Cluster API, we do a lot of testing in that area; feel free to reach out to us. Lately, containerd has been the big thing.
A
Yeah,
if
anybody
else
wants
to
introduce
yourselves,
I
guess
feel
free
and
go
ahead.
If
not,
we
can
get
on
to
the.
D
Okay, I just want to give a quick intro. My name is Arvind. I lead the Windows container effort at Red Hat under the OpenShift umbrella. I'm happy to be part of this team.
E
Hey, yeah, this is Maz Imam. I am the PM on the Windows team focused on Kubernetes. I have been part of SIG Windows for almost a year now, so glad to chat with all of you.
F
Hi, I'm Perry. I'm a CRE at VMware, primarily focused on some of our Windows support with our customers, and working a lot at the moment with Jay to try and get containerd and Antrea working and keep finding bugs and stuff like that. So yeah, happy to be here.
H
Hey everyone, my name is Brandon. I'm from Microsoft, working on the base kernel containers team. I work on a lot of the Windows Server container and client containers that Microsoft produces, like Windows Sandbox. I just recently joined this effort to work on privileged containers, and I'm excited to get to know everyone here.
I
Hi folks, David Justice from Microsoft. I've worked mostly on Cluster API, a little bit on AKS Engine, and I work with Mark and James a lot, so very nice to see everybody.
A
Great, thanks everybody. Hopefully this will also help folks; we can do this kind of regularly, and it will help folks know who to reach out to for different issues, since a lot of people have different areas of expertise. If there's nobody else, I'll keep going. Okay, the next thing on the agenda for today is some announcements, starting with KubeCon EU.
A
At every KubeCon there's a series of maintainer talks where different SIGs and working groups can give updates on what they've been working on, and occasionally also go into a little bit of a deep dive. For this upcoming KubeCon EU, each SIG has one 35-minute maintainer session slot, and typically we're asked to reserve 10 minutes for Q&A. If anybody has ideas, or anything interesting they'd like to present, reach out.
A
Please reach out to me or any of the tech leads and we'll make sure we get that added. For this session we were thinking of giving David a slot of around 10 or 15 minutes to talk about Windows networking, especially DSR support, which is pretty recent. I think a lot of folks are interested in that, both how it works and how to configure it.
A
So I think that's currently the plan, but feel free to reach out. The next announcement: February 9th is the enhancement freeze for 1.21, so anybody who's working on a KEP, please make sure you get it out and reviewed early so we can merge it by then. Also, as I mentioned last week: if anybody is working on a Windows-related KEP, please let me or the tech leads know, because we will need to relay that information to the release team.
A
These are the only two KEPs that I think we're tracking or actively working on progressing, but if I got that wrong, just reach out. I guess we could spend a couple of minutes going into some of these KEPs and asking folks for reviews. Arvind, do you want to start with your KEP?
D
Yeah, sure. I just opened a quick work-in-progress KEP. There are some more details that need to be filled out. I think Deep left a review over the weekend; I haven't had a chance to look at it. I was on PTO and just came back today.
D
The main thing that I'm still trying to work my way through is the design details for the kubectl command, which needs to be admin-only, and figuring out whether there is something existing that we could just use and pattern ourselves on, because the new command that we are adding, kubectl node-logs, should only be usable by admins. So that's the status of that.
A
Great. So I think we'll also need to take this to SIG Node. I've added it to the SIG Node weekly agenda, because scanning through the KEP there are updates to the kubelet in order to get the system event logs. So we'll take that, and I can help drive it and ask for reviewers there.
A
Yeah
so
there's
this
a
weekly
signing,
soda
meeting,
sorry
there's
a
weekly
signo
meeting
following
this
meeting
every
week
and
I've
added
it
to
the
agenda
they're
kind
of
in
the
same
state.
We
are
where
they're,
just
tracking
all
the
caps
and
everything
if
you
want
to
show
up
to
the
meeting
next.
That
would
be
great.
It's
on
the
community
calendar
as
well,
but
if.
D
Could you cover that for me, Mark? I know I have a conflict right after this meeting that I can't skip.
E
I just have a quick question here. A couple of things on this KEP: there is a lot of interest in it, but one thing I was wondering is, are we thinking about how we're going to filter? Because the event log is going to be huge, and we will need a way to filter the logs when they're coming out, right? We cannot just output everything.
D
Yeah, so on the OpenShift side, using oc, there is a way to filter, and I was trying to see if we could reuse that same method, but I'm not sure if it will cleanly map to what's happening in kubectl or in the kubelet.
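As a toy illustration of the kind of filtering being discussed (this does not reproduce the actual oc adm node-logs filter options), a substring filter over fetched log lines might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// filterLogLines keeps only the lines containing match. It is a
// stand-in for the provider/level filtering a node-logs command
// would need so that a full Windows event log dump stays manageable.
func filterLogLines(lines []string, match string) []string {
	var out []string
	for _, l := range lines {
		if strings.Contains(l, match) {
			out = append(out, l)
		}
	}
	return out
}

func main() {
	logs := []string{
		"Microsoft-Windows-Hyper-V: vm started",
		"kubelet: pod sandbox created",
		"containerd: image pulled",
	}
	fmt.Println(filterLogLines(logs, "kubelet"))
}
```

In practice the filtering would want to happen server-side, before the log data crosses the wire.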
E
I see. And when you say admins, is it only for admins? I mean, at the end of the day, anyone who is using a Kubernetes cluster should be able to do this, right? For example, GMSA has an immediate use: when you're setting up GMSA, if something is going wrong, you could look at the event log to understand what's going on or debug it. But when you say it's administrators only, I was a little bit confused.
D
Yeah, because the way we have been using this in OpenShift is as an admin-only feature: only admins can see the node logs. We don't want any user on the cluster to be able to see the logs. That's the restriction we've been having so far. I'm not sure if it's actually safe to give everybody access, but I'm willing to hear feedback. I mean, I'm not tied to this one way or the other as far as it being restricted to admins only.
A
So one of the things that I mentioned, and I kind of had a conversation with Arvind about this, is that another way to get some logs may be through privileged containers, once they land, hopefully in 1.21. But after talking with Arvind, I think the real benefit of this KEP is for when there are issues with the container runtime or kubelet, or the node configuration, and containers can't start. I think it might be very important to have some of this working in that case.
A
Just
where
cube
the
cube
is
running
and
can
relay
those
messages.
Maybe
a
middle
ground
here
is
to
have
to
restrict
the
cubelet
to
only
kind
of
the,
I
would
say,
like
systems,
critical
processes
or
any
process
is
needed
for
running
the
container
workloads
and
then
the
more
filterable
more
generic
event
log
story
could
come
through
privileged
containers
is
just
is
just
one
option,
but
we
I
currently
do
see
a
value
in
having
both
just
in
case,
especially
for
the
case
that
containers
can't
start.
C
Possibly tangential, Arvind, but I'm just wondering: I would be interested in just hacking on a thing that did what you're proposing and actually using it, right? Like making some kind of a way of just running it, using some kind of a CRD-type thing in a kubectl plugin, getting it working, and then, once it was working, maybe proposing that upstream. I mean, the KEP is also a good idea.
D
Correct, yeah. We have the same thing for Windows now in OpenShift, and as part of getting it into the product is when this came about: oh, we haven't upstreamed this yet and we need to upstream it. So we tried to upstream just the PR first, and then they said we need a KEP, so we added a KEP. The hacking is actually already done.
D
Yeah, well, we have a change in oc and then we have a change in the kubelet and in kubectl. So that's the change that's missing, actually. If there's any hacking that needs to be done, I would think it's taking the changes that we did for oc adm node-logs and moving them into kubectl, yeah.
G
Yeah, to be clear, what I want to add is, I think for the KEP we also need to add SIG CLI as one of the reviewers and then get some information from them regarding how we can limit it to a cluster-admin user.
G
They have these kubectl plugins too, through which we can add certain commands or binaries that we can use. So I think it will be beneficial to include the SIG CLI team and get feedback from them.
A
I think we're getting pretty close to pursuing implementation, so I've opened updates with what's needed for that. The biggest change, for anybody who was following this before, is that we are going to limit the different scenarios we support on the networking side: privileged containers, at least with this current proposal, will always be joined to the host network and will not be able to be joined to pod networks. We are also going to limit privileged containers so that all of the containers in a single pod must either be privileged or non-privileged; there's not going to be any mixing of the two, due to some technical limitations in the Windows OS.
A
There
still
are
plans
to
pursue
joining
privileged
containers
to
pod
networks,
but
that
will
likely
come
as
a
separate
cap
as
there's
still
more
kind
of
investigations
on
the
windows
os
side
that
are
needed
there
and
with
this
one.
This
is
a
sig
windows,
news
review,
signode,
and
we
are
going
to
take
this
to
sig
api
too,
because
we're
proposing
some
api
changes
here
too
so
I'll
be
helping
to
drive
that,
since
amber,
is
no
longer
working
on
this.
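The two constraints just described (no mixing of privileged and non-privileged containers in one pod, and privileged pods always joined to the host network) could be checked roughly as follows. This is a sketch over a trimmed-down pod model, not the actual API-server validation code:

```go
package main

import "fmt"

// Container is a trimmed-down stand-in for the Kubernetes container
// spec, just enough to illustrate the proposal's constraints.
type Container struct {
	Name       string
	Privileged bool
}

// validatePrivilegedPod enforces the two constraints from the proposal
// sketch: every container in the pod must agree on privileged vs
// non-privileged, and a privileged pod must use the host network.
func validatePrivilegedPod(containers []Container, hostNetwork bool) error {
	if len(containers) == 0 {
		return nil
	}
	want := containers[0].Privileged
	for _, c := range containers[1:] {
		if c.Privileged != want {
			return fmt.Errorf("container %q mixes privileged and non-privileged in one pod", c.Name)
		}
	}
	if want && !hostNetwork {
		return fmt.Errorf("privileged pods must be joined to the host network")
	}
	return nil
}

func main() {
	// A pod mixing privileged and non-privileged containers is rejected.
	err := validatePrivilegedPod([]Container{{"a", true}, {"b", false}}, true)
	fmt.Println(err)
}
```

The all-or-nothing rule mirrors the Windows OS limitation mentioned above; on Linux, per-container privileged settings can mix freely.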
A
If
you
have
any
questions
or
comments,
feel
free
to
comment
on
the
pull
request,
and
hopefully
we'll
have
be
able
to
work
towards
implementing
this
pretty
soon
dimms,
I
think
you're.
Next,
with
this,
do
you
want
to.
L
Hi, I just wanted to set the stage about one of the other KEPs SIG Node is driving, which is to deprecate dockershim and move to containerd over a period of time. So there are like four classes of CI jobs, and I logged an issue for one of them, which is the pre-submit jobs.
L
So
we
have
pre-summit
jobs,
we
have
release
blocking
jobs,
release
informing
jobs
and
the
last
one
is
the
node
conformance
jobs
right.
I
I
know
that
you
have
already
have
some
ci
jobs
in
the
node
conformance.
I
was
wondering
if
we
could
like
make
copies
of
the
docker-based
jobs
and
basically
start
getting
it
green.
You
know,
at
least
it
will
help
other
people
who
want
to
pitch
in
give
them
a
head
start
by
at
least
starting
with
the
police
on
the
job.
L
The
new
job
doesn't
need
to
be
pre-summit
out
of
the
gate
it.
It
can
be
just
a
job
that
can
be
started
on
demand.
So
once
we
get
some
confidence,
then
we
can
turn
it
into
a
pre-summit
job
so
that
that
was
the
thought
process
that
led
me
to
create
this
issue
and
that's
the
context.
Any
thoughts
please.
A
Yeah,
I
think
that
this
is
something
great
to
work
on.
I
see
I
actually
forgot
that
I
commented
on
this
in
december.
It
was
kind
of
a
long
month
between
now
and
then
yes,
I
think
we
want
to
do
all
of
that.
We
already
have
some
jobs
that
can
be
triggered
on
pull
requests
that
do
test
container
d.
A
I
think
we
do
need
to
add
these
azure
disk
as
your
file
jobs,
and
I
I
believe
we
already
have
those
running
as
periodic
jobs
I'll
follow
up
kind
of
after
this
meeting
in
terms
of
kind
of
the
right
right
now.
Sig
windows
only
has
periodic
jobs
and
jobs
that
must
be
triggered
manually,
and
I
think
we'd
like
to
work
towards
eventually
having
some
at
least
pr-informing
jobs,
but
this
is.
This
is
a
great
step.
B
If
I'll
jump
in
for
the
pr
informing
we
have,
I
think
the
major
blocker
for
us
there
is
the
pr
sorry,
the
pre-submit,
the
pre-submit
jobs.
They
all
run
docker
and
docker
via
kind,
and
we
don't
have
that
support
from
a
windows
perspective
and
it'd-
probably
never
be
there.
So
that's
a
one
of
the
big
blockers
and
there's
an
open
issue
that
I
created
a
while
back
that
kind
of
tracks.
Some
of
that
I'll
drop
it
in
here.
B
But
I
don't
know
tim's.
If
you
have
any
thoughts
on,
I
think
we
kind
of
went
back
and
forth
and
one
suggestion
was
maybe
if
we
were
to
get
some
sort
of
vm
based
solution,
so
that
we
didn't
have
to
spin
up
cloud-based
resources
to
be
able
to
run
this.
That
would
that
might
be
a
possible
thing,
but
there's
even
bigger
blocker
around
being
able
to
do
cross,
compile
for
the
windows
image,
the
windows
binaries.
So
right
now
on
pre-submit,
I
believe
cross-compile
for
windows
isn't
actually
enabled
it
just
does
type
checking.
L
So
I
can
help
with
some
of
that
change,
so
I
just
hit
me
up
with
the
issues
that
you
have
created,
but
as
a
as
a
side
note,
the
main
thing
that
I'm
looking
for
is
as
long
as
it
runs
the
same
set
of
tests.
I
I
don't
care
exactly
which
configuration
it
is
running
in,
so
that's
the
only
thing
that
we
need
to
make
sure
like
you
know,
apple's
travels
comparison
right,
so
we
don't
break
anything.
L
But
yes,
I'm
willing
to
pitch
in
with
the
cross-compelling
problems
or
you
know
getting
hands-on.
Some
ways
of
you
know
running
these
tests
in
other
environments
is
so,
let's,
let's
get
the
first
one
in
you
know,
bible,
core
crook
and
then
we'll
be
able
to
out
of
that
we'll
be
able
to
figure
out.
What
do
we
need
to
do
for
the
rest
of
them.
Right
sounds
good.
G
Yeah, so I just want to add one thing related to that. When I started working on it, the thing that I noticed is that the resistance was mostly related to the time it takes for spinning up the VM and then running tests on it. Kind is much faster, in the sense that we are not spinning up a VM and then having the container runtime and all those things set up on it.
G
One
of
the
things
that
we
were
pushing
on
our
end
was
to
ensure
that
all
the
tests
go
green
and
for
the
most
part
they
have
been
going
green,
especially
with
some
of
the
release
informing
jobs,
and
we
wanted
to
make
one
of
them
as
a
blocker.
But
what
we
noticed
is
setting
up
the
vm
and
all
those
things
are
going
to
take
some
time.
Is
it
okay
to
have
like
set
up
a
vm
and
then
run
those
type
of
jobs
being
a
blocker
for
every
pr.
L
For release-blocking and release-informing jobs we are way more lenient in terms of how much time they take to run compared to pre-submit jobs, because obviously, when it comes to crunch time, pre-submits are the ones that stop everybody in their tracks, right? That's why we try to have some guidelines on the job. Right now, for the specific job identified in this issue, we already have a docker-based one; all we need to do is flip it to containerd, so yeah.
L
No problem, sounds good, Ravi. I can help with that if you can. So essentially, hit me up with all the previous issues, please, and we'll figure it out one way or another.

G
Got it, thank you. And Mark, you can add me as one of the people who can help with this.
A
Sure
yeah
I'll
have
to
I'll
have
to
check.
We,
I
think
we
do
already
have
pre-submit
jobs
or
periodic
jobs.
We
might
just
need
to
update
the
config
to
also
allow
this
to
be
scheduled
on
prs.
Hopefully,
okay,
that's
all
yeah.
We
have.
A
So, Ernest, who's on the call: I didn't hear him introduce himself, but Ernest is able to understand what's going on and manages most of the test configs, especially for some of the Azure-related jobs, and he has been great at helping out with Windows. If anybody else wants to help out as well, though, go volunteer. I've been pretty hands-on with that recently too, and I know Adelina, who's also on the call, helped bootstrap most of that work recently.
C
Is there one? I mean, not that we have to have an official title for this or anything, but if there was one person who has worked the most on it, who would it be? I know Adelina was working on the standards for the new conformance tests; who's working the most on it now?
A
I'd probably say Ernest. I hate to volunteer him, but I know he's pretty...

M
I can, and yeah, just to quickly introduce myself: I'm Ernest, and I work most closely with Mark and James on all the test infra needed to run all the tests on Windows. So yeah, feel free to contact me on Slack or ping me on GitHub.
A
And unfortunately, I need to drop for the SIG Node meeting. If you all want to continue, feel free to do so. I believe the Zoom meeting won't close once I leave, but if it does, we'll have to change that for next week. See you all.
B
Real quick, I just wanted to give everybody a heads-up that the tests broke over the weekend, because the agnhost image, and I think maybe one or two other images, didn't get promoted and didn't get pushed to our Windows repository.
B
So Ernest pushed those this morning, and we should see the next runs come back online. This is in progress: Claudio and Adelina have both done a lot of work on getting this promotion process automated, and some of the images are now being promoted, not all of them, so that's still in progress. Once it's done, these kinds of breaks won't happen in the future. That's all I had there.
C
It's actually related to yours. We are seeing some interesting stuff with agnhost. I got all the network policy stuff working on Windows last week, and Perry was generous enough to give me a cluster, on some flaky nested-virtualization hardware, that was running Antrea and containerd. So there's a lot going on here, and it didn't look very happy when we ran those policy tests.
C
I then went over to an EKS cluster that had Calico running and ran the same tests there, and the test suite finished, but most of the network policy tests failed. So I'm kind of in this state where I'm like: okay, I got the tests working, but with Antrea on containerd I think the issue is containerd, not Antrea. I'm seeing what looks like processes being leaked on Windows.
C
I get these agnhost processes that just build up; I'm running 2.2.1. And when I run the network policy tests on Calico-plus-docker EKS clusters, I see the pods running, but I get a different set of problems, which is that most of them fail. So I'm wondering: what's the gold standard for network policies on Windows? If we had to pick some configuration, I don't care what it is, which one would be the best to test against as a baseline for regression? Because I don't want to file 21 issues against every single CNI provider. Has anybody else even been thinking about this or working on it? If not, I can try to start figuring it out.
C
Let me see, it's 98123, Claudio; I'm going to put the link in here. Ask the last part of your question again, because the part of my brain that searches for URLs is different from the part that listens.
N
I
was
asking
if
those
containers
or
parts
would
actually
get
leaked
because
there
shouldn't
be
after
the
test
ends.
The
parts
should
have
been
deleted
and,
of
course,
the
containers
as
well
yeah.
Those
shouldn't
have
existed.
Oh.
F
I'll explain what was happening. Basically, what seemed to happen was that the pods would be created for the test to run, and then, when one of the containers was being created on containerd, it was trying to write to the logs directory and failed to write to it. It then tried to clean itself up, and while it was trying to clean itself up it was getting access denied, which I'm guessing is because it was trying to write at the same time as trying to delete.
F
So then, when you were cleaning up the jobs at the end of it, you would be left with this pod that was in a can't-delete, can't-create state, and it would hang. You'd run the test again and basically get the same scenario: a pod that just hangs in this state.
N
Is it the agnhost image itself? No, it's not; I think it's something to do with the process being run by itself. I actually had no idea that we supported network policies, so I haven't really tested this before.
C
I don't know whether we support it or not; it's the CNIs that support network policies on Windows, and Calico and Antrea both ostensibly do. I'm going to propose, as a hypothesis that I hope can be disproven soon, that nobody has actually confirmed that network policies work on Windows in a comprehensive way, not even close to what's done on Linux.

N
No, I wouldn't think that they have. I would also like some kubelet logs for those leaks; I might have some interest in that. If not, I can try to run those tests as well and see it happening myself.
N
Some of the time; that's the weird part. I have a question: I assume you have a Linux node as well, for the master. Is it, by any chance, cordoned or, how do they call it, tainted?
C
Yeah,
it's
fainted
yeah
yeah.
It's
it's.
Definitely
it's
definitely
a
container
d
thing.
It's
definitely
some
kind
of
weird
log
collection,
kubelet,
container
d
level
thing.
It's
very,
very
weird.
N
Yeah
I
was
trying.
I
was
trying
to
to
make
sure
that
those
times
that
it
passes,
as
you
say
there,
the
pod
doesn't
actually
get
spawned
on
a
linux
node.
So
of
course
it
passes.
Oh.
C
Yeah, so what I did for my patch to the network policy tests is I actually set it up so that it takes node-os-distro as the argument. It reads that node-os-distro, and if it's windows, it runs all of the containers on Windows. That's the way I'm running the tests, which is debatable whether it's the right thing to do.
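A rough sketch of what pinning every test pod to one OS via a node-os-distro-style flag amounts to; the helper name is hypothetical, but kubernetes.io/os is the standard node label used for this kind of scheduling:

```go
package main

import "fmt"

// nodeSelectorFor returns the node selector a test helper might apply
// so that all test pods land on nodes of the requested OS, mirroring
// the node-os-distro flag described above. kubernetes.io/os is the
// standard well-known node label.
func nodeSelectorFor(distro string) map[string]string {
	if distro == "windows" {
		return map[string]string{"kubernetes.io/os": "windows"}
	}
	return map[string]string{"kubernetes.io/os": "linux"}
}

func main() {
	fmt.Println(nodeSelectorFor("windows"))
}
```

With the selector on every test pod, a passing run can't be explained by pods silently landing on the Linux control-plane node.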
C
Yeah, these are using a label selector, and I just delete any other stuff that interferes with it, but that's definitely not the cause of it. I mean, it's a good hypothesis, but the issue is definitely not a scheduling issue. It's not like we've got half the stuff passing and half the stuff failing because it's being split across nodes; everything is Windows. It's a very clean test, and it's only probing from a Windows pod into a Windows pod.
F
I think the thing that seemed to be the problem was the locking up of the pod, and I think that ultimately was the reason everything crashed and failed, but I could be wrong. That was the kind of behavior that we were seeing. I'm quite happy to go through it if we want to try and reproduce it.