From YouTube: Kubernetes SIG Windows 20220816
A: All right, hello everybody, and welcome to the August 16, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. For anybody who's new, that largely boils down to: just be nice.
A: Let's get started with some announcements. The first announcement, which isn't on here, is that the target release date for 1.25 is next Tuesday, the 23rd, so that's coming up very quickly. I think we're past code and test freeze; I think docs freeze is today, or maybe that's just the enhancements-related doc freeze. There's not a whole lot of work left to do outside of the release team, but still, a heads up for everybody.
A: The next announcement is that KubeCon North America Contributor Summit registration has opened. This is an event the day before KubeCon, I think it's October 24th, the Monday, and it is open to active Kubernetes org members. I don't know if they have the schedule posted for this yet, but I attended the 2019 Contributor Summit in San Diego and found it to be valuable and worthwhile.
A: So if anybody's interested, feel free to sign up and take a look. If you're not an org member, we can see whether there's work to do between now and then to get somebody to be an org member, because I believe at this time it is only open to org members. If there are any questions, reach out; ContribEx on Slack is a good place.
A: You can reach out to SIG Windows too, and we can get some more information. The next announcement, and we can talk about this more in the agenda if people are interested, is that we have published and released a slim base image for HostProcess containers. All the source to build the image is available in this repository; we've published the image to MCR, and there's a link and a reference to the tag here. This repository has docs on how to consume it: how to use it in your build files and how to use it with BuildKit. If anybody has any questions about this, feel free to take a look, file issues in the repository, or ask about it in Slack. We have updated a number of the HostProcess containers that we're building in SIG Windows tools to be based on this, and yeah, looking forward to having people use this and try it out. We'll keep it at v0.1.0 for a while until we get a little bit of mileage on it, and then probably just bump it to v1 and see how it goes. Does anybody have any questions?
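(For anyone who wants to try it: consuming the base image looks roughly like the following. This is a minimal sketch; the MCR path and tag are based on the v0.1.0 release mentioned above, and the binary name is illustrative, so check the repository docs for the exact image reference and BuildKit instructions.)

```dockerfile
# Hypothetical HostProcess container build; verify the image path/tag
# against the repository docs before using.
FROM mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v0.1.0
COPY my-agent.exe /my-agent.exe
ENTRYPOINT ["my-agent.exe"]
```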
B: Hey Mark, by any chance, is this a scratch image, or does it still have a kernel in it?
A: This does not have a kernel in it. It's like a scratch image; you can't use FROM scratch directly, since scratch is kind of a special word in a lot of image builders. The image size is, I think, something like eight kilobytes, so it's much smaller than the other images, mostly because of how little it has in it.
D: Very useful. So Mark, I tried this out a while back, and I had asked about the requirements, what is actually looked for within a container like this, like the required directory structure for Windows containers. Is that now officially documented and stable?
C: There are a lot more requirements around process-isolated containers. I don't know if Danny's on the call; I don't see him. He might be able to tell you more about that. I think we put a bunch of blank files into this one just so that the container would actually be able to start up at the boot process, but we didn't fill those in with any of the requirements there would be for a process-isolated container. Does that answer the question?
D: Okay, yeah. I'm just curious; I'm talking from the perspective of writing code that builds these containers. I just want to make sure that we're including the blank files and directories that are required here. I think it's different from a Linux container image, right? They have the Files directory, which is like the root of the container, and then there are also the registry hives. I saw that present in how Moby was building these, so I'm just making sure: if we kind of follow this, is it going to break in future versions of Server 2022?
C: Yeah, so again, this isn't for process-isolated containers; for process-isolated containers you need to have certain hive files and other things in place. These are stub files that we just put there. They're empty, they don't have anything in them, and they are there so that the runtime can actually generate an image; from there, they're not used at all. So yes: this image already works on 2019 and 2022, and it will work through the next release.
A: All right, did those questions get answered? Let me see if I can share my screen again.
A: Okay, I guess next we can move on past the announcements. We can give some space if there's anybody who's new to the call, new contributors who want to say hi, ask any questions, or introduce themselves real quick; we can do that for a minute.
E: [inaudible]
A: Okay, sounds good. Anybody else?
B: Yeah, I just joined. I'm interested in everything Windows and OpenShift related, so I'm just here to listen.
A: Okay, sounds good. If anybody is interested, feel free to join the SIG Windows mailing list. That gets you write access to the doc, to the meeting notes and stuff, and if you need to add anything to the agenda, I think there's a link at the bottom too.
A: James beat me to it. Okay, I guess next we can go to the agenda. I'm not sure who added this, Brendan, and I saw this yesterday: the minimum configuration for Windows nodes. Is this related to what you were mentioning, Andrew?
B: Yeah Mark, I added it. Okay, so you remember we were having that chat last Thursday, I guess, and you were saying if you could get someone from the Red Hat team to speak. So I added this just to kick things off. Andrew was nice enough to come, and he said he has a small presentation to show us. So I'm going to pass the ball on to Andrew and we can take it from there.
A: Before we start: I think Brandon found this Windows container requirements doc, and I think the requirements documented there are quite low. So I think we'll take an action item to update, or at least review, those. But if anybody has more practical guidance, or disagrees with the recommendations in that document, we can definitely get it updated.
E: Okay, great, yeah, cool. So I'm in the early stages of testing with Windows worker nodes on AWS, and this is going to be a brief overview of one of the tests that we run from the perf team against any of the clouds, and the results that we're seeing for the Windows workers. I'll show one Linux worker and then the couple of runs that I've had with Windows workers. So, to start:
E: We call this test node density; it's just to test the kubelet maximum of 250 pods per node. For that we just use a simple pod, the Kubernetes pause pod, something that doesn't have any requirements and doesn't actually run any workload, but just consumes a pod slot on the kubelet. Our target is to have 250 pods per node, and these are all going to be m5.2xlarge instance types, which are 8 vCPU by 32 GiB.
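(For reference, each pause pod in this test is just a minimal spec along these lines. This is a sketch; the pod name, image tag, and node selector are illustrative.)

```yaml
# Minimal pause pod for the node-density test; names and tag are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: density-pause-0
spec:
  nodeSelector:
    kubernetes.io/os: windows   # land the pod on the Windows workers under test
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.6 # assumed tag; any recent pause image works
```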
E: The test normally runs with 27 worker nodes and will scale up to 120, 250, and so on. But for now, since I'm just starting out, I've only been working with two worker nodes, and then eight, just to see how it works based on what I've been seeing so far. So for reference, this is a Linux test; the CPU shown is the overall node CPU, along with memory.
E: The takeaways from here: idle stays above six of the eight cores, memory utilization is at max four gigs, and the user and system cycles are far below 100 percent, right around one core total. All 27 of these workers look about the same, and this is under the 250-pod load on each one of the workers.
E: So, to start with the two-node tests, I just started low, and I'm using the GCR Kubernetes pause image.
E: So, you know, we see a lot of activity in the privileged CPU cycles, and the idle restores to four or five cores after the scale-up. I only have one node here; the other node did look similar. And our memory is about half: it consumes about half of the available memory on the node just to run these 150 pause pods.
A: When you say pause pods, is that just a pod spec with one container in it that is the pause image? Because I think that would translate to two containers per pod then: one for the actual sandbox and one for the pause image.
E: Okay, yeah, definitely, yes. So that's the 150 scale; next, going to 200. You know, here our idle doesn't restore to a steady state. This is two runs back to back: that was scale up, then delete the namespace, then scale up again and let it run for a little bit longer. For me, the idle doesn't go above two cores, the privileged stays very high, and our memory is still at about half or less.
E: Another takeaway for me from this one is that after the delete, whatever memory is consumed by the kubelet or system processes stays there. The flat line in the idle before, and the flat line after the cleanup, is about four gigs, and that doesn't change after the subsequent tests. So we see, at least, that that memory stays the same.
E: But yeah, privileged CPU cycles are taking up a lot of CPU here. So this is just two nodes at 200 pods per node, and then this is the last one where it was stable, because if I go up to 240 pods per node, somewhere between 230 and 240 is when the kubelet stops being able to handle that many pods, and I start seeing nodes go NotReady. Sometimes they will recover, but other times they won't.
E: So this was a case where the CPUs were maxed out. For the two nodes that I was testing, the test cancelled itself, or, I'm sorry, I deleted the test. On the left you can see the baseline: the idle restores, and after the pods are cleaned up the node overall restores to idle. But the node on the right went NotReady, and it stayed NotReady.
E: While I cleaned up the test, the pods didn't get deleted, because the node didn't get the message, and the privileged CPU cycles were consuming everything on that node. So that's probably part of the issue, but I'm sure there's more to uncover there. I don't go any higher than that because it doesn't make sense.
E: I did do one test with eight nodes yesterday, and it seems like the number of nodes also changes the number of pods that each node can handle. I tried 200, which should be stable, and saw that one or two of the nodes did go NotReady, that kind of thing. So it seems to fluctuate.
E: This is all on OpenShift. (Do you know if that's Docker or containerd?) I think the Windows nodes are containerd. (Do you know what version?) Yeah, the Kubernetes version is 1.24. Okay.
C: And do you also collect disk metrics? Because I've seen something similar where you start spinning up pods and the CPU goes crazy while it's unpacking all the containers, and the disk kind of spikes up as well. So I was just wondering if you see something similar there too.
E: [inaudible]
A: Out of curiosity, on the node where it just kind of went NotReady and then the pods weren't able to be cleaned up: do you have the kubelet log? I'm curious whether you start seeing a lot of the CRI operations failing with context deadline exceeded.
E: Okay, I do have the kubelet logs for some of them; I'll look for that. Yeah, CRI operations, yeah.
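(For anyone following along, one way to check is to search the kubelet log on the affected node for those errors. A sketch assuming logs land under C:\var\log\kubelet; the actual path depends on how the node is provisioned.)

```powershell
# Hypothetical log path; adjust for your node's logging setup.
Select-String -Path 'C:\var\log\kubelet\*.log' -Pattern 'context deadline exceeded' |
    Select-Object -First 20
```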
E: Yeah, so I think I use the ClusterLoader QPS and burst of 20, so it does 20 operations at once; that should just be 20 pods per second, and they basically all get fired off within 10 seconds. That could be a test too: slowing down that QPS rate and seeing if it helps. I don't know if that's where you're going; sorry, yeah.
C: Yeah, exactly. With Windows containers, starting so many pods at once is significantly more CPU intensive, just because you're starting up, I don't know, seven or eight processes per pod, whereas with Linux you're only starting up that single process. So that will likely improve part of it, at least. And do you pre-pull the image to the disk, or is it a completely fresh node with no images cached?
E: All of these screenshots here were from the same nodes; I believe I was reusing the same nodes. So I think the second test wouldn't reflect that: the first test would be pulling, and then for the second it would already be pulled. But yeah, no, I'm not pre-fetching it before the first test.
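(As an aside, pre-pulling on a containerd node can be done with crictl, which separates image-pull cost from pod-start cost in runs like these. A sketch; the image reference is the assumed pause image from earlier.)

```powershell
# Pre-fetch the test image so the first run isn't also measuring pull time.
crictl pull k8s.gcr.io/pause:3.6
crictl images    # confirm the image is now cached
```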
A: Yeah. At least in the case where the node just completely went NotReady, I think we have a working theory of what's happening, but I haven't been able to confirm it and haven't really had a chance to fully investigate it. What we think is happening: one of the processes that James mentioned, when you start a container, is a shim process that gets started inside of the Windows Server silo, and containerd talks to that. So there's that process running in there.
A: The working theory is that when the machine gets starved for resources, that process running in the containers isn't able to get CPU time. It could happen in any number of different code paths, but the usual one is probably the get-container-stats kind of query that happens periodically from the kubelet: the kubelet asks the container runtime to give it stats for the containers, the runtime goes and tries to query all the stats for the containers, and at some point it makes a call to query information over that shim and doesn't get a response back. That just kind of sets off a cascading set of failures that come back up from there.
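(Incidentally, that same stats path can be exercised manually from the node with crictl, which may help confirm whether the shim stops responding under load. A sketch under that assumption, not a confirmed diagnostic.)

```powershell
# crictl stats queries container stats over CRI, the same general path the
# kubelet uses; if this hangs on a starved node, it supports the theory above.
crictl stats
```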
A: So I think we need to continue to investigate whether there's anything we can do to prevent that, among other things. But I'm also a little bit interested in why having more nodes in the cluster would lower the threshold at which that tipping point gets hit.
F: It's because, having more nodes, those, I don't know, 20 pods you mentioned that are being spawned over the 10 seconds are being spread out across all the nodes, which basically means the kubelets on the Windows nodes might not choke as much at any point in time.
E: Yeah, that's what I saw, but it was only one test with those eight nodes. I'm not going to remember the exact number; I think I only did 200, and I saw two nodes go NotReady, but they did bounce back. So it requires more attention, probably more runs than that; that was really kind of early information.
F: So, regarding the NotReady part, I might have an idea of why that's happening. I might be wrong, but this is what I think. Typically, how kubelet works: it pretty much detects whenever it has to do some sort of work; let's say it has to create new containers, new pods, and so on.
F: If the kubelet is quite busy spawning those pods, it might actually miss the periodic liveness check... sorry, I'm kind of having a brain freeze at the moment. If that happens, it basically means that the kubelet was actually too busy to send it, which then basically means that Kubernetes considers those nodes as NotReady, even though they are still working. Now, this didn't happen for multiple nodes, because per node there were fewer pods spawned at any one point in time, which meant the nodes finished faster, which meant the lifecycle events were being sent and kept alive, which meant they were still kept in a ready state.
F: That's my theory, and it's an assumption from how I know kubelet works.
C: That kind of brings up another point: do you run kubelet at normal process priority or above normal? I think that's one thing that we do on the Azure side.
A: I'd be interested if you could try this again. There's a kubelet flag; let me find the docs real quick. Ravi... actually, this is the one that Ravi added. I'd be curious whether the numbers look the same if you bump the process priority for the kubelet up higher. I think that might help with some of the issues that Claudia was describing where the kubelet might be getting starved, but I don't believe we have the same flag for containerd.
A: So if the issues are with... yeah, let me just find the link real quick.
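(For reference, this is likely the flag in question. A sketch assuming the kubelet's Windows priority-class option; check the kubelet reference docs for the exact flag name and accepted values in your release.)

```powershell
# Assumed flag name/value; raises the kubelet's Windows process priority so
# it keeps getting CPU time when the node is under load.
kubelet.exe --windows-priorityclass=ABOVE_NORMAL_PRIORITY_CLASS  # ...plus your usual flags
```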
E: Yes. So for me, we use certain measurements from this. Node density is one of a handful, a suite, of tests that we would run. So I think the overall goal is just getting numbers that we can compare to other clouds, so we can compare to the AWS Linux workers across the board and stack them up side by side.
D: Also, is there any way you'd be able to provide any sort of methodology, or a way to repro this? I'd want to try to reproduce this in our test infrastructure as well and see if we get similar results.
E: Okay, sure. I can share the test suites; they're out there on GitHub, so I can share those.
F: I have one small question: do those tests also make sure that all the pods are running and active, or do they just spawn them and then delete them?
E: Oh no, I did see that. So, I guess maybe two things. The loader itself doesn't actually check that they're ready; it just fires them off, because it's trying to achieve the desired number, the target number. But I have seen pods stay in Pending, and that was when I put it up to 250, right... I'm forgetting now.
E: I went to a certain level where there weren't any more pods: one of the nodes went NotReady before it was able to receive all the pods, so then there were some pods that were stuck because they wouldn't fit on the other one.
F: Regarding why they were stuck: is there any reason they were stuck? For example, in some scenarios, if you do a describe on them, it might include information like: couldn't schedule this pod because there's no schedulable node with enough resources for this pod.
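(For anyone checking stuck pods, the describe output would look roughly like this. A sketch with a hypothetical pod name; the exact event text varies by scheduler version.)

```powershell
kubectl describe pod density-pause-42
# Events (illustrative):
#   Warning  FailedScheduling  0/2 nodes are available:
#   1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod
#   didn't tolerate, 1 Insufficient pods.
```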
F: It might also be interesting... I'm asking this because, again, for the case in which the node went to the NotReady state: if all the pods ended up in the Running state, then that pretty much confirms that the kubelet didn't have enough time to actually send the keep-alive health check to the API server, which then basically meant it was put into the NotReady state.
E: Yeah, if there's any way to figure that out. I mean, to me, the fact that the privileged is so high... I'm assuming that the privileged cycles are where the kubelet, kube-proxy, and the other kube processes are running, so it would be good to see where that breakdown is and why that privileged time is so high, because this is a big discrepancy, right, between the idle cycles on a Windows worker compared to the Linux worker, where it stayed up around six cores.
F: Yeah, that would be great. I would like to take a look and see what exactly happened there; if there's some issue, we might actually want to take a closer look into it.
F: But yeah, if you can, can you leave a link to those in Slack later on or something? That would be fantastic.
F: Yeah, great, thank you. And can we have the link for this presentation? I think it might be useful; I think we can even include it in the SIG Windows notes for anyone who might want to take a look at those results as well.
B: Yeah, looks like both Mark and James have dropped, Claudia. And I think we're way over time; not sure if anybody is around even for doing the pairing today. So we could bring the meeting to a close.
F: Yeah, I think we can close the meeting as well. Thank you so much for joining, and we'll see you next week.