From YouTube: Kubernetes SIG Windows 20220125
A
We have a pretty light agenda today. I don't see any announcements; if anybody has any, feel free to interrupt or post them in the chat. I did want to do a quick summary of the enhancements for 1.24. I believe that all of the enhancements we're tracking are all good: we have all the KEP updates that were needed merged, and all the PRs or reviews ready to go. I checked this morning and I didn't see any issues with that. If anybody does have any concerns, please find me or any of the leads and we'll make sure to get those enhancements tracked for this release. Does anybody have anything else they'd like to announce or share? If not, we can hand it over to Amim for a demo of kube-proxy next gen.
B
Yeah, just a heads up that there'll be a gMSA chart coming, hopefully here soon, so there'll be a PR for that. I'm going to get a gMSA chart for at least doing the deployment, and then on the Rancher side we're working on a chart that's going to include the CRD and stuff like that. I don't know how much of that we want there, but we can make some changes and stuff and I'll push that back upstream.
B
The image? No, there's still some stuff I've got to do, but I got pulled off of focusing on the cloudbuild YAML. I have a rough one in place; I just need to test it and work through any odd things there, but I think I've got most things handled. Okay.
A
Okay, thanks for that. Is there still an issue for that? I think there was an issue created a long time ago for having a Helm chart for all that deployment stuff, so that'll definitely be appreciated.
A
All right, Amim, do you want to share your screen?
D
Okay, let's see... okay, so I don't have slides, but I opened some tabs here to give a kind of overview of the idea in this presentation. So this started with Paris some months ago, like in August, when he and Jay added the announcement of the deprecation of the userspace proxy on both Linux and Windows.
D
So by the deprecation policy, we have one release, or six months, to remove the code from upstream after the deprecation happens. So before totally removing the code, we started to migrate it to KPNG, that's the next generation of kube-proxy, so we now have an alternative place to run this userspace proxy if users are still interested in running it, even though the code is deprecated and not supported anymore. I think the team thinks it's a good idea to keep having this in another place.
D
And since this is the next generation, it's good to have it there.
D
And so basically I wrote a blog post here, and I put the link, if you guys want to learn more about how this KPNG works on Windows and what the steps are to compile and run it on Windows.
D
So, the summary here, and I'll be brief and go to the demo: we have a Dockerfile where you put both binaries. We have the core KPNG, that's the brain, the central part of KPNG, the server that listens for events and talks to the backends via gRPC. And then you have the other part, the backend, the implementation that creates the load balancer and binds the real services and socket communication.
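
A hypothetical sketch of that split, just to make the shape concrete (this is not KPNG's actual API, only an illustration of a core streaming service and endpoint state to a pluggable backend; all names here are invented):

```go
// Hypothetical sketch of the core/backend split described above; the real
// KPNG gRPC API differs. The core watches the API server and streams
// updates like these to each backend over gRPC.
package kpngsketch

// ServiceEndpoints is a simplified view of one streamed update.
type ServiceEndpoints struct {
	ServiceName string
	ClusterIP   string
	NodePort    int32
	EndpointIPs []string
}

// Backend is what a platform-specific implementation (for example a
// Windows userspace backend) would provide: it turns each update into
// real load balancers and bound sockets on the node.
type Backend interface {
	SetService(s ServiceEndpoints) error
	DeleteService(name string) error
	Sync() error // called once a batch of updates has been applied
}
```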
D
I stole from James' Calico HostProcess work, and things like that. So if you guys are interested in how to run this, there's the sig-windows-tools repo that has examples of Dockerfiles as well, with more details. But there's nothing fancy here: it's a Nano Server image with PowerShell and the files copied inside of it.
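
A minimal sketch of the kind of Dockerfile being described (the base image tag, binary names, and paths are assumptions, not the exact ones from the demo):

```dockerfile
# Illustrative only: a Nano Server image with PowerShell, carrying the two
# KPNG binaries described above (real names/paths in the demo may differ).
FROM mcr.microsoft.com/powershell:lts-nanoserver-1809

# Core KPNG server: watches the API server, streams events over gRPC.
COPY kpng.exe /kpng/kpng.exe
# Windows userspace backend: creates load balancers / binds sockets.
COPY kpng-backend.exe /kpng/kpng-backend.exe

ENTRYPOINT ["pwsh"]
```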
D
We have two ways to run these. The first, my first try, was with NSSM, so I ran both binaries as a service, and the communication and everything was working. So I thought, well, let's give another try and go with HostProcess containers, and that's what I'm going to demo right now; I think HostProcess looked like a cleaner way to do that. I am using...
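
For reference, the NSSM route looks roughly like this (service names and install paths are hypothetical, and each binary would also need its real command-line arguments):

```powershell
# Hypothetical sketch: wrap both KPNG binaries as Windows services
# with NSSM (install paths and service names are illustrative).
nssm install kpng "C:\kpng\kpng.exe"
nssm install kpng-backend "C:\kpng\kpng-backend.exe"
nssm start kpng
nssm start kpng-backend
```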
D
Check it out: so now we have this.
D
No, no, sorry! That's the control plane; the node is a Windows node. I'm logged in on the control plane node; this is a Linux node.
D
Sorry, sorry if I was not clear, but this is the control plane, the Linux box; on the Windows node, win-devtools, we have containerd 1.6.0. The pod we run here is a regular whoami pod on Windows; this is something that spits out some information about the environment this container is running in. And we have one service here.
D
The first thing I'm going to show, and the only thing I'm going to show, is NodePort, because I'm using Antrea, and Antrea has its own implementation of the proxy; in this version only NodePort is using kube-proxy. So this cluster is already using AntreaProxy, and this will work only for NodePort. In this scenario, that's enough to show the state.
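
For context, the Service in front of the whoami pods is just a regular NodePort Service; a minimal sketch (the name, selector, and ports are assumptions, not the demo's exact values):

```yaml
# Illustrative NodePort Service of the kind exercised in the demo
# (name, selector, and ports are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: whoami-windows
spec:
  type: NodePort
  selector:
    app: whoami-windows
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```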
D
We can see the load balancing happening: .17 and .15 here, and .18, so the load balancing is happening right now. What's running this is the KPNG Windows DaemonSet that I'm using here.
D
What do we have in this DaemonSet? We have the enablement of HostProcess here in the securityContext, and hostNetwork, and both binaries set up and running. So basically, the first one is the backend, and the second one is the core KPNG that's listening, connecting to the API server watch via this kube-proxy kubeconfig configuration.
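
A trimmed sketch of what such a HostProcess DaemonSet looks like (the image name, commands, and labels are illustrative; the manifest in the demo has more detail):

```yaml
# Illustrative HostProcess DaemonSet in the spirit of the demo
# (image name, commands, and labels are assumptions).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kpng-windows
spec:
  selector:
    matchLabels: {app: kpng-windows}
  template:
    metadata:
      labels: {app: kpng-windows}
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      hostNetwork: true
      securityContext:
        windowsOptions:
          hostProcess: true
          runAsUserName: "NT AUTHORITY\\SYSTEM"
      containers:
      - name: kpng-backend   # the backend: creates load balancers / binds sockets
        image: example.com/kpng-windows:latest
        command: ["kpng-backend.exe"]
      - name: kpng           # the core: watches the API server
        image: example.com/kpng-windows:latest
        command: ["kpng.exe"]
```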
C
I don't know, yeah. So, are we also going to be moving the winkernel proxy into KPNG, and what are the timelines for those things?
G
Yeah, we're definitely going to do that; we're ramping up on it. I'm expecting that probably within a month we'll have it working in the code base one way or another. Doug's been doing some prototyping with it, and probably after a week or two me and Amim will probably start helping him, and also looking at the other ones. Like, we were able to port the iptables one over in about three weeks, and that's a much more complicated code base.
G
You know, we're still in a phase where we're porting everything over in parallel, but once that happens, then that timer will start. I think that's when the timer starts, because once that happens, we know that IPVS and everything else will come along, because iptables is by far the hardest one: it's the hardest to debug and it's the most critical, because that's what everybody uses, right?
G
So yeah, but I think 1.24 is when we really start having this conversation, and then it becomes a really tricky conversation, or an interesting one, because we can go one of two ways. We can say that, from a Windows perspective, we're better off not running the kube-proxy watch on all of our Windows nodes, so Windows can have less...
G
If we want, we could say that we could have a bespoke KPNG-based Windows proxy that has Windows-specific kernel options and that only has one watch on the API server, and it really improves the Windows user experience in a lot of ways. Or we could wait for this whole thing to go through the whole Kubernetes machinery and land in-tree. So in some ways this could be our decision as a SIG.
G
If we want to do that, or it could be something that we wait for SIG Network to fully formalize.
G
I think, you know, we don't have any timelines or targets or anything like that; we're just going as fast as we can here, right? So that's it. I mean, if I was to guess what I would say...
G
Then there's probably going to be two months of us carrying water back and forth between SIG Testing and SIG Release to figure out what the hell we have to do to actually make this an officially versioned thing, right? So I'm thinking, like, two months of that and then two months of the other thing, and then we'll probably be at an alpha or something, and wherever that falls within the release timeline, I have no idea.
G
But if somebody wants to help co-own that with us and really go on that long journey with us, we're totally open to that. Or on the Linux side, if folks want, we have places folks can slot in also. I'll warn new contributors, though: this is all very long, like, anything you do will be weeks and weeks to get it working, and there's a lot to learn. So yeah, it's a really, really good time to get involved.
D
So, Mark, just to not say I'm lying on this thing: the Windows node is back again, and then we can get our load balancer back. Thanks, Mark.
A
Yeah, no problem. I'll reach out to Danny and see if we should make an issue in containerd or in Kubernetes for that. I've seen that, like, once or twice, and it was usually when I had imagePullPolicy set to Always for testing, so I'll try to repro that again and get that done. Oh, but yeah, thanks for the demo, thanks for all of that information, folks. Does anybody have any other questions for Jay, or me, or anybody?
C
I don't have anything to share, but... oh.
C
All right, I'll go ahead, sorry. Yeah, so I started a thread in Slack, but I thought maybe we could just quickly discuss it. So we have the Windows Server CI running against Cluster API for Azure, and there's a couple of failing tests. There's one for NodePorts, and we're aware of an issue in the OS for those. The other one that's failing consistently is the Windows Server 2022 private image test, and that's because it's not in the GCR, and it sounded like you had to open up a PR to get this enabled. But I was wondering, what are the next steps on getting this private image in place for Windows Server 2022, or should we be taking a different route?
E
I was waiting for some news from Aaron and Benjamin... sorry, Ben Elder, regarding how to create those. I should ping them again regarding that, but it's been some time. One of the questions that remained regarding those images is whether it's a valid scenario to be tested in the first place. The other private images that you mentioned, from what I saw...
E
They mainly try to do a couple of deployments and see how they work on, what was it, GCE, I think, and it's mostly specific to GCE itself. So in most cases it doesn't affect other test runs, but from what I saw, nothing about that really matters...
E
...the fact that it is a private image, so at the very least those can be deprecated, so to say. But for the test that basically pulls from a private registry, the entire scenario is that you can basically configure a config map and then provide that config map to the pod and the kubelet, so the kubelet can use that config map to authenticate to the registry.
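
In stock Kubernetes the usual shape of this is a docker-registry Secret referenced from the pod's imagePullSecrets; a minimal sketch (the registry URL, names, and credentials are placeholders):

```yaml
# Illustrative private-registry pull setup (registry, names, and
# credentials are placeholders; the data value is a base64-encoded
# Docker config containing the registry credentials).
apiVersion: v1
kind: Secret
metadata:
  name: private-registry-creds
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
---
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  nodeSelector:
    kubernetes.io/os: windows
  imagePullSecrets:
  - name: private-registry-creds
  containers:
  - name: test
    image: registry.example.com/private/test-image:latest
```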
E
Basically, you know, you can use some Docker credentials to authenticate to a registry, right?
E
Sorry, but yeah, that's basically what we need. I'll ping Ben again on that request I had opened and see if they have any new ideas regarding that, but...
E
I basically had to ping the infra people for a couple of months to make them build my images and push them there, and after six months I have to do it again and again. After that, we basically just said: why not create our own Docker Hub registry, just use that as a config option in the e2e tests, and use them for our tests, because mostly we are interested in that test.
E
With that private registry we can use that, and we do have in our prow jobs a label which basically will pass in those private Docker credentials and register them into Kubernetes as a secret, which will then be used as pull secrets.
E
We were already doing that for a couple of tests before, and I was pretty sure that we had already built and pushed the image for Windows Server 2022, unless something huge changed and those versions don't really match exactly. Something like what happened a couple of years ago, when there was a huge change in the 2018 images and you basically had to rebuild all the images after you updated the Windows Server, if you remember that thing. Unless something like that happened again, but...
C
Okay, yeah, and I assume I have to update the repo list to point to the...
E
Yes, in our repo list we still have one entry for the e2e private registry; that's the only thing that we actually need.
J
All right, thanks everybody, I think we're at the top of the hour now, so... yeah. I have one quick question before we wrap it up: yesterday we had that...
A
James or Jay, could you... yeah, just cancel the recording when that's over. Yeah, all right, thanks. Sorry.
K
Are you carrying on about the CSI proxy? Yesterday we had a CSI proxy meeting, and there was a discussion about the high CPU usage when a PowerShell process gets triggered, and I'm assuming maybe other components also might run into the same problem. So we were discussing how we can avoid that.
K
Is there any alternative anybody is aware of?
C
Any time we shell out in a tight loop to PowerShell, it's going to cause high CPU usage, and so in a couple of places in the kubelet we've rewritten those to use the syscalls directly, and that's proven to resolve that.
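
As a rough illustration of that kind of rewrite (this is not the actual kubelet code; the query, paths, and names are just examples): the same piece of information fetched by shelling out to PowerShell versus calling the Win32 API directly from Go.

```go
//go:build windows

// Illustrative comparison (not the actual kubelet change): querying free
// disk space by shelling out to PowerShell versus calling the Win32 API
// directly. The direct call avoids spawning a process on every iteration.
package main

import (
	"fmt"
	"os/exec"
	"unsafe"

	"golang.org/x/sys/windows"
)

var (
	kernel32            = windows.NewLazySystemDLL("kernel32.dll")
	getDiskFreeSpaceExW = kernel32.NewProc("GetDiskFreeSpaceExW")
)

// Expensive in a tight loop: every call boots a new PowerShell process.
func freeBytesViaPowerShell(drive string) (string, error) {
	out, err := exec.Command("powershell", "-NoProfile", "-Command",
		fmt.Sprintf("(Get-PSDrive %s).Free", drive)).Output()
	return string(out), err
}

// Cheap: a direct Win32 call, no process creation.
func freeBytesViaSyscall(path string) (uint64, error) {
	p, err := windows.UTF16PtrFromString(path)
	if err != nil {
		return 0, err
	}
	var free, total, totalFree uint64
	r, _, callErr := getDiskFreeSpaceExW.Call(
		uintptr(unsafe.Pointer(p)),
		uintptr(unsafe.Pointer(&free)),
		uintptr(unsafe.Pointer(&total)),
		uintptr(unsafe.Pointer(&totalFree)),
	)
	if r == 0 { // a zero return value means the call failed
		return 0, callErr
	}
	return free, nil
}

func main() {
	if free, err := freeBytesViaSyscall(`C:\`); err == nil {
		fmt.Println("free bytes:", free)
	}
}
```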
K
Okay, how about... do you guys by chance use the WMI-based ones?
C
Oh yeah, so if you're going to, like, shell out to WMIC, that's also going to cause issues, because process creation in Windows is fairly intensive, and so any time you're doing that in a tight loop it's going to cause issues. The best thing to do is to find the Windows syscalls and then re-implement whatever it is that you're doing with that Windows syscall. I know there was a WMI library that Claudio was looking at introducing at one point, and I'm not sure...
K
Okay, because I think, from the CSI side, they're finding it difficult to find the right system calls.
B
I posted the link to the hcsshim library that Microsoft has for Go, and it's got some good examples of making syscalls, and there's lots of things in there that you can kind of gather from that. I know that some of us on my team, anyway, have discussed the high CPU usage and the PowerShell stuff, so if we can find a little bit of bandwidth, we might take a look and make some suggestions.
K
Okay, would you guys be able to attend the CSI proxy call? I think that will help bridge the gap, so that the CSI proxy forum will get the right input, I guess.
K
Cool. Probably I will discuss that in the CSI proxy channel, and then probably we will try to loop back.
C
Yeah, no, yeah, I mean, you'll see a little spike, but it's not a huge issue. It's when you're continuously calling PowerShell in a loop, that's when we've seen issues with that. So...
C
I don't know specifically, you'd have to measure it. For instance, in one case every time a container was created we were shelling out to WMI to get some value, and we re-implemented that in Go, and we cut that call down 500 milliseconds: it went from 500 milliseconds consistently to zero. So that's an example of where doing it in a continuous loop causes problems.
B
Yeah, I mean, there's the PowerShell boot-up, right? PowerShell is .NET, so you're getting that overhead there, and then WMI has historically always been really slow, so PowerShell WMI calls are just compounding the issue.
C
Yeah, so go ahead, and if you have an issue or a specific line of code that you're trying to re-implement, if you put it into #sig-windows we'll see if we can come up with a solution.
M
Probably about the tight loop: so the issue was raised by VMware, and they are the ones that are doing that mounting-volumes operation in a tight loop, so yeah. I don't think the issue is within PowerShell, or what's run by PowerShell, but what is run by CSI proxy.
M
So we were also looking at running PowerShell as a process that is living next to CSI proxy, and we would send commands to this additional process, which would execute the actual PowerShell command. I'm not sure if that will work, but that's something that was suggested. I'll also share the library in #csi-windows.
C
Well, yeah: once we identify it, we can figure out if there is something, or another solution.