From YouTube: Kubernetes SIG Windows 20220118
Description
No description was provided for this meeting.
A
Hello, everybody, and welcome to the January 18th, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. I'll start with some announcements. I don't see too many, but I did want to mention, for anybody who missed it: last week the 1.24 release schedule was finalized, and there's a link to that in last week's meeting notes. If you are curious, please take a look. There's also a lot of information about the release team members too.
A
So if you have questions or want to reach out to somebody from the release team, all of their contacts are there, but you can also find all of that on Slack. For this meeting, one announcement I had is: we've been asked to submit the talk outline for the KubeCon EU maintainer track talk for SIG Windows, and I wanted to open it up to see if there are any topics that people would like to see covered in a little bit of depth.
A
Some ideas that we had were a deep dive into the different container users and how they operate; I think there has been a lot of interest in that in the past. Another one is possibly demoing the pod OS field. Ravi, maybe that's something that we could help with.
A
I think, now that it's hopefully going to beta in 1.24, it's something that we should probably start advertising and soliciting a little bit more feedback on. If anybody has any other ideas or things that they'd like to see a deep dive on (probably topics that we can cover in between five and ten minutes), either bring them to this meeting or reach out on Slack and let us know, and we can try and put that together.
A
Yeah, and that's just to submit the talk synopsis, the outline of what we'd like to do. I believe, and I need to double check, but I think they're still planning on doing a hybrid event, so we usually do a recording, and that is usually due about another month after the talk submission deadline.
A
Also, if anybody is interested, this probably means that the call for proposals for KubeCon EU is open or will be opening soon. So if you have an idea for another KubeCon talk you'd like to submit, you can pitch that on the call for proposals website. I'll get a link to that for the next meeting and have the dates and everything there. Is there anything else that anybody would like to announce?
A
Okay, I can start by doing a quick demo of Kured. First of all, we have a couple of members of the Kured community here; I don't know, Christian or Jack, if you want to say hi. For those who aren't familiar with Kured, it's a project, let me open up a link here, maintained by Weaveworks, that helps to schedule rolling or coordinated updates for your nodes, and currently it's Linux only.
A
So here's the project. I'll add the link to the agenda, and there's a lot of information there about configuration and everything. I wanted to briefly do a kind of deep dive into how this works for Windows, because there were some challenges, and we'd like to get some feedback to see if this is an acceptable approach or not. I can go ahead and do that while I'm demoing it; let me find a different screen.
A
Can folks see this? So the way that Kured normally works on Linux is that it schedules a DaemonSet pod to each of the Linux nodes, and users have the option of providing a short script that will run periodically to check whether there's a reboot pending on the node, or you can check for the existence of a file.
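The two sentinel signals just described (a file such as /var/run/reboot-required, or a user-supplied command whose exit code indicates a pending reboot) can be sketched roughly like this. The function and parameter names here are illustrative, not Kured's actual code; Kured's real options differ, though its sentinel command does use exit code 0 to mean "reboot required":

```python
import os
import subprocess

def reboot_required(sentinel_file=None, sentinel_command=None):
    """Return True if either signal says a reboot is pending.

    sentinel_file: path whose existence indicates a pending reboot.
    sentinel_command: shell command; exit code 0 means "reboot required".
    """
    if sentinel_file is not None and os.path.exists(sentinel_file):
        return True
    if sentinel_command is not None:
        # Run the user-provided check; success (exit 0) signals a reboot.
        result = subprocess.run(sentinel_command, shell=True)
        return result.returncode == 0
    return False
```

A daemon like Kured would run a check like this on a timer and, when it returns True, begin coordinating the reboot with the other nodes.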
A
And if that file is present, it indicates that there's a reboot pending on the node, and the Kured DaemonSet will do some coordination across all of the DaemonSet pods to make sure that all the nodes don't reboot at once, or that nodes only reboot during specific hours that are set there, to really help with the stability of the overall cluster.
A
For Windows, there were some challenges with that, because there's no uniform way of detecting whether there is a reboot pending, and there are also no great mechanisms for cleaning up those sentinel files after a reboot. So I can show a little bit about what I did for Windows to get this to work, see if there's any feedback on that, and then do a quick demo. This is a cluster that I already had set up, and I just have some IIS pods running
A
on some Windows nodes, and then I have the Kured DaemonSet running. So the Kured DaemonSet is there; I'll exec into one of those pods.
A
And also, for anybody who is using HostProcess containers, TTY support works now, which is a huge improvement. Thank you very much, Danny, for getting that set up.
A
So on Windows: on a lot of Linux-based machines there's the /var/run/reboot-required file. If it's set, it indicates that there's a reboot pending, and that file also gets cleaned up on a reboot, so a lot of folks just use that to detect whether there's a reboot pending for the node. On Windows, there's no such equivalent.
A
So what I've done here is written a little script that checks a bunch of registry keys to see if there are any reboots
A
pending. So there are a couple of registry keys here that are owned by different parts of Windows, and if any of those registry keys are set, what we'll do is write the equivalent of that /var/run/reboot-required file, and I'll explain why we do that in a minute. There are a couple of different registry keys, and we can check different ones if we need to; these were the two that I found to be the most applicable, and they're both from Windows updates.
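As a rough illustration of the check just described, here is a sketch with the registry lookup injected as a callable so the logic can run (and be tested) off a real Windows registry. The two key paths shown are the commonly cited Windows Update and Component Based Servicing locations; the actual script may consult a different or longer list, and the sentinel path is an assumption:

```python
# Registry keys commonly checked to detect a pending reboot on Windows.
# Illustrative list; the real script may use different keys.
PENDING_REBOOT_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired",
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending",
]

# Assumed Windows-side equivalent of /var/run/reboot-required.
SENTINEL = r"C:\var\run\reboot-required"

def check_pending_reboot(key_exists, write_sentinel):
    """If any known registry key is present, drop the sentinel file.

    key_exists(path) -> bool and write_sentinel(path) are injected so
    this sketch stays platform-neutral.
    """
    if any(key_exists(key) for key in PENDING_REBOOT_KEYS):
        write_sentinel(SENTINEL)
        return True
    return False
```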
A
But the reason why I decided to write this to a file is that, if any of these registry keys are set outside of the servicing stack, those registry keys are not cleared on the next reboot. So, in order to avoid getting into an infinite reboot loop, we wanted to have a way for things like node-problem-detector or something else to signal that this node is having a problem.
A
That does a couple of things, but one of them is that it creates a scheduled task on the node that runs at startup (well, within five seconds of startup) and then force-removes any file at that path.
A
So this way, different workloads or components can also signal that there is a reboot required, in a relatively generic way. If anybody has any questions about that, or has any better ideas on how to signal a reboot, please let us know. The other thing that we have to do is write a kubeconfig file for the service account, because we still don't have in-cluster config support working for this. So I can do a quick demo, real quick.
A
The in-cluster config issue is kind of described in the KEP, and I can point to an issue; we could maybe talk about it after.
A
Files that come into a HostProcess container need to be accessed by a special path.
A
We need to prepend an environment variable there, and the in-cluster config library does not honor that right now, so we just need to wire it up to look for the secrets in the right area. But we can break out into that later. Okay, so for the demo, here's what I can do.
A
And then we can watch the Kured pods, or the logs for the pod. I have it set to check every minute: it'll run that test-pending-reboot script every minute, and then, after that, we should see Kured start to do its work. It'll start by cordoning the node and then draining the node. So we should see this one get terminated and then rescheduled to the other node, and then I also have it set for a 30-second delay.
A
Yeah, so I think one of the main reasons why I wanted to bring this up here was because there are some differences in behavior between how this would work for Windows and Linux, and I wanted to see what use cases, if there were any, people would want to make sure are covered for Windows, or whether there are better alternatives for detecting if there's a pending reboot. As it is, this should detect
A
if there are any pending Windows updates that require a reboot. It should detect if there were any servicing changes, or Windows components that were changed, that need a reboot (like if you installed the Containers feature or things like that), and it should detect if that sentinel file is present.
A
Yeah, so one of the other things that it does is expose a Prometheus endpoint with metrics for whether there's a reboot required or not, and that is, I believe, set to check every minute; that's where these messages are coming from. So you can also just check to see how many of the nodes have a reboot pending, if you don't want to use the reboot functionality at all. And, I guess, also real quick: we have some of the maintainers for the Kured project down here.
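To illustrate the kind of check that metrics endpoint enables, here is a small sketch that counts pending nodes from a Prometheus text-format scrape. The metric name kured_reboot_required matches current Kured releases, but verify it against the version you deploy; the parsing helper itself is hypothetical:

```python
def nodes_pending_reboot(exposition_text, metric="kured_reboot_required"):
    """Count samples of a gauge equal to 1 in Prometheus text format."""
    pending = 0
    for line in exposition_text.splitlines():
        line = line.strip()
        # Skip comments (# HELP / # TYPE) and unrelated metrics.
        if line.startswith("#") or not line.startswith(metric):
            continue
        # Sample lines look like: kured_reboot_required{node="win-00"} 1
        try:
            value = float(line.rsplit(None, 1)[1])
        except (IndexError, ValueError):
            continue
        if value == 1:
            pending += 1
    return pending
```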
A
In the document, I have a PR open with this work in progress, and I've highlighted that, in order for this to work on Windows, we do need HostProcess containers, and I'm just pointing to the link in the Kubernetes documentation for all the requirements to run HostProcess containers.
F
Sure, can I just share my screen? Yep, should be set up. All right, here I go.
B
Yeah, sharing your whole screen with Slack doesn't work with Wayland, just individual windows, but I think on Zoom the whole screen works.
C
Okay, I had a very simple question, but if anyone else has other stuff, go ahead, and I'll go last.
A
I still had the history going, so what you'll see here is that we've acquired a reboot lock. This is part of the configuration for Kured: you can specify not to reboot all of your machines at once in the case of a new Windows update going live. So it acquires a lock, and then it cordons and drains the machines. Here you'll just see the verbose output of that; it runs a shutdown command.
A
So here's the history of what we saw while watching the logs: the pod on the 00 node got terminated, got rescheduled, got placed on the next node, and came up and running while this one was shutting down. So I'll leave a link to the PR and the Kured project in the notes too, so people can comment and take a look if you're interested and want to follow along.
F
All right, so to give everybody a little bit of background about this: this feature is about the ability to view node logs of services running on the nodes, or log files that are in the /var/log folder, on both Linux and Windows.
F
On the Linux side, this is a feature that was actually introduced by Clayton Coleman very early on in the OpenShift 4.x development cycle, mainly as a request from the support side, to help the Red Hat support team in debugging customer issues where they have to have some access to the individual nodes: to look at the kubelet logs, or kube-proxy logs, or one of the journal logs for CRI-O, the runtime, or any service that's running there.
F
We have two nodes: one is the control plane node and the other is the Windows node. What we can do with this new feature is something like this. Here, what I'm saying is: go to the Windows node and show me the log of the service called Microsoft-Windows-Security-SPP. I have no idea what that service does, but it's one of the logs that I'm able to pull, and, as you can see, you're now able to see the logs of that particular service.
F
You can also do other things. Everything under C:\var\log is visible on the Windows host, so you can do something like this: it'll show you all the logs, or all the directories that are available, and, say you want to see the kubelet logs, all you have to do is this, and it will let you see all the kubelet logs.
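For context, the kubelet has long served the node's log directory (/var/log on Linux, C:\var\log on Windows) at a /logs/ endpoint that the API server can proxy, which is the same data this feature surfaces. A minimal sketch of building that proxy path (the helper itself is hypothetical; the path shape is the standard node proxy subresource):

```python
def node_log_path(node, relpath=""):
    """API-server path that proxies to a node's kubelet /logs endpoint.

    relpath selects a file or subdirectory, e.g. "kubelet/" or
    "kubelet/kubelet.log"; empty relpath lists the log directory.
    """
    relpath = relpath.lstrip("/")
    return f"/api/v1/nodes/{node}/proxy/logs/{relpath}"
```

On an existing cluster, a request like `kubectl get --raw /api/v1/nodes/<node>/proxy/logs/` lists that directory today, without the new kubectl support.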
F
Similarly, you could do the same thing for the Linux host. On the Linux side it's a little bit easier: you can point at kubelet as a service and it shows the logs, whereas on the Windows side the kubelet just logs to a file, and so you have to actually show the file. So that's it at a very high level.
F
This is where Christian and I are at this point in this feature. There are some more features that we need to add to this. One is to make sure that some of the command-line options that we have added work. The other thing we want to do is a heuristic that we need to add where, if someone says something like
F
services=kubelet, the end user needn't know whether the service is logging to a file or to any other mechanism; it's for the client to figure out what that option means and to show the logs either from a file, or from journalctl, or from the Windows Get-WinEvent. So those are some of the things that Christian and I are still working on; hopefully we'll be able to get this into 1.24 as a feature.
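The heuristic just described might look roughly like the following sketch. The function, its parameters, and the returned backend tags are illustrative only; the real resolution logic lives in the kubelet and kubectl and may differ:

```python
def resolve_log_source(service, os_name, journald_units=(), log_files=()):
    """Pick a log backend for a service without the user having to know it.

    Returns a (backend, target) pair: "journalctl" for systemd units,
    "file" for plain log files, "Get-WinEvent" for the Windows event log.
    """
    if os_name == "linux":
        if service in journald_units:
            return ("journalctl", service)       # systemd journal
    elif os_name == "windows":
        if f"{service}.log" in log_files:
            return ("file", f"{service}.log")    # e.g. kubelet logs to a file
        return ("Get-WinEvent", service)         # fall back to the event log
    if f"{service}.log" in log_files:
        return ("file", f"{service}.log")
    raise LookupError(f"no log source found for {service}")
```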
F
So that's the demo. Any questions, folks?
A
I think it might be worth calling out that this does not rely on HostProcess containers, so it has no minimum Windows version; I guess the minimum version of Kubernetes would be the version it's introduced in. And this, I think, could be really helpful for debugging issues with starting containers, and other things which wouldn't work without HostProcess containers.
F
Yeah, that's correct, Mark. We're not using HostProcess containers. At least in the alpha phase you need the feature gate enabled, and once that feature gate is enabled, this just works out of the box, as long as you have the latest kubectl.
G
As well, and we're essentially using the kubelet as a streaming server to stream out the logs. And yes, this is probably our most-used debugging tool; it's wrapped in a tool called must-gather, and for node debugging this is really the go-to tool to get logs off of those hosts. This is the first time I've actually seen it run with kubectl, so yeah, that's awesome. Thanks, Aaron.
A
What's the best way for other folks to try this out? Would it be possible for you to share the kubelet and kubectl builds so people can get it running in their dev environments?
F
One way is to use the branch that Christian and I have been sharing, and just build kubelet and kubectl from there. The way I've been doing it is with sig-windows-dev-tools: I just point sig-windows-dev-tools at my local repository.
F
I did have to go and do some extra things on the Windows node, though. On the Linux node it was easy, but on the Windows node I had to go and manually add the feature flags, because it seems that kubeadm doesn't take the feature flags and apply them directly to the kubelet service. On the Windows side, I had to go and change the NSSM service, which was pointing to a particular PowerShell script, and add the feature flag to that particular PowerShell script.
F
This took me a day to figure out, so if someone wants to do this, just let me know and I can help you folks. I tried debugging why kubeadm was not picking it up, but it was turning into a rabbit hole, so I said, you know, I'm going to focus on the feature and get that going. But yeah, in kubeadm, if you specify which kubelet args you want, that doesn't get picked up on the Windows side.
H
Oh yes, so yeah, I think this was a great demo. I think I might start using it for csi-proxy. I had a question about it related to the path flag: is the path relative to some root that is set somewhere?
F
At the moment, no, and I think there was some hesitancy about making this configurable, as a security concern. When we wrote the enhancement, I think the agreement was that it would be hard-coded to /var/log on both Linux and Windows.
F
But yeah, I mean, if we decide that there is no security concern, which I doubt. The basic thing, Mauricio, is that folks, especially Tim Hockin, do not want this to be a way for people to have a peek into the node itself. And the other TBD, also on my plate and Christian's plate, is to make sure that this works only for a cluster admin and not for any other user.
D
I do have a question; you might have shown it, and if you did, I apologize. Could you possibly enable a more verbose logging option through this, beyond whatever is configured right now? For example, if there's some issue that triggers only sometimes, I would like to enable more verbose logging for the kubelet on a node just through this CLI, without having to log into the node, change the verbosity, restart the service, and so on.
G
Go ahead, yeah. I would think this is essentially just accessing logs that are already stored somewhere: either in, I think it's the Get-WinEvent log or something on Windows nodes, or the journald logs on Linux nodes. So you would still have to change the kubelet config to actually log more stuff to that log; we can only really access what's already there.
G
That is possible the way you imagine it right now.
F
Yeah, and I would say, further, that changing kubelet configuration, like service configuration, is sort of outside the scope of this enhancement. So at the moment we're not thinking about changing it, because I would think that would become a more generic sort of feature, right? I have the kubelet service and I want to be able to administer a node and update the kubelet service's args, or something like that. I would think that's outside the scope of this particular enhancement, but I see where you're going with what you're asking for.
A
What might be possible, and I think this was touched upon in the enhancement, but probably not for alpha, is this: if you do have more verbose logs, the APIs that this calls, like Get-WinEvent, do have filters. We might be able to plumb through some filtering, so you could initially only get, you know, info or error logs, and then get more verbose logs through the CLI. But yeah, they would need to already be enabled and being logged.
G
I think that sounds doable. Right now we kind of compose that Get-WinEvent call, and you can specify multiple services and stuff like that. We haven't really dug into all the flags that Get-WinEvent might expose, but if there's a flag you'd like to use, it should be possible to add a flag on the kubectl side to then pass that flag along to Get-WinEvent.
F
Yeah, I think the biggest issue that I've been running into is just: given a Windows service, how do you get the logs for it? I just haven't been able to figure out how to do that in general. It only works for services that log to the Application provider inside Windows.
A
Yeah, I need to drop for SIG Node. Thank you very much for the demo; I think this has come a long way, and it'll be super helpful. Jay, I'm going to hand it over to you if you want to keep going with some of the pairings, but I'm going to also stop the recording now. Thank you, everybody, for attending.