From YouTube: Kubernetes SIG Windows 20220329

A: Hello, everybody, and welcome to the March 29, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct.
A: All right, we can get started. Announcements: code freeze is sometime tonight or tomorrow morning, depending on what time zone you are in. I think there was a message to k-dev about the code freeze; as always, they're trying to get things in earlier and everything. I think we can spend a little bit of time today going over some of the PRs that we want merged, especially if some of the Red Hat folks are here; I'm interested in the node service log viewer. Okay, we can do that later. New contributors:
A: If there's anybody on the call that wants to introduce themselves... I think everybody on the call right now has been here before (oh, I think a couple of new people joined), but if anybody wants to introduce themselves, feel free to go ahead, and we can help answer any questions you have and help out any way we can. If not, we can keep going.
A: Okay, next I wanted to talk a little bit about open 1.24 PRs. It looks like the Pod OS enhancement is pretty much all the way done: the code changes merged sometime last week, I think, and the docs changes just merged today, so that's good. Aravindh, did you want to give an update on the node log viewer KEP?
B: Yeah, sure. We had a last-minute hiccup regarding that: Chris, who was working on it, had some personal stuff to take care of, but we should be back on track today. We most likely, no, not most likely: we definitely need an exception for this if it has to go into 1.24.
B: My plan is to work with Chris today. There are a bunch of work-in-progress comments that were mainly added to address some of the SIG CLI comments that came from Maciej, and Maciej has now approved the PR. We need approvals from node and API. I'm pretty confident of getting the node approval; API is what I'm a bit worried about, but I'm going to spend the rest of the day today just making sure that the PR is in good shape.
B: We'll squash-merge the commits and things like that. Okay, so I'm not sure, Mark: do you initiate the exception request, or does Chris?
A: I can help with that. The process changes slightly each release, depending on who is on the release team; I need to look that up again. Yeah, Aravindh and I spoke a little bit earlier in the week, and I'm happy to help submit an exception request for this.
A: The next order of business is: yeah, clean up the PR, and we'll try hard to get the reviews that we need. Then we can either submit an exception request without those and say we're just waiting for final review, or, ideally, we can come back and say: here's an exception; this is an alpha feature, so it should be safe to put in, and it's all reviewed, we just need a little bit more time.
A: All right, yeah. I really would hope to see this get added; I know folks were pretty excited for the demo. Oh yeah.
B: And just so folks know: when I demoed it, it was under kubectl logs. Maciej wanted us to move it from kubectl logs to kubectl alpha (apparently there's an alpha option with kubectl), so we've moved it under there. Once the code freeze and all that is done, I will go and update the enhancement to say that we're going to introduce it under kubectl alpha.
A: Okay, next: we talked a little bit about the Pod OS KEP that merged. One thing that Ravi highlighted is that, I think he said, there was a mention that maybe the beta APIs are going to be off by default in 1.24, so we'll follow up on that.
C: We do not need to do anything in this case; that's what I was suggesting earlier, because if the new field is part of a stable API, we do not need to do anything.
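For context, here is a hedged sketch (not from the meeting) of what the Pod OS enhancement adds: a pod.spec.os field on the stable core/v1 Pod API, which is why no new beta API group is involved. The pod name is a made-up example.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "example-windows-pod"}, // hypothetical name
		Spec: corev1.PodSpec{
			// The new field: declares which OS the pod targets.
			OS: &corev1.PodOS{Name: corev1.Windows},
		},
	}
	fmt.Println(pod.Spec.OS.Name) // prints "windows"
}
```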
A: I need to check. I think, James, do we need to worry about that for the autoscaling API? That's still beta.
D: There's a subset of container-scoped HPAs that are in beta, but that's only if you're using those specific ones. Okay.
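For reference, a hedged illustration of the container-scoped HPA metric mentioned here: the ContainerResource metric source in autoscaling/v2beta2 (behind the HPAContainerMetrics feature gate in this era). Only HPAs that use this source depend on the beta field; the container name is a made-up example.

```go
package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2beta2"
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// cpuPerContainer scales on a single container's CPU usage
// rather than on the pod's aggregate usage.
var cpuPerContainer = autoscalingv2.MetricSpec{
	Type: autoscalingv2.ContainerResourceMetricSourceType,
	ContainerResource: &autoscalingv2.ContainerResourceMetricSource{
		Name:      corev1.ResourceCPU,
		Container: "app", // hypothetical container name
		Target: autoscalingv2.MetricTarget{
			Type:               autoscalingv2.UtilizationMetricType,
			AverageUtilization: int32Ptr(80),
		},
	},
}

func main() { fmt.Println(cpuPerContainer.Type) }
```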
C: Yeah, and for the existing beta APIs it is still going to be the same: we are going to serve the beta APIs. I believe autoscaling is an existing beta API, so they are going to serve beta only.
A: Okay, yeah, let's still follow up on this next week too; I'll do that, just to make sure. But I think that makes sense, and you can provide a little bit more guidance closer to the release if needed.
A: Next was something that I was planning on potentially demoing; I'll hold off for now. I think... David, did you want to talk? I see you adding some items.
E: Yeah, can everyone hear me? Yeah? I just wanted to mention that there was a new cumulative update released last week. It has been released for Server 2022 and 2019, and it contains some changes that address an issue that some users had where they tried to create a service with over 64 backends: they tried to plumb 65 or more backend pods for one service and it was failing. That limitation has now been removed, and the new limit has been set to 1024.
A: Okay, maybe in April then, when it's in the 4B update, you can do another announcement saying this should be the default now, and people don't need to install the optional package.
A: Thank you. Is there anything else you wanted to add about that, or just letting people know that this issue should be resolved?
E: No, I mean, I guess just try it out and let us know if you run into any issues. That's all I had here. Okay.
A: Sounds good. Does anybody have anything else you want to talk about? If not, I can give kind of an update about some of the HostProcess work that we've been doing to improve the experience while it's still in beta. But I'll open the floor up now. Oh, Jay, I did want to ask you: I saw somebody from either the enhancements or the release team asking about the Windows operational readiness KEP. Can you give an update on that? Did anything happen?
F: Yeah, I mean, I didn't think there was any paperwork to do yet, but I know that there's progress that's been made, because Amim and Chinchi are both working on it. They got stuck in a bunch of GMSA stuff, and they're working on that with James. So I was just thinking we could just not do anything, but I didn't understand whether they actually need... I don't know.
A: What? That's not right, either.
A: I think the enhancements team or the release team is looking specifically to see if any of the PRs needed in kubernetes/kubernetes are going to merge before the code freeze, and I think that there are some test changes there too, so at a minimum we can probably just link to the test changes here.
A: You can say yes, and then you can maybe ask for this to be tracked out of tree; that's usually how it works for projects that live primarily in other GitHub orgs and repos, like kubernetes-csi.
A: And then (I'll try and remember this too) as there are PRs, even in other repositories, that get opened, we should try to list them in the enhancement's description. This is usually what the release team or the enhancements team looks at; I think they trust that any of the PRs needed to progress the enhancement are linked there and all merged.
F: I'm still bouncing around that networking thing; I really want to do it. I saw last week we have another meeting on Thursdays, the triage meeting. How much time do we spend in that meeting? I'm wondering: what if we double-purpose that, or what if we extend it to run longer, so I don't have to create a new meeting? You know, like...
A: An extra half hour. Also, I think, just given where we are in the release cycle, we could potentially take that slot one time and see if it's worthwhile, and then look for a more permanent home for the meeting.
F: A lot of the Kubernetes issues are networking issues, so you know, after that we can sort of just casually do some networking stuff. That way we're not doing a bunch of paperwork to make a new meeting; we're just sort of double-dipping into the one we already have. Also, it's very difficult for me to always make this Tuesday meeting anyway, and I feel like maybe I can help you all out more that way. Maybe some other people can start joining that too, and we can build some more community around that second meeting. How many people are going to that right now?
B: I think if we want to do all of this, we need to sort of restate what that meeting is, because at the moment it's, what is it called, backlog refinement? The backlog refinement meeting, yeah. If we're going to do more technical stuff there, rather than just the refinement, we need to state that somewhere; otherwise...
A: Maybe for this week, Jay, one of us can send a mail to the SIG Windows mailing list and say: hey, we want to pilot a new meeting series, or a new time to discuss this, and we're going to use the SIG Windows triage meeting this week. Send out the time and just let people know, so that if anybody's interested in the networking stuff and doesn't attend this meeting, they can come.
F: Yeah, yeah, definitely. I have no intention of changing that meeting, you know, so let's keep it as is, and then, you know, I'll send an email to the SIG Windows list that we're gonna...
F: The network policy meeting: a whole bunch of people in Europe had opinions, and then not a single person from Europe came, and so we're the ones waking up at seven in the morning.
F: You know what, I love the fact that we have that Slack update, because I didn't even know about that. Okay, cool. I'm going to set up my Google Groups and make an announcement, and I am going to take into account what Aravindh and you all said about what we don't want. I'm going to word it carefully so it's very clear we're not changing the existing meeting; we're just going to do an experimental slot.
A: So one of the main kind of user-experience pieces that we were trying to solve, especially before going to stable, was how volume mounts show up in these HostProcess containers. There's a lot of information about the current behavior in the KEP and, I believe, in the docs on the Kubernetes website.
A: But it's not really ideal today. If anybody is not familiar with HostProcess containers and is interested, you should read the KEP; there's a lot of information in there, and everything is accurate to how they function today. Basically, because we're starting job objects on the host and there's no file system virtualization...
A: ...any mounts that are at well-known paths (the big one is the service account token) we had to figure out where to move, because otherwise all HostProcess containers can potentially see the volume mounts of other HostProcess containers, since each one is just something running on the host. So we couldn't just put the volume mounts at the same places they appear in normal Windows Server containers, because they would conflict, like the service account secrets.
A: That's at, I believe, /var/run/secrets/kubernetes.io/serviceaccount; if all of those HostProcess containers were to try to mount and put things on that path, they would conflict. So what we ended up doing for the alpha and the beta user experience is: each container gets its own directory.
A: Each HostProcess container gets a new directory under C:\C (which is a little bit confusing), and then there's a new volume that gets added, which is the contents of the layer, of the container image, at C:\C\<GUID>, where the GUID is the container ID. Any other volume mounts that get added then go under that. That kind of works, and there is an environment variable that points you to that C:\C\<GUID> directory for the currently running container, so the container workload can know how to find its base, its working directory, and also all the volumes under that.
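A minimal sketch, assuming the beta behavior just described: the CONTAINER_SANDBOX_MOUNT_POINT environment variable points at the per-container C:\C\<GUID> directory, and volume mounts land beneath it rather than at their declared paths. The /etc/config mount path is a hypothetical example.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	sandbox := os.Getenv("CONTAINER_SANDBOX_MOUNT_POINT")
	if sandbox == "" {
		fmt.Fprintln(os.Stderr, "not running in a HostProcess container")
		os.Exit(1)
	}
	// A volume with mountPath /etc/config shows up under the sandbox root,
	// not at C:\etc\config on the host.
	entries, err := os.ReadDir(filepath.Join(sandbox, "etc", "config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```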
A: What doesn't quite work are any libraries that assume well-known paths in container images, the big one being the in-cluster-config way of authenticating to the API server as the service account that the container is running under. So we've been experimenting with what we can do to change that, and I think Danny has a branch of hcsshim open. Let me open up the KEP; this might make more sense. I know we only have about a minute left, but...
A: So what we've been experimenting with is adding a layer of file system virtualization into HostProcess containers, and there are some trade-offs with that that we can get into next week. But the big thing is that with the approach we're prototyping there's basically a union with the host OS file system: all the different HostProcess containers can see the host's file system as-is, and then each HostProcess container has a virtualized view of that, where the payload that comes in with the container image, and all of the volume mounts, only that container can see. So they do act more like how you would expect containers to look, except that they can also see everything that the host sees, and can write to and persist on the host.
A: The big question that we had, though, and the big issue that we're running into right now (here's some information about how this works) is that the APIs needed to do that from the Windows system are currently only available on newer Windows releases.
A: They were introduced in, I think, the Windows 1903 SAC release and later, so they're available on Windows Server 2022, but they're not available on Windows Server 2019 yet. Some folks at Microsoft, Brandon Smith and myself, are trying hard to figure out if we can backport those and have them become available in 2019, but that is still TBD.
A: If not, then I think we're going to need to have a discussion as a community about what we want to do here. Do we want to maintain consistency between Windows OS versions, 2019 and 2022, at the risk, I think, of having a sub-optimal user experience?
A: Do we want to update hcsshim so that it's smart enough to know: if I'm running on Windows Server 2022, give me the good experience; if I'm running on 2019, give me the less-good experience? That could cause some confusion and maybe make it harder to author container images that work on both. Or what do we want to do there?
A: So I think we're going to keep the community updated, but that's what we've been busy working on. I also think that this is probably the biggest issue that we want to solve before we promote HostProcess containers to stable. We're over time; I'll stay on in case anybody has any specific questions about that.
A: If not, we can dig into this in some of the upcoming community meetings, and hopefully plan those discussions before 1.25, in case we either get those APIs backported, or they stay unavailable in 2019 and we need to make some decisions about what to do for 1.25. So yeah, if anybody...
A: These are... yeah, I guess this is probably one of the more complete collections of examples of building containers that are, I think, a little bit more complicated than just running a service, set up as HostProcess containers. So if you're doing this at VMware, or you're looking for how to run other CNIs in HostProcess containers, I would definitely check this out.
A: James has examples of running Calico and Flannel, along with kube-proxy, in HostProcess containers, and these have the full build scripts and everything. Let me highlight this: here's the Dockerfile, there's the workload that comes in, and here's an example of the extra work, some workarounds, that we need to do because of the current way that volume mounts are set up. The hope would be that if we can improve that user experience...
A: ...this would go away. But you'll see here that, instead of just being able to call the in-cluster-config methods (you know, the libraries for the Kubernetes REST APIs) in order to authenticate with the API server, we need to go a little bit out of our way to find the tokens for the service account that we have in order to authenticate.
A: It's not that hard, if you know where to look, but it is just an extra piece of work that each HostProcess container needs to do with the current way the volume mounts are set up. Okay, so they basically grab the secrets from the service account volume mount that gets mounted in, build a kubeconfig with those secrets, and then use that kubeconfig for authentication to the API server.
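A hedged sketch of that workaround (not the exact code from James's repo): read the service account credentials from under CONTAINER_SANDBOX_MOUNT_POINT, where the volume actually lands in a HostProcess container, and build a client-go rest.Config by hand, since rest.InClusterConfig only looks at the fixed /var/run/secrets path.

```go
package hpcauth

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"k8s.io/client-go/rest"
)

// hostProcessInClusterConfig builds a rest.Config for a Windows HostProcess
// container, falling back to the stock helper outside of one.
func hostProcessInClusterConfig() (*rest.Config, error) {
	sandbox := os.Getenv("CONTAINER_SANDBOX_MOUNT_POINT")
	if sandbox == "" {
		// Not a HostProcess container; the standard lookup works as-is.
		return rest.InClusterConfig()
	}
	// The service account volume lands under the sandbox mount point.
	saDir := filepath.Join(sandbox, "var", "run", "secrets", "kubernetes.io", "serviceaccount")
	token, err := os.ReadFile(filepath.Join(saDir, "token"))
	if err != nil {
		return nil, fmt.Errorf("reading service account token: %w", err)
	}
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	return &rest.Config{
		Host:            "https://" + host + ":" + port,
		BearerToken:     strings.TrimSpace(string(token)),
		TLSClientConfig: rest.TLSClientConfig{CAFile: filepath.Join(saDir, "ca.crt")},
	}, nil
}
```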
A: So it's still authenticating as the correct service account and everything; it's just an extra step that needs to happen. Okay, if you want to add anything, feel free to hop in.