From YouTube: Kubernetes SIG Windows 20220426
A: All right, hello, everybody, and welcome to the April 26, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. We have a pretty wide agenda today, but I see a lot of folks here, so let's see if anybody else has any topics. In terms of announcements, the only thing that I have is that the 1.24 release, I think, is still expected to come out next Tuesday, so be sure to look for that.
A: I'm not sure exactly; that usually depends on the release team. I know that in the past they've done kind of a staged opening, where they let a lot of bug-fix PRs that were kind of blocked go in, and then waited a little bit for feature or enhancement PRs to come in.
A: I will try and find out for next week what the plan is for opening up into master again, or main. I think they're also trying to rename the master branch to main.
A: Yeah, because I think we're waiting for that to check in the kube-proxy caching fixes that you have, right, David, and then get those merged into the 1.24 branch for a 1.24.1 release.
A: Soon, just to get it in, okay. Does anybody else have any questions?
A: Okay, again I'll open it up. We'll give a space if there are any new contributors here that want to either introduce themselves or ask some questions, or if anybody would like to share what they're working on, feel free to do so now. You can just raise your hand or unmute and talk.
A: Okay, I guess we can go into the agenda then. David, if you want, you can go first; I can do a demo later.
B: So, on April 25th the KB was released. I think originally it was planned to go out last Tuesday, actually, so a week ago, but it was delayed for one week, unfortunately, due to some approval that was needed. This contains several networking fixes, so if you're using Server 2022, I would highly recommend getting this patch.
A: Okay, yeah, so there's more information in the link, and these fixes will be distributed with the cumulative rollup in a couple of weeks too.
A: Does anybody have anything else they'd like to discuss as part of the agenda? If not, I can demo some of the prototype work that Danny's been working on for host process containers.
A: So, Danny, you're on the call too; if you have anything you want me to demo with the new behaviors, let me know. As I mentioned before, the way that we had to set up the volumes for host process containers was a little bit different than how they work for Linux privileged containers, and this has caused some issues, especially around being able to use things like the in-cluster config functionality of the Kubernetes API client to automatically fetch the secrets for authentication.
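(For context: the in-cluster config helpers in the Kubernetes client libraries read the mounted service account files from a fixed, well-known path. A minimal sketch of that lookup; the helper name and the returned dictionary shape are ours, and the real clients build an authenticated REST client rather than a dict:)

```python
import os

# Fixed path the Kubernetes client libraries expect. On Windows the
# leading "/" gets translated to the drive root, e.g. C:\var\run\secrets\...
SERVICE_ACCOUNT_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def load_in_cluster_config(root=SERVICE_ACCOUNT_DIR):
    """Sketch of what in-cluster config does: read the mounted
    service account token and CA bundle from a fixed path, and the
    API server address from well-known environment variables."""
    with open(os.path.join(root, "token")) as f:
        token = f.read()
    host = os.environ["KUBERNETES_SERVICE_HOST"]
    port = os.environ["KUBERNETES_SERVICE_PORT"]
    return {
        "server": f"https://{host}:{port}",
        "token": token,
        "ca_cert": os.path.join(root, "ca.crt"),
    }
```

This is why the old HostProcess volume layout broke in-cluster config: the files were not at the fixed path the clients hard-code.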
A: So here I have two clusters set up that are both running Windows Server 2022, and I just have a simple container running in those clusters. One is running with, I think, just the 1.6.2 release of containerd with the included hcsshim, and the other one is a shim from a public branch that Danny's been working out of, with some changes to how we do volume mounts, to demo the updates and the behaviors there.
A: Because we are changing the behaviors of how the volumes get mounted and what's kind of virtualized and what's not, I also have some RDP sessions into the host nodes for each of these two scenarios. So here's the old behavior; I've just called it the old host process container.
A: We have this... actually, under here we have this var directory; I'll just do a tree on that. So this container doesn't have any extra volume mounts defined, but it does have the service account token, and in order to get access to the service account token that's part of this volume mount, you need to know the ID of the container to be able to find it. Now, we do have an environment variable called CONTAINER_SANDBOX_MOUNT_POINT that points to this directory. So you can do... something strange is happening, but you can do like...
A: So you can reference your volumes rooted from this CONTAINER_SANDBOX_MOUNT_POINT, and that's the main reason why in-cluster config doesn't work: it expects this volume to be mounted at just, you know, /var/run/secrets, which on Windows gets translated to C:\var\run\secrets for the service account token. And so this should be... which machine is this?
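(The workaround described here, rooting volume paths at the sandbox mount point, can be sketched like this. The helper name is ours; it illustrates the old HostProcess behavior, not actual shim code:)

```python
import os

def resolve_volume_path(relative_path):
    """Old HostProcess behavior: volumes live under a per-container
    sandbox directory exposed via CONTAINER_SANDBOX_MOUNT_POINT, so
    workloads must root their paths there. Fall back to the static
    path when the variable is unset (e.g. a regular container)."""
    sandbox = os.environ.get("CONTAINER_SANDBOX_MOUNT_POINT")
    if sandbox:
        return os.path.join(sandbox, relative_path)
    return os.path.join(os.sep, relative_path)
```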
A: This is the old host process container machine. So if we do a dir there on that C: directory, we can see a whole bunch of different... let me see if I can zoom in here. Yeah, so all of the different containers that are running on the system, these are just kube-proxy, the CNI containers and everything, and these are all accessible to any of those host process containers.
A: So if you wanted to, you could jump up a directory and grab the secrets from a different container, or any of the data that's mounted in. This is not really ideal, but since you're running as the host, you can do anything you want on the host, so there's not really anything we can do to lock that down.
A
The
new
behavior,
though,
is
that
there
is
some
file
system,
isolation
for
how
post-process
containers
work.
Let
me
show
that
in
a
new
container,
so
the
the
host
view
of
this
isn't
that
interesting
all
the
volumes
are
just
you
know,
set
up
here
and
anything
can
access
them,
but
with
the
new
behavior
I
can.
I
can
demo
that
too.
So
this
is
running
a
private
chin.
A
So
here
you'll
see
that
you
get
when
you
exec
into
this
host
process
container.
You
start
off
as
just
you
get
put
in
the
root
just
c
colon
or
whatever
your
I
think,
whatever
your
working
directory's
defined
as
and
this
mirrors,
the
behavior
of
regular
windows,
server
containers
as
well
you'll
also
notice
that
if
we
do
a
so,
we
do
still
have
a
merged
file
system
view.
So
many
of
these
folders
are
from
coming
in
from
the
host
like
c
colon
k,
you
know
c
colin
program
files.
A
We're
we're
able
to
look
at
the
we're
able
to
mount
the
volumes
in
the
correct
place.
So
one
of
the
reasons
why
we
weren't
why
we
have
the
existing
behavior
is
because
a
lot
of
times
volumes
like
you,
know
the
secrets
one
get
mounted
at
a
static
path.
In
this
case,
it's
just
far
run
secrets
and
before
we
didn't
really
have
any
file
system
isolation.
A
There
is
some
degree
of
file
system
isolation,
so
we
are
able
to
mount
those
volumes
at
well-known
paths
to
to
get
it
to
and
actually
in
this
case,
just
using
enclosure
config
works
because
the
volume
is
is
where
it's
expected
for,
because
the
exit,
because
of
the
old
behavior
we
did
have
a
number
of
there,
were
there
were
workarounds
in
order
to
get
the
the
the
in
order
to
authenticate
with
the
the
cluster,
and
that
usually
involves
you
know
using
this
relative
path
to
find
the
secrets
we
did
preserve
that
behavior.
A: So can you see all of the containers, the volume... oh, I guess, yeah, never mind. You can't see the volumes for all the containers, because that's exactly what we're doing here. So first I'll show what this looks like on the host machine.
A: Things like the volume mount, yeah. I was just...
A: I think this is just an artifact of this being a prototype, but you'll notice here that we still have the path up until where the service account volume is mounted; it still exists on the host, but the contents of that folder don't. And this is that file system virtualization that we were talking about.
A: So this allows us to schedule many pods that use static paths in their volume mounts, and not have collisions with any of that.
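(A minimal HostProcess pod spec with a volume mounted at a static path, which is the case this virtualization enables; the pod name, image, and volume names are placeholders:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example        # placeholder name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true                # required for HostProcess pods
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: example
    image: example.com/hpc-demo:latest   # placeholder image
    volumeMounts:
    - name: token
      # With the new behavior this static path is virtualized per
      # container; with the old behavior it had to be reached via
      # $CONTAINER_SANDBOX_MOUNT_POINT instead.
      mountPath: "/var/run/secrets/kubernetes.io/serviceaccount"
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: token
```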
A: All of that previously used to get placed in, you know, C:\C\ and then a container ID, so if during your container build you added any files or scripts, they would be accessible in that C:\C\ whatever, or off of that CONTAINER_SANDBOX_MOUNT_POINT directory. Now what we're doing is each container mounts its contents to that C:\hpc directory. So this container right here is the nano server based container, so everything that's in the nano server image still shows up in here. We are prototyping, and have some successful prototypes of, using a slim image or a scratch image with host process containers that doesn't have any of the contents from nano server or server core; we can probably demo that in one of the upcoming sessions.
A
But
this
container
image
doesn't
have
anything
there.
So
it's
not
that
interesting.
You'll
notice
that
on
the
host
to
this.
C: You want to explain what's going on there, Danny? Do they actually have contents in them? I think it's just... it's probably an artifact of the way that...
C: Yeah, what we're doing here is: there's a driver in Windows that can handle kind of exactly what Linux bind mounts are, but you have to have everything besides the last bit of the path exist. So you can't just say bind, you know, C:\path\to to C:\other\path; the path has to exist up to, but not including, the last bit of the path.
C: There's probably a way to work around this that we're trying out, but for the time being, that's the artifact that you're seeing here. So the actual bind mount point won't show up on the host; it'll only show up in the container, but everything prior to the last portion of the path exists.
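(The constraint Danny describes, that everything but the final path segment must already exist on the host, can be sketched as a validation step. This is an illustration only, not the actual shim or driver code:)

```python
import os

def validate_bind_target(target):
    """Illustrative check mirroring the Windows bind-mount constraint:
    the parent of the bind target must already exist on the host,
    while the final component is created virtually and only shows up
    inside the container. Returns the parent directory on success."""
    parent = os.path.dirname(target.rstrip("/\\"))
    if not os.path.isdir(parent):
        raise FileNotFoundError(
            f"cannot bind to {target!r}: parent {parent!r} does not exist")
    return parent
```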
A: Yeah, so I think this works more or less how we were hoping it would work. I'm gonna stop sharing my screen now and turn on the camera so we can have a quick discussion about this.
A: Let's bring up the agenda again. So one of the things that we wanted to discuss with the community is that currently, as Danny just mentioned, there are some Windows APIs to do these bind mounts, and currently these Windows APIs are not available in Windows Server 2019.
A: Do we want to have the nice behavior only available in Windows Server 2022 and keep the old behavior on Windows Server 2019, with the shim able to detect which operating system it's running on and do the nice behavior for Windows Server 2022 and the not-nice behavior for Windows Server 2019? I guess, yeah, that's the big question.
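(One way the shim could branch is on the host OS build number; Windows Server 2019 is build 17763 and Windows Server 2022 is build 20348, where the bind-mount APIs are available. A sketch of the idea, not actual hcsshim logic:)

```python
# Illustrative only: the real decision would live in the shim and use
# Windows version APIs. 17763 = Server 2019, 20348 = Server 2022.
BIND_MOUNT_MIN_BUILD = 20348

def pick_mount_behavior(host_build):
    """Choose which HostProcess volume behavior to use on a host."""
    if host_build >= BIND_MOUNT_MIN_BUILD:
        return "bind-mount"           # new, isolated behavior
    return "sandbox-mount-point"      # old CONTAINER_SANDBOX_MOUNT_POINT behavior
```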
A
I
don't
really
like
that
behavior,
because
it
does
mean
that
you
may
potentially
need
either
what
I
think
is
going
to
end
up
happening.
Is
people
are
going
to
build
their
host
process
containers
to
target
either
a
specific
os
like
windows,
server,
2019
or
windows,
server
2022,
because
of
that
or
they're
only
going
to
implement
the
windows,
server,
2019
behaviors
to
get
it
for
maximum
compatibility
and
then
the
new
functionality
is
not
really
going
to
get
used
so
yeah.
I
guess
let's
open
it
up
for
some
discussion,
see
what
what
people
people
think.
A
What
are
your
thoughts,
danny
james
claudio
anybody,
who's
playing
with
host
process
containers?
I'm
curious.
E: I think moving forward it would be a lot better, especially for the user experience, but it does put a user in a difficult spot: when they deploy host process containers, they will have to make sure they're gonna spawn on 2019 or 2022. That's gonna be difficult to manage.
A: I'm also curious what everybody's thoughts on adoption of Windows Server 2022 are, because if there's kind of critical mass in moving to Windows Server 2022, that might not be that big of an issue. But I know AKS currently only supports Windows Server 2019, and I'm guessing a lot of other big cloud providers or infrastructure providers are in the same boat.
D: At VMware, you know, our Windows offering is relatively new. When I talk to folks... the last person I talked to about this was Jamie, and Jamie, I think, was saying, you know, Windows on Kubernetes is relatively new when you think about companies adopting it.
D: I guess the bigger question is: are more people going to be adopting Windows on Kubernetes now than before, in which case host process is increasingly relevant, or is it that everybody's already running it and they don't want to tear down their old 2019 servers? That's kind of the question that I have for us. I think 2022 is probably a good thing, because we just released our VMware Kubernetes on Windows product recently, so for Tanzu, I mean, probably most of our customers will want to do 2022.
A: I don't remember the full name, yeah. Yes, we do, and that always points to just that C:\hpc. That may be configurable based on some input that gets passed to the shim, but for now that's just what we decided on from the prototypes. So that is still available: if you still want to look for the volume mounts or your payload content under C:\hpc, or from that CONTAINER_SANDBOX_MOUNT_POINT environment variable, that's still possible.
E: That's what I was curious about, because if we are going to go forward with a new implementation, and people had built parts around the old implementation, whether those will still work with the new implementation.
E: But if that environment variable is set to the hpc path, then I think it should work as before. So that's good.
A: Yeah, existing containers that were built around the old behavior should continue to work with the new behavior on Windows Server 2022. But if there are new host process containers that just assume in-cluster config works for their payload, those likely will not work on Windows Server 2019. The other thing is, I think figuring out this volume behavior is one of the main reasons why we still haven't tried to pursue, you know, GA status for the feature, and why we kept it in beta for 1.24. I'm trying to figure out a nice way to say this, but I was really hoping that we could have the same experience for Windows Server 2019 and Windows Server 2022 for the feature.
A: Yeah, and we're definitely not going to make a final decision here; I'm just starting some conversations that are probably going to happen, especially if we go and update the KEP to say, you know, in-cluster config works, and here's how it works. So if anybody else has thoughts, feel free to raise them in the meeting, in Slack, or any time.
A: So, all right, that's the top of the hour. If anybody has...