From YouTube: Kubernetes SIG Windows 20210629
A
Okay, we've started recording. Welcome to SIG Windows, everyone. Today is June 29th; hope everyone's staying cool out there. This is a CNCF meeting, so we follow the CNCF guidelines and code of conduct. If anyone has any questions you can reach out to myself or any of the Kubernetes leads. And yeah, let's go ahead and get started with today's meeting. Looks like there are a few new folks on the call, or at least one or two. Does anybody want to introduce themselves and say hello?
B
I'll go ahead and go. My name is Eric Smith. I basically joined because Muzz asked me to join today's meeting, so that we can understand a little bit better, I guess, some issues that are being run into, and see if we can clarify some things, or go back and figure some things out for you.
A
Anyone else? Okay, cool! So just a quick reminder: next week is code freeze, next Thursday. So if there are any features you're trying to get in, make sure you get those in.
A
Let us know if there's anything we should be watching. I know Avraham's got one, the HostProcess one is in, and it sounds like we might be a little bit delayed on the node log collector. But if there's anything else, please make sure you drop it into the SIG Windows chat so that we can get it reviewed and watch it. Does anybody know of anything out there that we need to be tracking?
A
Any other PRs that we should be watching or tracking?
A
Okay, cool. So I'm not sure if Perry's on the call. I worked with Perry last week and we got HostProcess up and running in CAPV, and we were also able to schedule a pod that was similar to a CNI, so we're still making some progress there. I just wanted to let people know that the PR that had the HostProcess work in hcsshim has merged.
A
So now we should be able to build the hcsshim containerd shim from the main branch, so we no longer need to be building off that PR.
There is one item that we identified with volume mounts for HostProcess containers: those aren't being mapped through properly, and so we are working to get that updated so that they show up inside the containers folder that is created on the host.
A
So that's one item that's still open, but I wanted to let folks know that it's up there. And using the containerd PR that Perry has open, we were able to schedule pods without the network setup which is needed for the CNIs, so we're making some progress there.
A
Okay, great. Ravi, do you want to take this one? This is something that's come up quite a bit: how do we actually identify Windows nodes, or Windows pods?
D
Sure, yeah. So at a high level, what we are doing in the 1.22 timeframe is this: the SIG Auth community has come up with a replacement for PSP. It is called Pod Security Admission, instead of Pod Security Policies, and the main concern there is: how do we identify Windows pods? Do we have a generic way to identify Windows pods during API server admission?
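The replacement being discussed here, Pod Security Admission, is driven by namespace labels rather than PSP objects. A minimal sketch of how a namespace would opt in, assuming the alpha-era label keys (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                   # illustrative name
  labels:
    # Enforce the "restricted" profile for pods created in this namespace.
    pod-security.kubernetes.io/enforce: restricted
    # Additionally surface warnings on violations of the same profile.
    pod-security.kubernetes.io/warn: restricted
```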
D
That is the question that they would like to get answered, and at the same time we as a community, the Windows community, would like to get answers too. So: what are the existing ways to identify Windows pods?
E
Sorry to interrupt you. Just trying to understand the use case a little bit more: when you say a pod needs to be identified and the security needs to be applied, can you explain a little bit what kind of security we are talking about? Why is it so important to have a pod identified as Windows?
D
Yeah. So the security constraints that we apply to the pod, some of them could be Linux-specific, like SELinux, or seccomp, or AppArmor.
D
Those are the types of security constraints that can be applied to a pod, and previously what we used to do is apply them to all pods indiscriminately: we do not know if it's a Windows pod or a Linux pod, we'll go ahead and apply anyway. And this application of the security constraints happens at the admission level, which is at the API server level, not even in the kubelet. So as a workaround, what we have been doing is, once the pod lands onto a node, within the kubelet code...
E
So when it lands, and you strip it out at that point, you could tell. When the pod lands, you could tell it's on a Windows node, right, because you could identify it. But you want to do it before that. So you are able to do it after; you just want to catch it earlier. Is that correct?
D
That is correct, because there is no reason to apply some security constraints to a pod when, at that point in time, we do not know if it's a Windows pod or a Linux pod during API server admission. That was the reason we have been applying all the security constraints indiscriminately.
D
Yeah. Some of them, when we look at it, especially on the OpenShift side of things: we have another admission controller which actually applies SELinux-specific security constraints. So what is happening there is, some of them may actually translate onto the Windows security constraints in a different way, and some of them may be needed, like, for example, runAsUser, which works pretty differently from how it works in the case of Linux.
D
So we would like to make sure that we can authoritatively apply those security constraints at API server admission time, instead of after reaching the kubelet, where we are not 100% sure whether a given constraint can be removed or not.
F
I see, okay, thanks. In other words, we don't want the kubelet to do any sort of policy application, because that's what is happening now, right? One entity is applying some policy on some pods, and then the kubelet is coming up and removing it, saying, "oh, these are Windows." We're trying not to burden the kubelet with that sort of decision-making, and instead letting the entity that's trying to apply the policy take care of it, and be Linux-versus-Windows aware.
A
Yeah, and it also became an issue with the new pod security policy replacement that is coming out in 1.22.
A
Currently,
I
believe
the
restricted
policy
wouldn't
actually
apply
to.
It
won't
really
work
with
windows
containers
and
especially
when
we
start
to
talk
about
the
host
process
coming
in.
We
need
a
way
to
be
able
to
restrict
that
field
for
just
the
windows,
components.
D
Yeah, that is the main reason we have started this work upstream in the Kubernetes community. At this point in time, we are distinguishing Windows pods from Linux pods using labels, technically the node selector on the pod, and runtime classes. Those are the two different ways to say, "this is a Windows pod, I would like to get it scheduled onto a Windows node." One of them, using node selectors, is perhaps not good enough in terms of security.
D
The
main
reason
is,
anyone
can
apply
a
label
like
like
say
if
I'm
submitting
a
pod,
even
though
I'm
not
a
clustered
man,
I
can
specify
the
node
selector,
which
says
I
would
like
to
get
landed
onto
a
windows
phone.
Similarly,
I
can
apply
toleration
which
says
I
can
I
can
tolerate
a
windows
node.
Let
me
get
scheduled
onto
an
investment,
so
what
the
auth
team
has
suggested
is
go
with
the
runtime
classes.
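The label-plus-toleration pattern being criticized here, which any pod author can set without cluster-admin rights, looks roughly like this (a sketch; the pod name and taint key are illustrative, while `kubernetes.io/os` is the conventional well-known label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-workload                  # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows         # "land me on a Windows node"
  tolerations:
    - key: os                         # illustrative taint key
      operator: Equal
      value: windows
      effect: NoSchedule              # tolerate a tainted Windows node
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2019
```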
D
So what I have proposed in this KEP is rallying behind those runtime classes: can we use the existing runtime classes at API server admission time and say, "hey, this is a Windows pod"? I would not like to have just labels and tolerations, without a runtime class associated with the pod.
D
And
the
other
option
that
I
have
mentioned
is
having
a
field
in
the
pod
spec.
I
would
say
the
operating
system
is
windows,
but,
as
I
mentioned
in
the
in
the
alternatives
considered
the
main
problem
that
I
see
with
it
is
we
have
two
different
ways
of
specifying
the
same
thing.
There
are
other
entities
in
the
cluster
which
are
actually
using
runtime
classes
to
do
something
similar.
Why
can't
we
pick
it
back
from
that.
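The runtime-class alternative ties scheduling and identification to an object a cluster admin controls. A sketch, with an illustrative class name and containerd handler; the exact handler is cluster-specific:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: windows-2019                  # illustrative name
handler: runhcs-wcow-process          # illustrative containerd handler
scheduling:
  nodeSelector:
    kubernetes.io/os: windows         # only schedulable onto Windows nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: win-workload                  # illustrative name
spec:
  runtimeClassName: windows-2019      # the admission-time identifier
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2019
```

Because only admins can create RuntimeClass objects, a pod referencing one is a stronger signal than a self-applied label.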
D
Yep, yeah, this is for 1.23. The Pod Security Policy replacement merged yesterday at alpha stage, so in 1.23 it is going to go beta.
D
So
by
that
time
we
need
to
have
like
at
least
a
direction
which
says
this
is
how
we
are
going
to
identify
windows
spots
like
if
we
go
with
the
runtime
classes.
Perhaps
we
can
say
that
because
the
cigar
folks,
they
have
mentioned
that
identifying
windows
pro
windows
parts
is
a
problem
for
them
and
if
they
have
an
identifier,
it
would
be
easier
for
them
to
do
it
in
their
implemented.
So
we
can
actually
pick
back
on
their
admission
controller
too.
A
Great,
thank
you
yeah.
I
I
had
the
same
thing
and
I
know
it's
come
up
several
times
in
in
the
past
year
or
so
on,
various
pr's
on
how
do
we
actually
identify
windows,
windows,
pod,
and
so
that
was
one
of
the
things
I
I
was
kind
of
suggested
and
I
need
to
read
through
and
figure
this
out
think
about
it,
a
little
bit
harder,
but
yeah
thanks
thanks
a
lot
robbie.
This
is.
A
Okay, so the next one is projected volumes and ContainerUser. Avraham, you had identified this bug, and we've been kind of going back and forth. I talked with Muzz and we brought Eric along, so maybe quickly do a recap of where we're at, and then Eric can help answer some questions.
F
Yeah, sure. So the way we hit this problem was like what Ravi was talking about: OpenShift has these admission controllers that apply a bunch of security context onto pods, and they do it whether it's a Linux pod or a Windows pod.
F
One of the security context fields that was being applied was runAsUser. In the latest release there is also a feature that results in a projected volume being attached to every pod, and this projected volume has a bunch of secrets, some certs, that can then be accessed within the container.
F
However, for Windows pods: if I bring up a Windows pod and this happens, the runAsUser field is set even though it is very Linux-specific. And there was a KEP introduced that said, for projected volumes, I want the correct permissions set within the container for the files or resources that are being projected. So what that feature does is it tries to go and say, okay, for this user, I'm going to, you know...
F
Do
a
bunch
of
csch
mod
commands
and
a
bunch
of
ch
owned
commands,
or
just
a
single
one,
except
of
course,
if
you
do
this,
for
a
windows
pod
where
the
runner's
user
is
set
and
you
try
to
do,
chon
chon
is
not
implemented
for
windows
in
the
golang
library
that
causes
the
end
result
is
the
pod
doesn't
come
up,
and
so
I
started
looking
into
it
and
then
what
I
realized
is
the
you
know
for
one.
F
...you know, how to do it similarly to the way it's being done on Linux. That's what I've been trying to figure out and do, and I've discovered a bunch of things along the way. For example, unlike Linux, with Windows the users have to be created when the container image itself is being built. So you can't attach a random user to a particular container image; they need to have been created as part of the container image creation. And there are some interesting things with, you know, ContainerUser.
F
If I go inside the container, like exec into the container, and try to query ContainerUser, I don't see it listed when I do Get-LocalUser. So, long story short: what I think we are trying to achieve here is to ensure that the correct permissions are being applied for projected volumes being used with Windows pods.
B
Okay, yeah. Thank you for that description. Muzz had shared a little bit of context with me, and that aligns with what my understanding was. Let me paraphrase it real quick just so you can confirm that I understood, but I think this is what you're looking for: a way, based on a set of users that will exist inside the container, to be able to go apply some...
B
...you know, in the Windows world, ACL settings, security descriptors, on the files or resources for that projected entity, so that you can control which of those users has access to which file, etc. Is that basically right?
F
That is correct, and in fact I have a PR open where I'm actually trying to do this, and that's where I started to run into trouble. Because in the Linux world, you could do a chown to a particular user and it'll get applied within the container, whereas in Windows, if you try to get the SID of, say, ContainerUser, the host is unaware of that user, right? It's all inside the container context.
F
So I'm trying to think; somebody can jump in here. I'm not completely sure whether the container has gone to fully running at this point, but I know that the mount points have been created. Somebody can correct me about at which point the container goes to fully running, but I don't think it has; it's definitely not fully running yet, right? It's in that container creation process, where it's trying to mount these volumes and ensure everything is exactly so. It's at that point, I think.
B
No, actually, hearing what you're saying, I'm sitting there wondering, "crap, that would be an issue." So I'm going to have to go talk to some folks on our end.
A
Okay, all right, so go ahead. I've done some tests where we specify runAsUserName as ContainerAdministrator, and if I mount in the host volume, the host directory, like the C:\ root, I can modify and mess with all the different files. So I can change my SSH key and all sorts of things. If I run as ContainerUser, I can see those files, but the ACLs are blocking me from being able to edit them.
A
I
think
I
can't
even
cap
them
and
so
my
question
that
I
put
on
the
thread
there
was
if
we
had
user
one
in
container
container,
one
and
user
two
and
container
two
and
these
volume
these
projected
volumes
get
mounted
in
and
then
we
map
in
the
host
volume
into
container
one
and
container
two
can
container
two
now
go
to
where
that
volume
is
entered
and
see
that
file.
My
guess
is
that
it
can't,
but
I
think.
B
I'd have to hear that setup one more time, but let me just say one thing real quickly; we'll get past this, and then you can describe that again. You mentioned that as ContainerAdministrator you could have access to whatever was mapped, but as ContainerUser you couldn't. In both cases, those container accounts only exist in the container; they don't exist on the host.
B
But what is true of the ContainerAdministrator account is that it's a member of the Administrators group, and the Administrators group is essentially just a well-known SID itself. So it is still effectively an administrator from the perspective of the host, and if it can see some host resource, it can manipulate it. Which is why I think, when you mapped the volume, ContainerAdministrator had access, whereas ContainerUser wasn't part of that Administrators group, and the host is like, "I don't know who you are," so any ACL check is going to fail. He might look like Everyone, he might be a member of the Everyone group, so he might have read access to some things, but I think that's the difference you're seeing there. But can you explain that setup again? Yeah.
A
Essentially...

F
Hey, James, can I give it a try? ...And then also attach C:\, not as a projected volume, but as a mounted volume, to each of these containers. Then go to container one, which belongs to user one, and try to access the projected volume for user two, and see if user one has access to user two's projected volumes. Is that correct, James?
B
Okay, in that case I agree with James. I mean, it's worth testing, but I agree; I think he said he did not believe they would have access, unless my understanding of what these projected volumes are is wrong. I should warn you guys that I kind of work more at the kernel level, so some of these higher-level concepts, the Kubernetes names for things and stuff, I'm not super familiar with, so I may be...
F
Yeah,
maybe
yeah
the
tldr
version
of
a
projector
volume
is
there
was
always
this
requirement
or
people
are
asking
hey.
I
want
to
map
multiple
things
onto
the
same
like
directory
inside
a
container,
and
I
don't
want
stuff
to
be
overwritten,
so
what
the
projector
volumes
does
is
it
says?
Oh
for
these
lists
of
resources,
there
are
certain
only
certain
resources,
like
secrets,
downward
apis,
and
I
forget
the
yeah
secret,
download
apis,
config
maps
and
service
account
tokens
you
can
you
can
collect
all
of
that
and
project
it
onto
the
onto
a
container.
B
Okay, okay. So they might not be, but I think the question would be, and I'm assuming that's the question: if they were coming from the same backing resources or backing files, and they're projected into both, is there a way to limit user one from seeing it from one container, but not user two from the separate container? Was that it?
B
Yeah, okay, yeah. And I don't know enough about how that projection works to speculate on what the result would be. Muzz, do you know, are those just bind-mounted on our end, bind-mounted for the projected stuff?
A
Okay, well, thank you for helping. I think at least the ContainerAdministrator SID helps make sense of why we can see that. So all right, report back, and we'll continue to research it as we go.
F
Yeah, I'll try this out today and tell you what happens, James, on the same thread.
A
Cool. And with that, I think we're at time, so unless anybody else has something pressing, we'll make sure you get it onto the agenda for next week. Thank you.
A
Bye, folks, bye. And we have SIG Windows after-hours coming up here; I think I'm gonna be running it today. So if you're still hanging out, you can...