From YouTube: Kubernetes SIG Windows 20210720
A: Let's get started. All right — first, introductions. If anybody wants to introduce themselves... Maybe in a couple of weeks we can do another full round of introductions, at least for the leads, just to keep that fresh. But until then, anybody can introduce themselves if they want.
A: If not, we'll get into the announcements. We're nearing the end of the 1.22 release cycle. Today is the deadline for any doc PRs to be marked as reviewable. The requirements are just to have your SIG label applied, to have technical reviewers from your SIG assigned to review any doc PRs, and to make sure the PR is not in draft status.
B: No — yeah, so there's just a backport to 1.21, because it was only an issue from 1.21 onward.
A: Yeah, thanks for that — that was interesting. I'm surprised that took so long; there's an interesting set of issues. Okay, so for docs: I was looking through the kubernetes/website repository, and this is a combination of what I was already tracking and reviewing and what had the sig/windows label on it.
A: There are a couple of PR updates for the HostProcess containers: the general feature promotion/introduction PR, and also a blog post that Brandon's been working on. Thank you for all of that, Brandon. Anybody who's interested, please review — the more eyes we can get on it, the more general and open we can make it, the better.
C: Just any sort of general content would be fine — feel free to add whatever you think is pertinent. There's still some content that I'm going to be adding in the next day or two, so just let me know if you have anything else you want to add.
A: Yeah, I've been starting reviews on that too — I think I have most of mine still in a GitHub draft review, but it looks like it's in good shape; we're getting there. All right, cool. James also added this: we do have a package that we're producing here that has updated containerd bits, hcsshim bits, and everything else needed to run HostProcess containers, coming out of this branch, just to help pull everything together.
A: I think we're planning to move the GitHub Action that builds this package into sig-windows-tools, just to make it more accessible. James, did you have anything you want to add on that? I think you just added it.
B: Yeah, so the job container components merged into hcsshim, so you just have to build off the main branch of hcsshim to get those. The containerd part is based on Perry's PR — we're just waiting for the CRI components, the 1.22 changes, to be merged into containerd. But I just wanted to share this: if anybody wants to try it out, there's a package in the same format as the upstream containerd release, and it has everything built for you.
A: Yeah, and to expand on that a little: the containerd project generally vendors in all of the CRI updates after the Kubernetes release is cut. So I think we're waiting until August 4th, potentially, before those CRI changes get ingested into containerd, and after that we can get the various updates merged that use all the new API fields. That'll be exciting, and it will make it even more accessible for people to run and test these HostProcess containers.
D: A quick question about that — the stuff that's being built off of your branch, Mark: eventually we won't need to do this, right?
A: Yeah. So, in order for these changes not to take many, many Kubernetes releases, we have been controlling these HostProcess containers with annotations that are passed along with the CRI calls — all the needed CRI calls pass the annotations they receive all the way down to the hcsshim layer to control all of this. But at the same time, we added proper fields to the CRI API to control it.
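The annotation-versus-field transition being described can be sketched as follows. This is a minimal illustration, not code from the project: the annotation key matches the one used by the alpha hcsshim/containerd implementation but should be treated as illustrative, while the field form is the pod-level `securityContext.windowsOptions.hostProcess` field.

```python
# Sketch of the two ways a Windows HostProcess pod was expressed during the
# transition discussed above. The annotation name is illustrative of the
# alpha implementation, not authoritative.

# Interim form: an annotation threaded through the CRI calls down to hcsshim.
annotation_style = {
    "metadata": {
        "annotations": {"microsoft.com/hostprocess-container": "true"},
    },
    "spec": {"nodeSelector": {"kubernetes.io/os": "windows"}},
}

# Target form: a first-class field in the pod security context, which the
# new CRI API fields map onto once containerd vendors them in.
field_style = {
    "spec": {
        "securityContext": {"windowsOptions": {"hostProcess": True}},
        "nodeSelector": {"kubernetes.io/os": "windows"},
    },
}

def is_host_process(pod: dict) -> bool:
    """Return True if either form marks the pod as a HostProcess pod."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    if annotations.get("microsoft.com/hostprocess-container") == "true":
        return True
    ctx = pod.get("spec", {}).get("securityContext", {})
    return bool(ctx.get("windowsOptions", {}).get("hostProcess"))

print(is_host_process(annotation_style), is_host_process(field_style))  # → True True
```

Once the proper CRI fields landed everywhere, only the field form would be needed and the annotation path could be dropped.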
A: So once those merge — well, after that we'll vendor all the hcsshim changes into containerd, and the containerd package itself does come with the shim binaries needed. So eventually, hopefully — in the next release after Perry's changes merge — all you'll need to do is grab the containerd package from the containerd release, run that, and then enable the feature in the various Kubernetes components.
A: Hopefully. The containerd package today already works if you're not relying on these HostProcess containers — that's already kind of up to date, and you can just grab it and run with CRI containerd. For the timing of the HostProcess containers specifically, I think it's a little bit unknown.
A: Some folks started looking the other day at the release process and release timelines for a new containerd release, and it looks like they're working towards a 1.6 release, which would be the first containerd release containing these changes. I don't think they'd be willing to backport these or add them to a patch release.
D: So we should just prepare — once we move to containerd, at least from a release perspective — to not have HostProcess support until they release a new version, rather than requiring all these other bits to be included or other changes to be backfilled from outside the main repos. Is that what I'm hearing?
A: Yeah — not from a containerd release. Part of the website updates we're going to do will highlight that we're still waiting on the containerd bits to be released. That will have a bigger impact once we remove the annotations.
A: And I think that'll be some motivation to move this. The GitHub repository linked here is super simple — it's just a GitHub Action that checks out a couple of repositories, builds the binaries from head, and then releases a zip package. Moving it into one of the Kubernetes SIG repositories might make it a little more official and easier to consume, but you'll still need to build the tip of containerd to get the functionality.
A: And we're going to have some discussions around feature promotion for this when we do the KEP updates for 1.23. It's definitely not going stable until all the containerd components are released and well validated, but we'll have to discuss the beta timeframe too.
A: Any other questions about the current status of HostProcess?
A: So, generally, how features progress is: in alpha it's off by default, and in beta it's on by default. Unless there's a need to change that, I think we'll probably just follow that guidance, and then once it's stable...
A: Yeah, we can have some of those conversations during the next set of KEP updates as well, when we're talking about this.
A: Okay — if anybody has any other questions or anything, feel free to ask in any of the usual channels. Next was the CSI Proxy support going to stable. I believe all of the necessary components have had their v1.0 releases, and this is just a quick update to the website with a little more detail, some installation instructions, and a statement that this support is now stable. And then there's a general updates PR that I started.
A: All right, so the only other thing on the agenda: I wanted to revisit the KEP that Ravi is bringing forward for identifying Windows pods at admission time. I had a chance to take a look through it, and I think a couple of other folks did too. The background here is the new Pod Security Policy replacement that went in — in alpha, I think — in 1.22.
A: There were a lot of open questions about how we limit policy enforcement for OS-specific policies when there isn't a canonical way of expressing the OS in the pod spec. A couple of ideas were floated, and one of them was to use runtime classes: runtime classes generally have node selectors on them that guide the pods to a particular machine, but also, more importantly, they're more or less validated at the kubelet.
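For reference, the RuntimeClass shape under discussion looks roughly like the following. The handler name is an assumption (it varies per install); the relevant part is `scheduling.nodeSelector`, which the RuntimeClass admission plugin merges into the pod's own node selector.

```python
# Sketch of a RuntimeClass that steers pods to Windows nodes. The handler
# name "runhcs-wcow-process" is an assumption about the containerd config;
# the key mechanism is scheduling.nodeSelector.
windows_runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "windows-2019"},
    "handler": "runhcs-wcow-process",
    "scheduling": {"nodeSelector": {"kubernetes.io/os": "windows"}},
}

def merged_node_selector(pod_selector: dict, runtime_class: dict) -> dict:
    """Mimic how the RuntimeClass admission plugin merges the class's
    scheduling.nodeSelector into the pod's node selector."""
    merged = dict(pod_selector)
    merged.update(runtime_class.get("scheduling", {}).get("nodeSelector", {}))
    return merged

print(merged_node_selector({}, windows_runtime_class))
```

A pod that sets `runtimeClassName: windows-2019` therefore ends up with the Windows node selector even if it didn't specify one itself.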
A: So, before the containers are started. There were some concerns from SIG Security about just using node selectors or other mechanisms — that it would be possible to bypass the scheduler and not have any kind of OS-level enforcement for those pods.
A: This way, whatever runtime you target has to have an entry — has to be associated or installed with the container runtime — in order to start up. I took a look at this, and I think in general the idea makes sense; it doesn't seem like it's going to be prohibitively expensive, especially in the API server or at the other critical points.
A: Otherwise pods will just go through and be subject to any Linux-specific or global pod security policies that are in flight there — and mainly, I think that's what a lot of the other KEP reviewers from other SIGs are going to want to see.
A: There were also a couple of other good ideas that came up in the discussion of this PR. One of the big concerns was that, even if you have a node selector that says kubernetes.io/os equals windows, there are mechanisms for placing containers or pods on a machine without going through the scheduler.
A: I won't go into those here, but one thing that came up was that it would be good to add a check in the kubelet to say: hey, if this pod has an OS node selector on it, make sure the OS matches the one you're actually running on. Things like that can help prevent people from maliciously placing pods.
E: Tim, who reviewed it from the sig-auth side — he authored Pod Security admission, which is the replacement for PSPs — suggested an approach if you go with the runtime classes.
E: The point he made was: if we go with the runtime class, that's actually a mutating admission plugin, and the pod security plugin — the latest one — is a validating admission plugin, so it will run after the runtime class plugin has already run.
E: What are we going to do with the runtime class then? Obviously we need to make some changes in the runtime class plugin as well as in the pod security plugin. Exactly what needs to be done in the pod security plugin is something we still need to figure out, because if the runtime class admission plugin has already run, we would have both a runtime class and a node selector updated to include the host OS.
E: I was initially thinking that on the kubelet side the changes would be minimal — instead of rejecting, we would strip the pod security settings, which is what we're doing currently. But after looking at Tim's points about bypassing via the node spec — we don't hit this in OpenShift, because we have an admission plugin that blocks people from directly updating the node binding — I hadn't been thinking along those lines.
E: So we perhaps need to include the SIG Node folks in reviewing this, because we're going to reject the pod at admission time on the kubelet side instead of stripping those security settings. Those are two additional points I still need to work through. And Mark, I have updated the sample implementation PR, which shows which admission plugin is actually going to use the runtime classes authoritatively — I used the runtime class admission plugin as an example.
E: No, he's not saying that. He's trying to make it so that, if we go with runtime classes and we use the node selector as the authoritative field to say it's a Windows pod, the same logic is applied at the kubelet level: if the node selector is kubernetes.io/os equal to windows, and the runtime says the operating system is not Windows, the pod itself gets rejected before it reaches the stage where it goes to the runtime and we try to bring up the container.
D: Okay, so we would move away from what we're doing today, which is stripping Linux-specific stuff off a Windows pod — we just strip it away before it's added to the node, right? Or not?
E: What we would do is check for the existence of a runtime class and check for the node selector; if both of them exist and the runtime's OS is not equal to Windows, deny the admission itself in the kubelet code.
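That check can be sketched roughly as follows — a simplified illustration with hypothetical names, not the KEP's actual implementation:

```python
def admit_pod(pod: dict, runtime_os: str) -> bool:
    """Sketch of the proposed kubelet-side check: if the pod carries both
    a runtime class and an OS node selector, and the runtime's OS does not
    match, deny admission before the container runtime is ever invoked."""
    spec = pod.get("spec", {})
    selector_os = spec.get("nodeSelector", {}).get("kubernetes.io/os")
    runtime_class = spec.get("runtimeClassName")
    if selector_os and runtime_class and selector_os != runtime_os:
        return False  # reject the pod at kubelet admission time
    return True  # admit; other admission checks would still apply

windows_pod = {
    "spec": {
        "nodeSelector": {"kubernetes.io/os": "windows"},
        "runtimeClassName": "windows-2019",
    },
}
print(admit_pod(windows_pod, "linux"))  # → False
```

The point, as E describes it, is that the rejection happens in the kubelet rather than after the runtime has already been asked to start the container.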
D: The other question I have — this might be coming from lack of knowledge — is there any way you can bring up an API server and somehow not have these validation and admission plugins running? Or is that just not doable — would people have to actually compile code out and things like that to make that happen?
D: So that's what I was hoping for — somebody needs to explicitly try to disable this; it's not like you add some config somewhere and it'll stop working, right?
E: That's all I have. I just want to reiterate that if we go with the runtime classes, we are sort of going to force people to use runtime classes in the future — if people have just the node selector and no runtime class, it's going to cause a problem. So if people have issues with that, please let us know; you can react.
D: Hey, could you repeat your concern? I couldn't fully understand it.
E: So, for existing users who have just a node selector, we're going to take a phased approach: perhaps, if everything goes fine, in the 1.24 timeframe we graduate this particular plugin to beta.
E: We may actually throw a warning in the first release, saying you have a node selector but you do not have a runtime class with kubernetes.io/os as a node selector; then in the subsequent releases we GA — and we plan to align this particular code change with the beta and GA of the pod security admission plugin.
A: Yeah, that is one of my biggest concerns here too. If possible, I'd like to get to a point where, if people aren't running with pod security policy or other things, they don't need to specify the runtime class — just to make it a little bit easier to get up and running with Windows.
B: So — go ahead — if the runtime class plugin is a mutating webhook, is it possible, since the node selector for the Windows OS is well known, to translate that and create one? I guess you can't create during that time period, but... I was just kind of wondering.
E: We can, and that's one of the things I've mentioned in the KEP. What we're going to do is have an example admission plugin — a mutating webhook, technically — which looks at the node selector, and if kubernetes.io/os exists and no runtime class exists...
E: ...it is automatically going to create one. But it's not going to be enabled by default — it's just going to be an example showing people how to do that.
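A rough sketch of that example plugin's mutation logic — the default class name is hypothetical, and this mirrors the disabled-by-default example being described, not shipped behavior:

```python
def mutate(pod: dict, default_class: str = "windows-default") -> dict:
    """Sketch of the example mutating plugin: if the pod selects an OS via
    kubernetes.io/os and has no runtimeClassName, fill one in. The class
    name "windows-default" is a placeholder, not a real default."""
    spec = pod.setdefault("spec", {})
    has_os_selector = "kubernetes.io/os" in spec.get("nodeSelector", {})
    if has_os_selector and "runtimeClassName" not in spec:
        spec["runtimeClassName"] = default_class
    return pod

pod = {"spec": {"nodeSelector": {"kubernetes.io/os": "windows"}}}
print(mutate(pod)["spec"]["runtimeClassName"])  # → windows-default
```

As E notes right after, the catch is that the referenced RuntimeClass object still has to exist, and creating it is a cluster-admin activity.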
E: The main problem with creating those runtime classes is that it's sort of a cluster-admin-specific activity. From a security standpoint, if we create those runtime classes for them, I do not know how well that would fly.
A: So I keep going back and forth: do we try to add that default containerd runtime — the runtime class that's in the default containerd install — in this case, or what do we do? I think this is probably the most important thing we need to get consensus on out of the whole KEP.
A: Are we okay with essentially requiring runtime classes in order to get your Windows pods scheduled, several releases down the line? My personal feeling is that if we can avoid it, I would like to, because runtime classes are very cluster-specific — but I don't think we really have a better approach or a better path forward here.
E: Yeah, I think the main problem is that from a security side we have always suggested not using node selectors directly, because it's insecure. But in the Windows documentation, right from the start, we have said that the node selector plus tolerations combination is good, even though we've known these problems existed for a long time. If we had gone with runtime classes it would have been okay, but runtime classes came in later, right? I mean...
E: 2018 — that's when this actually happened. So from a user-experience perspective, yes, perhaps it's going to cause a problem, but in the long term I think it's better for the customers.
A: Yeah. I need to check the docs again, but I think they do say that, in order to have a cluster with, say, Windows Server 2019 and then 2004 or 20H2 nodes in it at the same time, it's preferable to use runtime classes to target the Windows nodes. So it's not a new concept — but, as Ravi mentioned, the initial guidance, the first place people are going to see...
E: That's the docs, yeah. Which actually brings me to the next point: we know that this is going to cause a problem, and we know that node selector plus tolerations is insecure...
A: Probably not for 1.22, but for 1.23, yes. Maybe for 1.22 also, but I don't think we have a whole lot of time to get that done. But yes.
E: Sorry — I'm just talking about the docs changes, where we say this is perhaps the recommended way, and we give a hint that in the future node selectors alone may not be enough, if we do go with the runtime classes.
A: Yeah, I think we can do that. The other thing that was actually suggested in some of the pod security policy talks — and you did mention it — is that in OpenShift you have a plug-in that will block a lot of admissions that have the node name already set.
E: Yeah, I think we proposed it in the past, but I don't remember the exact reasons — this was before I started working on OpenShift, so I don't have the history of whether it got rejected or was never proposed; I'm unclear on that. But is that something you want to combine into this effort, or do you see it as a different thing?
E: Well, the node name is fine, but the tolerations — that would still be a problem. So unless we also cover tolerations along with the node selector, that again is sort of a problem.
A: If you have a chance to do that today, go ahead and do some of it and we can review it. If not, I'll try to add them — I can try to add them to the general updates PR that I opened, but I'm pretty meeting-heavy today, so I won't be able to get to it until at least later this afternoon or tonight.
E: Yeah, same here — today may be too hard for me, but I think I'll be able to do it by tomorrow.
F: Quick announcement for folks that are here: we do have Calico working in the dev environment now. So if you want to test the latest version of Calico, 3.19, or Antrea on Kubernetes 1.22 — we support both of those. And if anybody wants to add other CNIs, let me know and we can look at them. But I think that's probably good enough for now.