From YouTube: Kubernetes SIG Windows 20210810
A: All right, hello everybody, and welcome to the August 10th, 2021 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF Code of Conduct.
A: Let's get started; we've got a bigger than usual agenda, which is great. First of all, we can do some introductions if anybody is interested. We're starting the 1.23 release cycle, so now might be a good time to reintroduce myself and some of the other leads, and we can see if there's anybody here who just wants to introduce themselves or ask any questions for us. So I'll start: I'm Mark Rossetti. I work at Microsoft and I'm the SIG Windows co-chair.
A: I work in the Azure org, but primarily focus on upstream Kubernetes projects with James, Jay, and Claudio. Do you guys want to go ahead and introduce yourselves? Just say hi real quick.
B: Yeah, I'm Jay. Claudio's my friend, and yeah, I'm one of the SIG Windows upstream leads.
C: Go ahead, James. Hi, I'm James, I'm one of the leads as well. Welcome to anybody else who's new.
A: All right, sounds good. If there's anybody else who wants to either introduce themselves or ask any questions, either raise your hand or just go ahead and say hi. If not, we can get going with the agenda.
A: All right, I'll continue on with the announcements. We're just entering the 1.23 development phase, so I wanted to take some time now to start figuring out what KEPs we're planning on advancing, because getting those registered early, and having eyes look at them early, is always helpful. These are the three that I'm aware of that we're probably going to be pursuing. There's the privileged HostProcess containers KEP, which went to alpha in 1.22.
A
we're
going.
I
think,
we're
going
to
try
and
see
if
we
can
get
this
to
go
to
beta
so
that
we
can
get
more
widespread
adoption
they'll
be
I'll,
probably
be
submitting
a
updates
to
the
cap
within
the
next
week,
or
so
we
can
figure
out
exactly
what
all
the
requirements
are
and
if
that's
ready
the
big
potential
risk
for
that
that
I
see
is.
A: The next one was the kubectl node log viewer. Is Ravi or Arvind, or anybody from Red Hat, on? Do you want to quickly comment on this? Is this something that we're still trying to pursue?
F: Sorry, the 2258?
A: There was the node log viewer KEP. There's a... oh no, shoot, I got the links all wrong. There was the node service...
F: Yeah, I know everybody's been spending a lot of time on that. If that's not... sorry, I'm not 100% sure about the status on that. Okay.
A: We can follow up on Slack. I believe that one got accepted for alpha in 1.22, but the implementation didn't make it. So we probably just need to update the milestones on the KEP for that and continue work on it.
F: Oh yeah, no worries. I was trying to figure out if that was what I was thinking of, and it was.
E: Yeah, we can sync up with Arvind, and then perhaps we will let you know, Mark, and the rest of the folks.
A: Okay, sounds good. Yeah, I'm assuming that, since it was approved for the 1.22 release and there were implementation PRs, we're going to continue to pursue that for the next release, but we can confirm later. Okay, the next one is the one that Ravi has been working on, for identifying Windows pods at API admission time.
E: That is correct, and this morning I had a discussion with Jordan offline. Just to give everyone some rewind of what has happened, the main points were: number one, we should have backwards compatibility, with the node selector and tolerations being accepted, for however many releases of Kubernetes we're going to have. Number two is the workload templates: say they have a node selector?
E: How are we going to pass that information on to the pod, so that at the validation stage we can directly validate the workload templates, not the pods? Those two are the points that Jordan raised.
E: If the pod already has a node selector plus toleration within the RuntimeClass, it would automatically be merged, whereas if we have a RuntimeClass at the pod template level, it will not be immediately available within the workload template for the validation, for the pod security admission plugin to validate. So I'm sort of still thinking, but what I feel at this point in time is that Linux is already doing something similar.
E: If you have a RuntimeClass in the pod template of a Deployment or a DaemonSet, nothing will be updated at the pod template level unless it hits the pod stage, because the RuntimeClass mutating plugin acts only on pods, even today. So that is something that already exists and that we are not going to touch; that inconsistent user experience is already there, and in order to preserve backwards compatibility, we are still going to have it.
E: So I'm still thinking; if you folks have any suggestions, I'm all ears, let me know. But this is the current state of things: Jordan is fine with it if we put the node selector in the workload template.
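The RuntimeClass merge behavior described above can be sketched roughly as follows. This is a simplified Python model, not the actual API-server code: the function name and the plain-dict shapes are assumptions for illustration, but the behavior mirrors the description of RuntimeClass admission, where the class's nodeSelector and tolerations are folded into the pod.

```python
def merge_runtime_class_scheduling(pod_node_selector, pod_tolerations,
                                   rc_node_selector, rc_tolerations):
    """Fold a RuntimeClass's scheduling constraints into a pod:
    nodeSelector entries are merged (a conflicting key is rejected),
    and the RuntimeClass's tolerations are appended."""
    merged_selector = dict(pod_node_selector)
    for key, value in rc_node_selector.items():
        if key in merged_selector and merged_selector[key] != value:
            raise ValueError("nodeSelector conflict on key: " + key)
        merged_selector[key] = value
    merged_tolerations = list(pod_tolerations)
    for tol in rc_tolerations:
        if tol not in merged_tolerations:  # naive de-duplication
            merged_tolerations.append(tol)
    return merged_selector, merged_tolerations
```

Note, per the discussion, that this merge only happens once a pod object exists; a RuntimeClass named in a Deployment's pod template is not expanded at the workload-template level.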
A: Okay, some background for anybody who's new to the call, since this has been talked about in most meetings in the past: the issue is that there's really no definitive way of identifying Windows pods.
E: I think next week should be good, like Monday or Tuesday; I'll set something up. Okay, but at a high level, I think Jordan has given his opinion, so I need to think through whether there are any implications. At this point in time, the only thing I can think of is: if we use the node selector at the validation stage, would it be okay to use the node selector in the mutating stage of the API server admission?
E: The main reason is that on the downstream OpenShift side we have a mutating admission plugin, and previously we were thinking we would not use the node selector, because anyone can apply a node selector. Now, with the validation stage approving the node selector as a valid way to say it's a Windows pod, whether it would be okay to use the same field at the mutating stage is something that we need to think through.
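As a concrete illustration of the admission-time check being discussed, here is a hypothetical Python sketch. The function name and the toleration heuristic are assumptions for illustration; the only part taken from upstream convention is the well-known kubernetes.io/os node label.

```python
WINDOWS_OS_LABEL = "kubernetes.io/os"

def is_windows_pod(pod_spec):
    """Treat a pod as a Windows pod if its nodeSelector pins the OS
    label to windows, or if it tolerates a windows-only taint
    (a hypothetical taint key used here for illustration)."""
    if pod_spec.get("nodeSelector", {}).get(WINDOWS_OS_LABEL) == "windows":
        return True
    for tol in pod_spec.get("tolerations", []):
        if tol.get("key") == "os" and tol.get("value") == "windows":
            return True
    return False
```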
E: So I need some more time, but I think I'm fine with having a discussion. If I come up with some other counterpoints, which I cannot think of now, I will set up that meeting next week.
A: Okay. And whatever we do, we should just have somebody from SIG Auth also take a look, even though I think Jordan kind of speaks for SIG Auth as well. Yeah.
A: Yeah, okay. Thanks for driving all of this, Ravi. You can...
E: I'll update the comments with the discussion that I had with Jordan offline today.
A: The last one: I think you created this one, Jay, but I know Claudio was working on this on and off too. I saw an issue that's still active, or open, for Windows conformance.
B: I was thinking to coincide it with the removal of Docker, some of that set. Yeah, well, I was thinking of a few things. I was thinking of 1.22, the removal of those APIs, and then looking at those, and then looking at what's going on with the proxy stuff, because there are certain things that aren't supported by the user-space proxy that's still in tree, and then...
B: But yeah, so I do want to do it. When is the date for having those finished?
B: A definition that, you know, supports, for example, Active Directory, or doesn't support Active Directory; that supports network policies, or doesn't support network policies; just so that we have a definition of Windows that actually makes sense for customers. I think there are so many ways we can do it, and I think we can do a first pass at it for 1.23.
A: Okay. For the 1.22 release, the enhancement freeze was at the end of week three of the development cycle, so that's not a whole lot of time.
B: You know what I mean? Because I think we all have an intuition of what it means to run Windows, so I think it's just a matter of us writing it down, so that there's something in place. I don't think this has to be super hard. There may be some back and forth about certain things; for example, maybe people don't think topology services should be in the conformance definition, or maybe Ravi's stuff.
B: Yeah, we're not going to get a Windows conformance tag. I don't even think that other SIGs will necessarily want those tags, and I don't even know if we want those tags. I agree with you; I think that's a nightmare to go down. I think it's about coming up with the definition, and as long as the definition is objective, I think that's what matters.
B: Okay, I mean, I suppose it could include whatever we think it should include, right? We're the ones that are running Windows clusters. So, who all wants to be involved with that? Sounds like you're interested, Claudio. I can set something up, Claudio.
H: The Rancher team would definitely be interested, but it's just me here. I don't think Jamie and Luther were able to make it.
A: All right, I'm gonna continue in the interest of time. I need to double-check if they're still doing the enhancements liaison role, but assuming they are, I will solicit volunteers to be the SIG Windows enhancements liaison for this release.
A: This role is something that's just for the duration of one release, and it's mainly a way to facilitate some non-coding contributions, by shepherding the different enhancements that a SIG is putting forth and interacting with the release team. I'll post in sig-windows asking if there's anybody who would like to volunteer for that, assuming they're still doing that in the 1.23 release, and we can continue from there. There's a couple of issues that it looks like we want to discuss, so I'll jump into that. James?
G: Yeah, so in 1.22 a bunch of things broke; some APIs were broken. So I updated all of the GMSA CRDs and upgraded a bunch of dependencies and things, and in doing that I noticed that the CRDs are still at v1alpha1. This came up with a customer as well; they asked why it was still in alpha when GMSA itself in Kubernetes...
G: ...Kubernetes went to stable, I think, in the 1.18/1.19 time period, and this webhook hasn't changed for a very long time, for a couple of years now, other than the upgrades that I did. So I think we should maybe bump this to either a beta or a v1. But I wanted to open up an issue, bring it up here, and see if anybody has anything they need to get in before we go to a more stable version.
A: Yeah, I was taking a look, and it sounds like that. So, the way that this works, for anybody who's new: there is a field on the pod spec's Windows security context, credential spec (it might be gmsaCredentialSpec, or just credentialSpec), and that's a JSON blob. For Docker, it gets written to a file and then the path to that file gets passed on the container activation call, and for containerd...
A: ...it gets passed along the CRI API. My understanding is that the syntax of that JSON is kind of locked in place; there would need to be changes to the Windows OS, or the HCS layer, to update it. So this is a way that cluster admins can keep track of these credential specs as Kubernetes objects, and then there is an admission webhook that will blast the contents of the object into that JSON field.
A: I believe this was just an oversight as we promoted the feature to beta, because it was tracked in the out-of-tree SIG repo. Does anybody have any thoughts or objections to doing that?
A: So this CRD is a way of specifying, in YAML, what the contents of that JSON string should be; it just gets serialized right into that JSON string from the CRD spec. My understanding is this has not changed even since the feature was in alpha, meaning the actual structure of that JSON.
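The serialization step described above can be sketched in a few lines. This is a hedged illustration of what the GMSA admission webhook roughly does, not its actual code: the function name is hypothetical, and the credential spec is modeled as a plain dict (as if already loaded from the CRD's YAML); the field path follows the pod spec's windowsOptions convention.

```python
import json

def inline_credential_spec(pod, credspec_contents):
    """Serialize the credential spec contents to a JSON string and
    place it in the pod's Windows security options, roughly what the
    GMSA admission webhook does with the referenced CRD."""
    opts = (pod.setdefault("spec", {})
               .setdefault("securityContext", {})
               .setdefault("windowsOptions", {}))
    opts["gmsaCredentialSpec"] = json.dumps(credspec_contents)
    return pod
```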
G: Yeah, this particular version is independent of the Kubernetes versions; we own this in the GMSA component.
A: All right, well, we can take that offline. I had added these notes about the containerd CNI support, but since it looks like there's an issue here, Sebastian, do you want to discuss it? Then, if there's time, we can go back to the containerd CNI updates.
F: Yeah, sure. I just came across this yesterday and this morning, so I'm still trying to fully understand what's going on, but my understanding is that on Linux, as a fallback, if there's no cloud provider associated with the node that's being joined to the cluster, then the Linux kubelet is going to pick a network interface with a default gateway, and that behavior is not seen on Windows; instead, it's just picking any interface. So it seems to me like this should be happening on Windows as well.
F: So I put a bunch of links in the code to the culprits as to why this is happening. Basically it comes down to: on Linux, it's checking the proc file system, /proc/net/route, to try and find an interface with a gateway. This seems like it needs to be fixed. If people agree, I can put some time into looking into this some more and see if I can come up with something.
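The /proc/net/route check mentioned above can be illustrated with a small sketch. This is a simplified, assumed model of the lookup (the real kubelet code is Go and handles more edge cases): in that file the default route is the row whose destination is all zeros, with addresses hex-encoded per line.

```python
def default_gateway_interface(route_table):
    """Pick the interface that has a default route, the check the
    Linux kubelet effectively makes against /proc/net/route: the
    default route has an all-zero destination and a non-zero
    gateway in that table."""
    for line in route_table.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        iface, destination, gateway = fields[0], fields[1], fields[2]
        if destination == "00000000" and gateway != "00000000":
            return iface
    return None  # no interface with a default gateway found
```

On Windows there is no /proc, which is part of why the fallback behaves differently there; an equivalent would have to query the route table through OS APIs.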
A: Out of curiosity, do you know if it's always just picking the first interface that PowerShell's Get-NetAdapter would enumerate?
F: It uses Go's net library, net.Interfaces().
B: Yeah, this is really important to us, for sure. I'm really glad you're working on this, Sebastian, thank you, because it's really confusing to figure out. I was just looking at this with Friedrich and another colleague yesterday, and we were trying to figure out what's the right way to set up the node IP, and why the node IP is wrong in our clusters and stuff.
F: Okay, yeah, I'm glad that I'm not the only one that's run into this issue. Yeah, I'll take a crack at it and see what I can do, and maybe reach out on sig-windows. Then, if I can't do it in time or I get too busy, maybe someone else can pick it up, but at first I'll try and do something about it.
B: Okay, sounds good; thanks for bringing that up. I will say that it's kind of weird, though, because it's not always the case that you want the default-gateway one; sometimes you don't want the interface that has the default gateway, I think. But yeah, definitely, thanks. Just let us know how it's going when you're working on this, and for sure keep us posted, even if you're just hacking on it.
A: Okay. I wanted to spend a minute or two to explain the current state of the containerd updates, mainly around the HostProcess support, because we do want to try and get more people to use HostProcess containers and give feedback. Right now it's kind of in a weird state where you have to build a couple of different components from a couple of different branches to get that support, but we're working on centralizing all of those.
A: I believe the details of what to do, like where to build from currently, are in the docs that went out with the 1.22 release, so if anybody's interested in giving this a spin, you can follow those. But in order to go to beta, I think we need all of this support just in containerd, and in a release, so people can grab the release bits, just set the HostProcess fields, and get going with that.
A: So those are here; we're working on vendoring those in. There are also a couple of changes to hcsshim that I don't believe went in with this PR (I'll find the PR for that too), but we need those to go in. Then the other big one is this pull request from Perry about wiring up those calls: plumbing them through, pulling the fields off of the CRI API into containerd, and passing those over to hcsshim.
A: If anybody is interested in how all this works, please feel free to review any of these PRs; there's a lot of interesting work here, and if you're interested in contributing to SIG Node especially, these are some good PRs to look at. For now, yeah, we're still recommending that people build from various branches, including one of Perry's branches, to test the containerd HostProcess support. I see Peter Horniak asked: are there any e2e tests that exercise HostProcess containers?
A: Yes, we do have one e2e test in the Kubernetes repository that we're running; it's on the SIG Windows dashboard right now. It's a very simple test: it echoes, I think, the hostname or computer name, and makes sure that it matches the node name and not the container name.
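The assertion that test makes can be sketched in one function. This is an illustrative Python model of the check just described, not the actual e2e test (which is written in Go); the function name is hypothetical.

```python
def hostprocess_hostname_ok(observed_hostname, node_name, container_name):
    """A HostProcess container runs in the host's context, so the
    hostname reported inside it should match the node's name, not a
    container-generated name."""
    return (observed_hostname == node_name
            and observed_hostname != container_name)
```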
A: Okay, so we do have a package that gets built nightly that builds containerd and hcsshim from those branches and bundles them up in the same format as the containerd releases. This is what is linked to (or should be what is linked to) on the k8s.io page for how to run with HostProcess containers, and we'll be updating it to build from the containerd branches as the different changes merge.
A
But
so
we
we
do
have
a
kind
of
a
way
to
do
this
and
the
tests
that
do
test
the
host
process
container
page
do
target
this.
This
github
release.
Where
is
it
it's
right
there?
This
github
release
to
get
all
of
the
container
d
and
http
bits
as
needed.
A: All right, we're a little bit over. Does anybody have any other questions or want to bring up anything? If not, feel free to add items to the agenda for next week, or just reach out in sig-windows on Slack.