From YouTube: Kubernetes SIG Windows 20210216
A: All right, hello everybody, and welcome to the February 16th, 2021 instance of the SIG Windows community meeting. As always, please remember that these meetings are recorded and uploaded to YouTube, so please make sure to adhere to the CNCF Code of Conduct and standards.
A: All right, it looks like we have a pretty sparse agenda. I think that's because a lot of folks, including myself, were probably off yesterday for Presidents' Day, but we can go right ahead and get into this. I think we have enough topics to discuss to fill the time.
A: The first thing is announcements. There is one announcement that I will make, and that's that over the next month or so, all of the SIGs need to author an annual report. I think this is part of a CNCF requirement.
A: I already sent a message to the leads, Jay, James, and Deep, who will need to get started on this. We'll share it in draft form with everybody here once it's ready, and folks can comment. I think that's really the only announcement I have. If anybody else has an announcement, either raise your hand or just feel free to interrupt me. All right.
A
If
not,
we
can
get
going
with
the
cap
status,
so
I
actually
haven't
been
online
since
friday.
So
I
haven't
checked
my
mail.
I
know
that
I
did
ask
the
release
team
for
an
exception
and
on
thursday,
the
last
time
I
checked,
I
didn't
hear
back
from
the
release
team.
I've
been
having
issues
getting
onto
mail
this
morning.
Arvid.
Do
you
do
you
know?
Did
anybody
respond
to
that?
Okay,
I.
A: I will ping them again. I was waiting until we hopefully had the KEPs merged before prodding them, because I think that would make a stronger case, particularly for the privileged containers KEP.
A: I think there was one outstanding question with SIG Auth, and that was whether we should enforce that pods with privileged containers set the hostPID and hostIPC flags.
A: Just for completeness, there's a pretty lengthy comment thread in the KEP, and the SIG Auth folks were saying that we should probably set those flags, since that would be more descriptive. I was arguing against that for a couple of reasons, the big one being that when we support mixed pods that contain both privileged and non-privileged containers, it would be very unclear what those flags mean, because Linux and Windows handle the process namespace and process isolation a lot differently. I think the last comment I saw was Jordan Liggitt saying that he wanted to raise that and make sure we document it clearly, but that he would defer to SIG Windows and SIG Node for the final decision.
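For readers following along, a rough sketch of the pod shape under debate. The field names below follow what later shipped as Windows HostProcess containers and may differ from the KEP draft being discussed in this meeting; the image name and the commented-out flags are illustrative only.

```yaml
# Sketch of a Windows privileged-container pod. The SIG Auth question
# was whether pods like this should also be required to set
# hostPID/hostIPC "for completeness", even though Windows handles
# process namespaces differently than Linux.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-windows-example
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true
  # Debated flags (not required in the SIG Windows position):
  # hostPID: true
  # hostIPC: true
  containers:
  - name: privileged
    image: example.com/host-process-image   # hypothetical image
    securityContext:
      windowsOptions:
        hostProcess: true
```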
C: Yes, so what happened is we needed an API reviewer for the node logs KEP, mainly because we are also introducing an API struct for the node log options. Jordan said he couldn't review it and that we needed to pull in Clayton, because Clayton is more familiar with that side of things. So Clayton reviewed the KEP, and...
C: There's no easy way to do this sort of check in kubectl, is what I understood, so I've pinged Maciej again about this. The other thing Clayton wanted was for someone from, I guess, SIG CLI to look at the kubectl source code and see if there are any further issues. I've sort of pushed back on that, saying hey, we already got an LGTM from SIG CLI, and shouldn't this come out as part of the implementation rather than while writing the KEP?
C: The third thing that Clayton wanted was... what was the third thing? Let me quickly look at the KEP. Those are the main two items; the third one I'm trying to remember. There was one more item that he wanted.
C: It's slipping my memory at the moment. I've also pinged Maciej and said, hey, Clayton is asking for this, can you take another look at the KEP? Maciej said he would, but I don't think he has commented on the KEP yet, so I'm waiting on him to say something. The rest of the things Clayton asked for I hope I've addressed, but I haven't heard anything back from him. I also pinged Clayton on the API review channel, saying hey, I've addressed your comments.
A: Okay, so yeah, I think the takeaway is we're kind of just waiting for SIG CLI to comment and respond to some of the feedback that we can't answer ourselves. Hopefully we'll be able to get that today and can start looking at it. Thanks for helping to drive all of this. I've definitely experienced some frustrations with the KEP process before, but I think it ultimately does help make sure that everything is well thought out.
C: Sorry, I remembered the other thing Clayton wanted. He was saying that some of the features I'm adding should also be expanded to pod logs, and I sort of pushed back on that, saying that's outside the scope of this KEP and should be handled separately, not as part of it. That was the other thing I forgot.
D: On the node endpoint, is there some intermediate version of this where we can do the foundational work and figure out what to do with the node /logs endpoint later? Or maybe ask if we can merge it as provisional, because you can put tags around the parts that are still to be decided, saying this is going to be...
A
Yeah,
I
think
one
option
is
sometimes
you
can
say
like
we're
going
to
implement
this
like
we
want
to
implement
this
in
alpha
and
we'll
either
reevaluate
f,
like
after
alpha,
like
keep
it
on
alpha
for
another
release
and
reevaluate.
If
we
need
to
or
or
explicitly
say
this
will
be
implemented
for
beta,
I'm
not
sure
how
that
works
with
kind
of
some
fundamental
changes
here,
but
that
may
be
an
option
as
well.
A: I know I've requested one or two things like that for the privileged container KEP, saying we're going to wait until we get more feedback before settling on the design, and we'll do that for beta.
C: My takeaway from this is that his main sticking point seemed to be that the node log endpoint needed to have higher RBAC requirements compared to the pod log endpoint: not just cluster admins, but cluster admins who have been explicitly granted access to that node log endpoint. I'm not sure if we can get him to back off on that, but...
A
I
wonder
if
we
could
say
that
this
feature
is:
can
be
disabled,
with
both
a
cubelet
flag
and
and
the
feature
flag
that
was
mentioned
in
there
and
say
that
we
can
and
ask
to
re-eval
if
we
can
reevaluate
that
as
keep
maybe
keep
it
in
alpha
for
one
more
release
and
say
you
know
what
this
is
going
to
be
off
by
default
and
we
have
a
there's
a
number
of
different
flags
that
need
to
be
enabled
to
turn
this
on.
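A minimal sketch of the double-gating idea being proposed here: the feature stays off unless both a feature gate and an explicit kubelet option are enabled. The gate and option names below are illustrative assumptions, not the KEP's actual names.

```yaml
# KubeletConfiguration fragment: the node logs endpoint would stay
# disabled unless BOTH switches are flipped.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true          # hypothetical feature-gate name
enableSystemLogHandler: true  # hypothetical kubelet enable flag
```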
C: So all of that is already mentioned in the KEP, so I'm not sure what else to do. I could maybe add another comment there suggesting this approach as a way to move forward and see how Clayton responds.
A: Okay, so that's that. Hopefully we can keep working to get that in; if not, we should keep pushing. I think we can still merge the KEP as implementable and target a future milestone. Unfortunately that won't help with the 1.21 release, but it lets us get a lot of these conversations out of the way a lot earlier. I think that's one possible fallback option. All right, the next thing is: I know...
A
Last
week
we
shared
a
way
to
with
container
d
to
explicitly
opt
that
process
and
I
believe,
all
processes
spawned
from
that
process.
Out
of
the
windows
defender
scans,
I
know
that
there
was
a
couple
folks
who
said
that
they
were
going
to
try
it
and
report
back.
I
was
wondering
if
anybody
had
had
tried
that
and
was
still
experiencing
the
issues
being
reported.
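The transcript doesn't spell out the exact commands that were shared; process exclusions of this kind are typically added through the Defender PowerShell module, so a sketch of that approach (the exclusion set an environment actually needs may differ):

```powershell
# Exclude the containerd process (and, per Defender's process-exclusion
# semantics, files opened by it) from real-time scanning.
# Run in an elevated PowerShell session.
Add-MpPreference -ExclusionProcess "containerd.exe"

# Verify the exclusion was registered:
(Get-MpPreference).ExclusionProcess
```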
A
I
have
a
pr
that
I
I
think
I
have
changes
to
update
the
the
docs
on
kate's
dot.
I
o
to
include
this.
In
the
kubernetes
install
section
for
container
d
or
the
container
d
install,
I
was
waiting
to
open
that
pr,
until
I
heard
back,
I
believe,
was
possibly
jeremy,
who
wanted
to
take
a
look
at
this.
If
not,
we
can
keep
going
on
the
agenda.
A: I'll check on Slack to see if anybody had that resolved. All right, does anybody else have any other topics? If not, I think it might be good to spend a little bit of time discussing some of the test infrastructure updates with the community, especially the test images repos. I know we were talking about that quite a bit in the CI Signal meeting fifteen minutes before this.
A
So
I'll
just
give
everyone
a
minute.
If
there's
anything
else
to
discuss
here,
and
then
we
can
jump
into
that.
E: Done? Okay, yeah, that's what I wanted to know. The other thing I wanted to ask about was what Jay brought up earlier related to the tests: we wanted to switch to runtime classes instead of using just the node selector. The runtime class would include a node selector and a toleration for Windows.
A: Oh, so the way that ends up working is you have... well, no, okay, so that should be... that might be okay. I forget; there's additional functionality that you can only support with runtime classes if you're using containerd, and that's by specifying the runtime handlers in the configs. I think that, yeah...
E
Yeah,
so
I
am
I'm
mostly
talking
about
as
if
I'm
referring
to
the
the
scheduling
section
of
the
runtime
classes,
where
we
we
tell
this
is
the
node
I
would
like
to
run
on,
and
this
is
the
toleration
that
I
would
like
to
expect
the
port
to
have.
So
I'm
not
talking
about
the
runtime
handler
configurations
within
the
runtime
classes,.
A: ...an issue for this. I think this might be a good topic to create an issue for, so we can have a little bit of a discussion there, but this seems like a good...
D: Yeah, overall the context, and I was talking to Ravi about this, was that when running the e2e tests on different clouds in different environments, I found wildly different results. Some people taint, some people don't; some people use runtime classes, some people don't. I was running on EKS, AKS, and a few other places.
D
So
so
that's
the
idea
was,
I
was
talking
ravi
and
then
he
had
this
concept
of
like
what
if
we
just
made
the
tests
robust
so
that
you
know
they
obey
tolerations,
even
though
you
may
not
have
them,
they
try
to
create
runtime
classes.
All
that
stuff
right,
yeah.
A
Okay,
yeah,
I
think
that
I
think
that
that's
a
good
goal,
I
know
with
windows,
specifically
we've
always
struggled
with
how
to
kind
of
definitively
know
that
a
pod
is
intended
for
a
windows
machine
and
this
kind
of
plays
into
this
as
well.
Some
some,
I
think,
there's
recommendations
to
use
the
node
selector
for
the
os.
But
not
everybody
follows
that,
let's,
let's
get
an
issue
created
and
then,
if
every,
if
that
looks
good,
then
we
can
use
that
to
help
drive
like
some
of
those
changes.
E
Sure
I
can
open
animation.
There
is
one
last
item
that
I
want
to
talk
about.
E: So that's where I'm wondering if we should remove it from the documentation and have the jobs test only on 20H2. As of now, the way I see it, only the GCE folks have 20H2 jobs, so that is something we need to have in our AKS containerd jobs and then roll them up into our testgrid.
C: Hey, I have...
A: I think for the node images, for Azure, we can expand that as part of the deployment through the prow job with some flags. But yes, that is potentially a concern.
A
Think
that
we
do
do
that,
but
we
should
definitely
allow
my.
F
All
of
our
other
staff
releases
expand
that
base
image
to
100
gigs.
I
think
it
is
so
I
I
missed.
I
was
out
last
week,
so
I
missed
the
conversation
is
so
are
we
dropping
1903
support,
even
though,
like
I
think
it's
supported
on
118?
A: A couple of, maybe nine months ago at this point, we had a discussion; there's an issue about the support policy. What we decided on was that we would tie each Kubernetes release to a list of Windows versions that we said were supported, and keep rolling that forward. I think we just forgot to update the doc for the 1.20 release.
A
But
so
what
we
were
saying
is
like
at
each
near
when
each
kubernetes
minor
release
is
getting
cut,
we'll
evaluate
and
say
like
add
the
support
to
that
table.
That
says
you're
at
the
most
recent
or
the
most
recent
ltsc
and
the
two
most
recent
sac
releases
and
we
wouldn't
go
and
re-update
those
already
released
minor
versions
based
on
new
sac
releases
for
windows,
so,
for
example,
the
118
release.
A
We
wouldn't
necessarily
say
that
hey,
you
know:
we've
done
validation
on
the
20
h2
images,
but
we
would
each
for
each
new
minor
release,
update
that.
So
for
me,
given,
given
that
we
could
either
re-evaluate
that
decision
or
we
can
just
update
the
one
we'll
definitely
update
the
121
release
docs.
As
that
as
those
come
out
and
say,
the
the
most
recent
supported
versions
are
2019
ltsc
and
20,
h1
and
or
2004
and
20
h2.
For
that
I
don't
think
we'll
touch
118
or
119..
A: Yeah, and part of that was in the issue, which I'll have to pull up and link here. We kind of outlined what we were saying support meant for the minor versions, and one of the big criteria for that was...
A: ...that we're going to publish test images, a pause image, and any infrastructure images for those Windows OS versions. The other one was that those would also help inform which OSes we ran for the release tests, the release-informing tests, and we didn't want to necessarily roll those forward as each new SAC release came out.
A
So
it
like
here
this
page
is,
is
what
I
think
we're
referring
to,
and
this
is
what's
in
question
this
page
does
get
updated
with
each
release.
So
if
we
go
to
like
the
older
version
here,
we
actually
dropped
a
bunch
of
we
dropped
about.
We
started
to
clean
this
up
and
dropped,
saying
we're
not
supporting
these
older
versions
of
kubernetes
between
119
and
120..
A: All right, go ahead.
A: Okay, yes, that is a good idea. Does anybody else have any comments about that? I think as long as it's in the sig-windows, windows-testing, or sig-windows-tools repo, that should be fine to link to and reference.
E
Yeah,
like
regarding
v120,
should
we
have
20
h2
tested
there
or
not
yeah.
I
think
we
should
like
at
least
at
least
for
the
one
one
or
two
releases,
that
we
say
that
we're
going
to
support
like,
as
of
now,
we
support
1,
18,
19
and
125.
E
So
if
there
is
like
microsoft,
not
supporting
that
particular
sac
version,
people
may
still
go
ahead
and
use
it.
They
may
raise
a
bug
and
we
would
not
have
any
infrastructure
to
test
against
and
then
tell
that
hey.
This
is
going
to
work,
or
this
is
not
going
to
correct.
That's
where
I'm
coming
from.
A: I think that's kind of the issue we were running into: the overlap between the support lifecycles for the Windows OS versions and the Kubernetes versions is hard to keep in sync.
C
Yeah
so
so
at
the
other
point
deep,
I'm
going
to
ask
you
define.
A
Linux,
it
looks
like
we're
mainly
testing.
I
don't
actually
see
any
test
unless
I'm
looking
in
the
wrong
spots.
Any
tests
grid
reports
that
are
doing
that
are
testing
20h2.
Today
I
know
peter
was
working
to
set
that
up.
A: No, so this, I believe, is... yeah, this is testing master. I don't think we have any that are testing on the release branches. So we have...
E: Yeah, I agree. I was more or less concerned about people creating bugs against a version that we did not test. If you say in the support statement that we are going to support two SAC versions, are we going to support that for the last three releases that Kubernetes in general supports? Or are we going to say that we are just going to test the latest SAC releases for the last release, but for the previous ones we do not care?
A: But yeah, let's continue this conversation, and we'll figure out what we can do to support the widest number of configurations here.
A: All right, I'm gonna drop now.