From YouTube: Kubernetes SIG-Windows 20210831
A: So for the 1.23 release there is a soft deadline coming up for enhancements. The enhancements team has asked that people try to get their PRRs, the production readiness reviews, done by this Thursday.
A: That is because there's a pretty small pool of production readiness review reviewers, and they want to make sure that people get adequate time to look at and address feedback.
A: I am currently tracking three enhancements for SIG-Windows: there's HostProcess containers to beta, there's the node service log viewer to alpha, and there is the new OS field / pod admission one, identifying the OS and pod admission, going to alpha. If there are any other enhancements, please let me know so we can get those added and tracked.
A: I haven't seen PRR reviews for any of those. I'll start pinging people more aggressively with the deadline coming up. I don't think that the node service log viewer one needs any other reviews, but we've got David Eads as reviewer for both of the other two KEPs, so I'll just make sure they're in his queue to look at. One other announcement is that later this week we're going to be recording the SIG-Windows maintainer track session for KubeCon North America.
A: If anybody has any kind of announcements or highlights to add, please let me, Jay, James, or anybody else know and we'll make sure that those get added and called out. I'll share the slides a couple of days before we're ready to record, if anybody's interested as well. We're planning to record on Friday, September 3rd, so just let us know if there's anything to call out. Jay's been doing a good job keeping track of shout-outs and announcements too.
A: Does anybody else have any announcements? Otherwise we can get into some of the topics on the agenda.
A: Ravi is going to rejoin, but I think you can continue. All right, James, you've got the first agenda item.
B: Yeah, so we run a nightly job for containerd, and in containerd there was a change checked in that passes the runAsUserName to the pause image, and in the test we have that verifies the container doesn't start, the pod never gets marked as failed. So I dug into it a little bit, and Adeline also looked at this as well, and we noticed that if the sandbox container fails to come online—
B: For some reason, in this case where the runAsUserName doesn't actually exist — another case would be when the OS image mismatches — the pod stays in the Pending state.
B: So the test is failing because we check for the Failed state, but I wanted to know if anybody had any thoughts on whether or not a pod should go into a Failed state if the pause container fails to start up. I guess we maybe don't see this very often on the Linux side, but this is our second use case from the Windows side.
A: So yeah, Aravind, I think you had an issue with this maybe two releases ago too, where there was a pause image that didn't have the matching OS for what you were running on, and I remember you saying it was very difficult or confusing to troubleshoot.
B: Yeah, I think we said that the language between a pause image and an infra container and all those types of names was kind of strange, and so maybe it was something that we could potentially improve, but yeah. I don't think at the time—
B: I noticed that it stayed in a Pending state, but I was going back and looking at it, and the pod stays in a Pending state even though the pause container has failed to start, and that's what's happening here. So before we go fix the test, I wanted to see if we should actually be marking these pods as failed. This may not be something we can answer here; we might need to go to SIG Node and ask there as well.
D: Yeah, so once the pod has landed on the node, the stage — or the phase — in the lifecycle of the pod is what the controllers would look at: is it in the Pending state or not? Because the admission has happened and the kubelet is saying that it's not able to run it. It could be for various reasons, but that's the lifecycle state that we are going to look at. But you think it should be in the Failed state — is that your concern, James?
B: Yeah, so it looks like the error gets reported back up, and you can describe the container and you'll see that there's an error being reported as an event. But there's a section in the kubelet where it goes through and looks at the statuses of all the containers, and it doesn't include the infra container as one of those. So in this case it goes through, looks at all the containers, and says they're all pending.
B: But it didn't look at the pause container, which did have an error state, and so it never gets reported as a failed pod.
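The aggregation being described can be sketched roughly as follows. This is a hypothetical Python illustration of the behavior, not the actual kubelet code (which lives in the Kubernetes Go tree): the phase is derived only from the app containers' statuses, so the sandbox's error never influences the result.

```python
# Hypothetical sketch of the phase computation under discussion. App
# container statuses drive the phase; the sandbox (pause) container's
# status is accepted but deliberately not consulted, mirroring the
# behavior described above.

def pod_phase(app_container_states, sandbox_state=None):
    # sandbox_state is ignored on purpose: that is the gap being discussed
    if any(s == "terminated-error" for s in app_container_states):
        return "Failed"
    if all(s == "running" for s in app_container_states):
        return "Running"
    return "Pending"

# The sandbox failed to start, so every app container is still waiting...
phase = pod_phase(["waiting", "waiting"], sandbox_state="terminated-error")
# ...and the pod reports Pending rather than Failed.
print(phase)  # Pending
```

Under this sketch, a pod whose sandbox can never start stays Pending forever, which matches the test behavior James saw.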
D: Okay, yeah. We usually don't look at the pause container's state, because we assume that it is always going to be available and running — that's not something that we put in the pod spec, right? It looks at the init containers and the containers. So we assume that it is always available, up and running.
A: Yeah, we actually ran into this because we were updating the behavior for how the pause image runs to be more Linux-like: if you set the runAsUserName in the pod spec's Windows security context, now the pause image runs as that. That was a request, because if you set a runAsUser in the Linux security context — or in the pod security context for Linux pods — I believe the pause image, or the sandbox container, will run as that user too.
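The propagation being described — the pod-level Windows username flowing into the sandbox (pause) container — could be sketched like this. The pod-spec field names (`securityContext.windowsOptions.runAsUserName`) are the real Kubernetes ones; the helper itself is hypothetical, not containerd's actual code.

```python
# Hypothetical illustration of propagating the pod-level runAsUserName
# to the sandbox (pause) container. Only the pod-spec field path is real;
# the function is a sketch of the propagation being discussed.

def sandbox_run_as_user(pod_spec):
    """Return the username the sandbox container should run as,
    or None to fall back to the runtime's default."""
    return (pod_spec.get("securityContext", {})
                    .get("windowsOptions", {})
                    .get("runAsUserName"))

pod = {
    "securityContext": {"windowsOptions": {"runAsUserName": "ContainerUser"}},
    "containers": [
        {"name": "app", "image": "mcr.microsoft.com/windows/nanoserver:1809"},
    ],
}

# The pause image now runs as the pod-level user; if that account does not
# exist on the node, the sandbox never comes up (the case James hit).
print(sandbox_run_as_user(pod))  # ContainerUser
```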
E: I remember having to look at that once, but it's not something that is guaranteed to be honored. For example — I don't know, you might know — with CRI-O, if you put a user in the Docker spec and then you start a pod in the kubelet, it probably runs as root, not as the user named in the Docker image. But maybe in Docker, if you run that same container, maybe it actually honors that metadata and will run it as that user. Right, Aravind?
A: Yeah, I'll have to follow up on that. I know that there are different CRI objects, like the sandbox config and the regular container config, but I'm not sure what the exact behaviors are.
C: Yeah, I'm sorry, I'm distracting from the conversation. But I think, coming back to this — so, Ravi, you're saying that on the Linux side, even if the pause image doesn't come up, the underlying container will still be marked as Pending?
D: Yeah, and as far as I understood, the pause image is sort of an infra image. The user should not have any say in it — at least the end user. It's sort of a privileged container, is how I see it, because that's information that needs to be available for all the pods to run on the node.
D: Should it be passed from the user to the infra image as well? That's where I'm a bit confused. Well, I think what—
A: Yeah, I think James just hit a use case where there was a failure that was observable with other kubectl commands, and the pod itself never acted on those failures. So the pod was never going to come up, it was stuck in a Pending state, and kubectl describe would show the error. So should the pod just be marked as Failed at that point?
A: I think that's reasonable. I think we're probably going to need to discuss this with SIG Node to see if there was intention behind doing that, or if, as Ravi mentioned, it's just kind of expected that the pause container — the sandbox container — starts and runs without issue, so maybe it was just an oversight. Again, James, from your use case perspective—
F: Go ahead, Mark. I was just asking James how it's impacting the use case — can you explain that a little bit more?
B: Well, I think the pod itself never goes into a Failed state, and so when you look at the pod it's in Pending, and you have to go do more investigation to figure out why it's in a Pending state. I think it's more of a user experience kind of thing, from my perspective.
C: Well, I think we could see it on the Linux side hypothetically — that's what I think Ravi's point is. Say the pause image on the Linux side, for some reason, is not available at all; I think we'll run into the same scenario. That's what I think Ravi's point was, and I agree with him that yeah, on Linux we'll also see Pending. And I think it might be a good idea to talk to the node folks.
C: And you're right, James — I don't remember looking at the pod status when we had that issue with the pause image mismatched with the host. I was just trying to find the error message. I don't remember the status it could have had; most likely it wasn't Pending forever.
A: Oh, I took a look at this. I think Jay was just suggesting that we add more tests around the scenario. It looks like there was an issue discovered where static pods, if they were created and deleted before some cleanup could happen, would not come up correctly, so we can just add that test coverage. It looks like Jordan and other people are tracking this issue, though.
A: Does anybody have anything else they would like to discuss? Otherwise we can end this and wait for Jay to get back for the pairing session.
C: I have one question, Mark. This is about — I remember us talking about updating the docs for Windows 20H2. Did we ever do that?
C: It's adding support in the docs for 20H2. I thought somebody took an action item to update that in our user-facing docs.
F: What was that — 20H2? I'll confirm it with him. Yes, I know Brandon was working on it; if he doesn't get a chance, I can hopefully do the PR. But yeah, good point, Aravind. Okay.
F: 1809 is another name for 2019 — it's just a licensing difference — but 1809 from a SAC perspective is already deprecated. So at this point, from a SAC perspective, the only two short-term ones that are still supported are 20H1 and 20H2, and they should be supported for Kubernetes as well; they're both tested. From an LTSC perspective, the only one that's being supported is Windows Server 2019.
A: So there's a little bit of confusion there. When Server 2019 was released, it was released simultaneously with Windows Server version 1809, which was a SAC release, and so they were the same kernel version and shared the same bits and everything.
A: The reason why we've referenced it so much was that at the time they were only planning on supporting the nano server image for the lifecycle of the SAC release, which is 18 months, not the LTSC release, which is the five years. I think that was just an oversight, because the nano server image is so useful, but there was no nano server container image based on that that had the ltsc2019 tag — it was only the 1809 tag.
A: So I think a lot of people used them pretty interchangeably there. You can start the container images on either one, but, for example, there are some differences. You can't upgrade a host across channels: once you install Windows Server on the host with either a SAC version or an LTSC version, you can only update within the same kind of licensing channel. So you can't update from, say, Windows Server 2019, which is on the LTSC train, to 2004, for example.
C: Oh, I see. But you could do 1809 to 2004? Yes, okay. I'm just wondering how we should be referring to this when we are, you know, talking to customers — it's a little confusing, 2019 versus 1809. Okay, but I think I understand.
D: So yeah, if that is done, I would like to talk about the KEP that I'm working on. I'm in the implementation stage, and I've broken that KEP down into three pieces. One is the kubelet change, where we have to reconcile the OS and arch labels: currently, if there is a Windows node and you have the OS label on the node object, you can change it to whatever you want.
D: So I have proposed it as an independent change to the current code base so that it can go in quickly; we do not have to wait for the KEP to get merged. I've pinged all the interested folks on it.
D: There is a second part where I need to introduce the API, and I'm working on those changes. But what I've noticed is that Jordan wants to remove the code within the kubelet where we are stripping down unnecessary fields from the security contexts, because he wants the defaulting not to happen at the API server admission level itself.
D: So if someone has knowledge of where these defaultings are happening — especially for the security context, or Linux-specific constraints that are actually getting defaulted even for Windows pods — I can make the change as part of the PR that I'm creating.
D: Yeah, so the side effect of it is that we do not need the stripping of unnecessary fields in the kubelet code.
A: Okay, yeah, we can look at that. I'm trying to find your PR right now to put it in the chat.
D: The kubelet would look at the OS field on the pod and, if it exists and it mismatches the node label, it is going to reject the pod during admission time on the kubelet. So there are going to be three moving pieces in it: the first and third changes are related to the kubelet; the second one is related to the API.
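The kubelet-side check being described might look roughly like this. It is an illustrative sketch under the KEP's semantics, not the real Go implementation; `kubernetes.io/os` is the standard well-known node label, and the field path follows the pod spec's `os.name`.

```python
# Illustrative sketch of OS-based pod admission on the kubelet: if
# pod.spec.os.name is set and disagrees with the node's kubernetes.io/os
# label, the pod is rejected at admission time. Not the real implementation.

def admit_pod(pod_spec, node_labels):
    pod_os = (pod_spec.get("os") or {}).get("name")
    node_os = node_labels.get("kubernetes.io/os")
    if pod_os and node_os and pod_os != node_os:
        return False, f"pod OS {pod_os!r} does not match node OS {node_os!r}"
    return True, ""  # no OS declared (or it matches): admit as before

ok, reason = admit_pod({"os": {"name": "linux"}},
                       {"kubernetes.io/os": "windows"})
print(ok, reason)
```

A pod that leaves `os` unset is admitted unchanged, which is why reconciling the node's OS/arch labels (the first piece) matters: the check is only as trustworthy as the label.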
D: Well, the change that I introduced is because we have an admission controller in OpenShift which does this, but it is not available in core Kubernetes. But Jordan says that there are certain fields that are getting defaulted — even Derek mentioned the same, right — so we can remove those, is what both of them suggest. But I'm not sure where that is happening.
C: Ravi, are you aware that there are the runAsUser and runAsUserName ones? Do we know that?
C: So I think in OpenShift, for example — I think in 4.8 for sure — for a Windows pod we would be applying both of them.
D: Yeah, once we have the OS field — I mean, I also need to find the source where that is actually getting applied: whether it's at admission, like our admission plugin, or it's something that Kubernetes itself, the open source distribution, is doing. I'm not sure of that.
A: I think what Aravind just mentioned was what I was thinking of, or something like it. Some users had some admission plugin that was adding fields to all the pod specs, and it was breaking Windows pods, so then we updated the kubelet to strip those fields if it was on Windows. But I'm not sure of any defaulting in the kubelet if those fields aren't coming in as part of the pod already.
D: Yeah, and the defaulting is not on the kubelet side; the defaulting might happen at the API, at the admission plugin level, or within the validation. Like, say, if there is some field, there is a default value that is getting applied to that field in the pod spec. So those two are the places that I'm thinking of.
I: Can I ask a question? Yeah, so this is Jing from the Microsoft Windows container team, and as all of you know, we are going to announce 2022 GA very soon. So we just want to ask the community: we intend to support containerd only, as the container runtime for Server 2022.
I: So we just want to give a heads-up to the community and see if there are any concerns or comments on this — containerd being the only runtime supported.
C: I think it depends on the tests, right? The tests that we are running right now for component tests are 1.20 at the oldest, and the latest are 1.21 and 1.22 — is that the case? Yeah, we need to decide.
J: Hello, yeah, this is Brahimia from the GKE Windows team as well. I think we have containerd starting from 1.21; I'm not sure — Jing is asking about Windows Server 2022 here, is that the case, or—?
I: Yeah, this is separate, I think — it's for containerd support.
A: I'm also a little bit — can you clarify what exactly you mean by the support? Because I think it's up to each kind of infrastructure provider to determine what they want to support in their offerings. Are we saying that Docker won't work on Windows Server 2022, or that it's just not going to be a focus? What exactly are you grouping under "support" here?
J: I believe Jing's question is more: with Windows Server 2022, can we just have containerd being the only supported runtime, or are we going to keep maintaining both streams that we're doing now? But yeah, she can correct me if I'm wrong.