From YouTube: Kubernetes SIG Windows 20191008
A: Hello, everybody, and welcome to another SIG Windows meeting. It's the 8th of October and, as always, please adhere to the CNCF code of conduct. This is a recorded meeting. All right, let's dive in first into the document that Patrick created. I'd like to thank Patrick for basically doing all the work of capturing the different problems around multi-architecture, multi-platform container images and how you can make them run better in Kubernetes.
B: Another object that's part of the Kubernetes API was promoted to beta, and it gives you the ability to put node selectors and tolerations on, based on the runtime class that was requested. That means you can simplify a deployment so that, instead of needing to put all the node selectors and tolerations in it, you can just put the runtime class name within the pod spec itself. That way the cluster admin can basically say: here's a list of runtime classes that we have on the cluster.
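For reference, a rough sketch of what that flow looks like (node.k8s.io/v1beta1 was the current API at the time; the handler name and image tag below are illustrative placeholders, not values from this meeting):

```yaml
# RuntimeClass published by the cluster admin; the scheduling section is the
# new beta piece that carries the node selector and toleration.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-hyperv            # name workloads will reference
handler: runhcs-wcow-hypervisor   # illustrative handler configured on the node's CRI runtime
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: os
    operator: Equal
    value: windows
    effect: NoSchedule
---
# A workload only names the RuntimeClass; the selector and toleration above
# are merged into the pod at admission time.
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  runtimeClassName: windows-hyperv
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```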
B: You pick one of those, and then that's how those configurations get applied. But all the specific things, like the memory size or the number of CPU cores used by the sandbox, are not stored as part of the runtime class; that's actually configured on the node itself. So the first step I did was a PR to the docs that we already had there, on the Windows scheduling guidelines.
B: So, moving forward, one of the key questions I'm working through is when we want to start enabling isolation like Hyper-V and use that to solve some of the difficulties we have with dealing with multiple versions of Windows. Using the runtime class handler is a way that we could address that without API changes to the pod object itself, and the reason that works is because we could basically specify a runtime handler that says "enable Hyper-V" and then create a sandbox using a particular Windows version.
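To make that concrete, the idea is roughly one handler per Windows build, with Hyper-V isolation baked into the handler rather than into the pod API. A sketch, with purely hypothetical handler names that would have to match the node's containerd configuration:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-hyperv-1809
handler: runhcs-wcow-hypervisor-1809   # hypothetical: Hyper-V sandbox built from a 1809 utility VM image
---
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-hyperv-1903
handler: runhcs-wcow-hypervisor-1903   # hypothetical: same idea, but a 1903 sandbox
```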
B: This would mean that the concerns they raised are still valid, but it's something that a cluster admin would opt into by deploying an admission controller. So they could say: I want some magic to make it easier for customers, that is, the developers and users deploying applications, to get the best fit on a node.
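As a hedged sketch of that opt-in shape (all names, the namespace, and the backing service here are hypothetical, not something proposed in the meeting), the admission controller would typically be a mutating webhook that fills in the runtime class or selectors on incoming pods:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: windows-runtimeclass-defaulter
webhooks:
- name: defaulter.windows.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: kube-system
      name: windows-runtimeclass-defaulter   # hypothetical service implementing the "magic"
      path: /mutate
  failurePolicy: Ignore    # opt-in convenience: failures fall back to normal scheduling
```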
B: One kind of mitigation to that is that the cluster admin could still say: here's a list of best practices, and if you follow all these node selectors and so on appropriately, your workload schedules faster; otherwise it might get slowed down. At least it takes the trade-off out of the Kubernetes code and into another project, but it does create some extra work there.
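A sketch of what those published best practices might boil down to for a workload author. The build label here is a made-up example.com name, since nothing under kubernetes.io existed for it at the time:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
    example.com/windows-build: "10.0.17763"   # admin-applied label, not standardized
  tolerations:
  - key: os
    operator: Equal
    value: windows
    effect: NoSchedule
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```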
B: The third option is basically continuing to take advantage of the node selectors that we have there today, then taking the extra things that are needed for isolation and doing that in an opaque manner using the handler field. What this would mean is an admin would go create, you know, a plethora of runtime classes.
B: It might be done on a per-Windows-version basis, and then they might multiply that based on some other hypervisor parameters, like the number of cores they want to allocate to that VM, and so you could quickly get a very big matrix that's difficult to maintain and not portable.
A: I'm sympathetic to that. I do agree that in a nightmare scenario they might have multiple Windows versions, with Hyper-V or without Hyper-V, and it could be complicated. But in any realistic production scenario they're very, very likely going to have a fairly homogeneous environment, right? There are three Windows OS versions in production; in a single cluster they might have one or two OS versions maximum, and very likely just one.
B: So the problem comes when you go to deploy a new Windows version: you basically have to update these runtime classes, and that's going to change the behavior of where things run. And if you're not creating new runtime classes as you do that, then you're basically in a situation where you're going to change the behavior of how things are getting deployed and then hope nothing breaks.
B: The other problem we need to solve is that, for the Windows versions, we still don't have any sort of standard way to tag those on Windows nodes. We've proposed using something that's not under kubernetes.io as a label that an admin could just choose to apply on their own, and so one of the things I'd like to consider doing is actually having the kubelet report the kernel version in a new standard field.
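For context, the kubelet already reports this information, just not anywhere a scheduling constraint can reach it; roughly, with illustrative values and the same made-up admin label as above:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: win-node-1
  labels:
    kubernetes.io/os: windows
    example.com/windows-build: "10.0.17763"   # only exists if an admin applies it by hand
status:
  nodeInfo:
    operatingSystem: windows
    kernelVersion: 10.0.17763.805              # reported today, but not usable in a node selector
```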
B: The way the node selectors work, I believe you can only match on a specific version, and so that means that if you wanted to move from running a particular container version, like 1709 or 1809, onto using Hyper-V isolation on a new version of Windows, you'd have to go change your runtime class, which means nothing would be scheduled to your old nodes anymore; every new deployment would go on from there, because these labels (sorry, these node selectors) would only select the new version.
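Because a plain nodeSelector is an exact label match, straddling two builds during a migration would need node affinity with an In expression instead. A sketch at the pod level, using the same made-up build label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/windows-build
            operator: In
            values: ["10.0.17763", "10.0.18362"]   # old and new builds both eligible
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```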
B: And so, to avoid that problem, we'd basically have to go back into the runtime class expansion and then use some other field that's there to be able to steer workloads around on a per-app basis. So we could work around it, but it's something that continues to explode the number of runtime classes that are needed.
B: So it gives us something that's deterministic all the way through, but lets us figure out what the right set of things is that we want to control at the containerd layer, without having to go and break the Kubernetes API while we're doing that. And if we do want to do some experiments around a mutating admission controller, then that gives us the ability to do that as well.
B: So, if we're happy with the way that works and we've got a reasonable set of runtime classes that are going to work for most scenarios, then we could just go ahead and say, when we graduate to stable, here are the runtime classes that are recommended as defaults, and then work to make sure those are used uniformly across multiple distributions and/or cloud providers. I think that's probably the easiest thing for 1.17.
B: If we realize that the runtime class expansion is too difficult and not clean enough, then we could still go back to SIG Node and look at the alternate approach of actually moving the OS version and architecture into the pod spec itself. So this doesn't preclude that, but I think it at least gives us a path forward that we can make progress on in 1.17 without getting blocked.
B: The other thing is that I don't have the people to do it. So, in addition to getting the spec done between SIG Windows and SIG Node, we'd have to find a few more volunteers to step up and make those changes, because when we update the API, that means we also need to update the API server.
A: Okay, so I guess the next step is for everybody from SIG Windows, if you haven't reviewed the spec, this document, to go ahead and do that. Then, depending on how the conversation goes in SIG Node after that, you'll add some more comments and edits to the doc, and if SIG Node is not getting behind this, in terms of amending the pod spec with the new fields for OS and architecture, then you'll move forward with the runtime class option just for 1.17 and re-evaluate after that.
B: So we can move forward; we're going to move forward with runtime class either way. But if we don't get the update to the CRI pulled in, that means Hyper-V is out; I mean, we wouldn't be able to get the benefit of running multiple versions on the same cluster. The only other workaround we had before was using an annotation, but that was done as a hack in dockershim, because at that point it still had a copy of the deployment spec.
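For reference, that dockershim-era hack was a per-pod annotation, roughly of this shape (the surrounding pod is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
  annotations:
    experimental.windows.kubernetes.io/isolation-type: hyperv   # alpha annotation honored only by dockershim
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```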
A: Got it. So yeah, I won't be able to attend the SIG Node meeting after this, but let us know how it goes and whether there's anything there. I'm assuming they also have issues with resourcing, but let's see what they say, and if it's something small that we can make an investment in and it's just a matter of convincing, then I'd say it's the right project to take on.
C: I just looked at the two GMSA PRs. It looks like for the testing one, Adelina provided some feedback and it's been LGTM'd, so I think you're good, right? Yeah, okay, excellent. So it'll be a matter of just enabling the e2e tests, doing some more sanity runs, and enabling it. For the other docs PR that's been languishing for a while, I just saw some updates just before this meeting from one of the reviewers; I'll address that and get it moving.
C: The other thing I had after that: we are working on the CSI proxy stuff, and we were wondering, basically, the context here is that all the CSI projects are driven through Prow as part of their release and testing. I know we've discussed this a little bit before, but was there any guidance around how to enable Windows nodes for Prow?
B: So today, Prow with the Azure provider is basically spinning up Azure VMs, and that works for test passes that are orchestrated through Testgrid. But we don't have anything where, at the time of doing a Kubernetes build, a Windows node is provisioned for either building containers or doing unit tests.
B: I mean, we could definitely do that, but the thing here is that we were trying to get something that was all triggered and orchestrated through Prow, so that things like pushing the test images and promoting them into the GCR repositories all just worked. And so, if we have a separate, desynchronized process, that doesn't get rid of the manual steps.