From YouTube: Kubernetes SIG Windows 20191015
A: One of the most important ones: today is the enhancement freeze for 1.17. So if you have any items that need to graduate from alpha to beta or beta to stable, please make sure that they're tracked appropriately, and just add a note on the list there, because I'll be double-checking those later today.
A: All right, well, first up on the list is a couple of questions that Peter added. The question was around, I guess... maybe I should give all the background behind this real quick. Today, Windows has two different release types: there are the Long-Term Servicing Channel releases, which are supported for five years of what they call mainstream support, plus five years of extended support, and Windows Server 2019 is one of those LTSC releases.
A: Where these two things collided is that when Windows Server 2019 was released, the Nano Server container image was published only as a Semi-Annual Channel image with 1809. The kernel and everything inside is the same, but from a support standpoint, the intention was to only support the Nano Server image for 18 months, under the assumption that people would just move to the next Semi-Annual Channel release, the next one being 1903, or presumably there will be one this fall; I don't know what the final name is, but people can move forward to that.
A: The problem is that the container image can't be newer than the host that you're running it on, and so if you're running Windows Server 2019, you can't just deploy the Nano Server 1903 or 1909 image; it's not going to work. So I summarized this in a doc that's listed here, and I was trying to list out the potential impacts that we had from that, and so I've got that open. But since Peter opened this as an item under the agenda: does that make sense?
A: The other thing I'm not real clear on, and Claudia asked me this in the Slack channel as well, is whether or not the Nano Server image was going to be deleted or just simply not supported. Because if it's just for the purpose of, you know, running a test case, and there's not really any security exposure, maybe we could get a little bit longer out of it. But it would kind of be a bit of a gamble there.
A: And then I'm going to try to get someone to look at the CPU spikes that they talked about on Server Core that were affecting the autoscaling tests. The background on that one is that, the way the autoscaling tests are written today, they've got some containers that generate load pretty quickly after they're started, and as the Server Core image comes up...
A: It brings up the Windows Service Control Manager, which launches background processes, and that itself causes a short CPU spike, which was enough to trigger the autoscaler. So I need to see if there's any way that can be mitigated; otherwise, the test cases would need to be updated to have a higher autoscaling threshold, to avoid scaling based on the Service Control Manager starting up as opposed to the actual workload itself.
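The threshold change being described could be sketched, for illustration only, as a HorizontalPodAutoscaler whose CPU target is raised high enough to ride out the brief Service Control Manager startup spike. The deployment name and the 80% figure are assumptions, not values from the meeting:

```yaml
# Sketch only: raise the CPU utilization target so a short startup spike
# from the Windows Service Control Manager does not trigger a scale-out.
# Names and numbers here are hypothetical.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: load-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: load-test
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80  # above the startup spike, below sustained load
```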
A: Anyway, so within that doc I've got a couple of options outlined. One of them, the obvious one, is: well, we can move over to Server Core. That's got a few issues that we just talked about, that Claudia called out. Or we could potentially move towards dropping support for Windows Server 2019, but that would be... well, I don't know. My assumption is that people would not like that.
A: Okay, so moving on. As I mentioned earlier, the 1.17 enhancement freeze is today. So last week we went through a doc I had; I was talking about some of how we could address multi-arch (multi-architecture) and multi-OS-version support, and I had some great comments on that from different people in this SIG, as well as in SIG Node. So what I've done based on that was, I kind of took...
A: The main thing that I wanted to clarify was that these were still consistent with what we agreed, and whether you had any other concerns about it. So the three key points were: first, I wanted to add an OS version label automatically from the kubelet, so that Windows nodes could be identified by version without the admin having to do some extra work. I went back through the previous proposals around node label filtering, and there was sort of a... I guess.
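As a sketch of what that automatic labeling could look like, a Windows node as registered by the kubelet might carry something like the following. The build label key and value are assumptions based on the proposal being discussed, not confirmed names:

```yaml
# Illustration only: labels the kubelet could set automatically on a
# Windows node so it can be selected by OS version without the admin
# labeling nodes by hand. The build label key here is an assumption.
apiVersion: v1
kind: Node
metadata:
  name: win-node-01
  labels:
    kubernetes.io/os: windows
    kubernetes.io/arch: amd64
    node.kubernetes.io/windows-build: "10.0.17763"  # Windows Server 2019 / 1809
```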
A: The second thing was updating the CRI pull API to also include RuntimeHandler. Today, if we look at the RunPodSandboxRequest, that takes in the PodSandboxConfig and also the runtime handler. The runtime handler is what corresponds to the containerd (or whatever the runtime's) configuration, and each one of those is going to have a different set of runtime parameters applied that may be specific to that runtime. So that way, not every single change has to be made in the Kubernetes API.
A: And so I think that'll work from a consistency standpoint. Initially I wanted to put it directly into PodSandboxConfig, but because all the other uses had already added it as a new field, my assumption is that they had done that for some backwards-compatibility reason: it's assumed that when you add a new field, if it's not present, you should just use a reasonable default, so that a request from an older caller still behaves sensibly.
A: So here's an example of how that could be used to create two different runtime classes that would use the RuntimeClass scheduler to automatically apply selectors based on the Windows OS version. What's outlined here would actually work in 1.16 today, but you'd have to manually set these labels on each node, and so this part of the proposal would be to automatically set those in 1.17, so that it would be easier for things like this to just, you know, work without any additional changes. The second example I had in there was...
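The on-screen example isn't captured in the transcript, but a RuntimeClass of the shape being described might look like this sketch. The handler name and build label are assumptions; RuntimeClass scheduling was available as beta in 1.16:

```yaml
# Sketch: a RuntimeClass that steers pods to Windows Server 2019 (1809)
# nodes via node selectors. Handler name and label key are assumed.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-1809
handler: runhcs-wcow-process  # hypothetical containerd runtime handler
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"
```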
A: Then, you know, we could have a separate runtime class for that, but then instead constrain it to the new Windows OS version, and so that would make it where you could basically test this side by side along with the version up here. So you'd either have a runtime class of windows-1809 or windows-1809-hyperv, where the hyperv one would run on the new version with Hyper-V isolation.
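A side-by-side variant along those lines might look like the following sketch: a second RuntimeClass whose selectors target the newer OS build and whose handler runs 1809 containers under Hyper-V isolation. Again, the handler name and build number are assumptions for illustration:

```yaml
# Sketch: a second RuntimeClass that runs 1809-based containers on newer
# hosts under Hyper-V isolation. Handler and build values are assumed.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-1809-hyperv
handler: runhcs-wcow-hypervisor-1809  # hypothetical Hyper-V handler name
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.18362"  # 1903 hosts
```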
A: And then, if we wanted to remove the old 1809 nodes, you could just go update the existing windows-1809 runtime class to match these; then everything you schedule from that point on would just pick up the new node selectors and the new runtime handler. So that way they could steer it onto the new nodes with the new configuration.
A: Yeah, because the runtime handler, we need it to differ. It's like this example here: if you said use the default handler there, and if we set the default handler in containerd to match the same OS with process isolation, then this would basically select 1903 and then place the workload on it.
A: Yeah, yeah, you can only specify one handler. I mean, the other alternative is we could use annotations, but the annotations would need to be applied on the pod itself, and so then the rest of the benefits of runtime class would no longer work there, because we'd have to do something different. Yeah. So here's an example here.
A: Yeah, and then there's also guidance, if you go and look at the CRI API, about annotations. I mean, the guidance was that whenever possible, runtime authors should consider proposing new typed fields instead of using annotations, and they sort of say, you know, annotations should not influence runtime behavior.
A: And so that's why, I mean, my belief was that this should go into a runtime class instead, and if you look at other usage today, this is how people are switching between running containerd with the normal shim and running containerd with Kata. So if you go look at the Kata documentation, this is the same way they're doing it.
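For reference, the selection mechanism being compared to Kata here is the pod-level `runtimeClassName` field: a workload opts into a runtime handler indirectly, by naming a RuntimeClass. A rough sketch, with hypothetical pod and class names:

```yaml
# Sketch: a pod selects a runtime handler indirectly through its
# RuntimeClass; the Kata docs use the same pattern. Names are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  runtimeClassName: windows-1809  # must match an existing RuntimeClass
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-1809
```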
A: So for kubeadm, I think the open question was around testing. Were there any other things that needed to be changed on that one? Have you had a chance to look at that, Claudia?
E: Sorry, I was... yeah, so I talked to Michael about it, and I haven't tested this out, but I talked to Lubomir, and we think the kubeadm upgrade part might just work, because pretty much all that does is fetch the new ConfigMap for the kubelet. But the scripts would have to be updated for upgrade, which, if kubeadm works, should be trivial. And then there's the question of how you would handle different arguments from version to version, and how the scripts would handle that.
A: Okay, all right. I think that's all the ones that we need to track for updates. So I'll make sure those items are marked appropriately later today and get that going, and then, if anyone can review that KEP, I'd greatly appreciate it, so we can get a couple of LGTMs on it and get it merged. I'm going to go take that over to SIG Node right now. So, all right!