From YouTube: Kubernetes SIG Windows 20191210
A: Hello, everybody, and welcome to another SIG Windows meeting. It's the 10th of December; we'll have only one more meeting this year before we kind of close for the break. Thank you all for attending, and as always, this is a recorded meeting, so please adhere to the CNCF code of conduct, and let's get started. So, first order of business: we submitted the release notes for 1.17 yesterday. We highlighted only a couple of things from SIG Windows.
A: We highlighted support for 1903 and 1809; we highlighted RunAsUserName moving to beta; the RuntimeClass documentation that Patrick worked on (I don't know if it's merged yet, Patrick; when I checked yesterday it wasn't, but maybe it is today) is also highlighted; and the last thing is the label we're adding with the build number for the Windows nodes that are part of the Kubernetes cluster.
B: Yeah, so we had a more in-depth discussion about this internally. I worked with someone who went and actually dug into the Windows source code a bit more, and we found that some of the info that was needed was actually documented publicly. So, if we skip down to the job object CPU rate control hard cap: basically, what we found concerns the Windows scheduler when you're dealing with process isolation.
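The hard cap being discussed here is the Win32 job object CPU rate control. A minimal Go sketch of the structure and flags involved; the struct and constant values are hand-copied from the Windows SDK's winnt.h rather than taken from any Go library:

```go
package main

import "fmt"

// Mirrors JOBOBJECT_CPU_RATE_CONTROL_INFORMATION from winnt.h.
// CpuRate is the portion of processor cycles the job's threads may use
// per scheduling interval, in units of cycles per 10,000 (so 2500 means
// a 25% hard cap across the machine's CPUs).
type jobObjectCPURateControlInformation struct {
	ControlFlags uint32
	CpuRate      uint32 // interpreted as a rate when WEIGHT_BASED is not set
}

const (
	jobObjectCPURateControlEnable      = 0x1 // turn rate control on
	jobObjectCPURateControlWeightBased = 0x2 // interpret the value as a weight (1-9) instead
	jobObjectCPURateControlHardCap     = 0x4 // enforce CpuRate as a hard cap
)

func main() {
	// A 25% hard cap: ENABLE | HARD_CAP with CpuRate = 2500.
	info := jobObjectCPURateControlInformation{
		ControlFlags: jobObjectCPURateControlEnable | jobObjectCPURateControlHardCap,
		CpuRate:      2500,
	}
	fmt.Printf("%+v\n", info)
}
```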
B: And for some reason (I'm not sure what the reason was, because I didn't trace far enough into the Docker source), the way that Docker is implemented today, if you specify both a CPU weight and a CPU hard cap (a CPU maximum) in the Docker API, it logs a warning and applies the weight, not the maximum. I don't think that's the right thing to do for Kubernetes users, because the API says that it is a CPU maximum, and so I think the behavior that we should implement for users is that when a user sets a limit on CPU for a container, that translates to a Windows hard limit. And so if they say, you know, 250m, of course, that's 25% of a CPU.
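As a sketch of that translation (the function below is hypothetical, though it mirrors the shape of the kubelet's Windows helper): a millicore limit becomes a hard cap expressed in units of 1/10,000 of the node's total CPU capacity.

```go
package main

import (
	"fmt"
	"runtime"
)

// milliCPUToWindowsCPUMaximum converts a Kubernetes CPU limit in
// millicores into a Windows cpu_maximum value, expressed in units of
// 1/10,000 of the node's total CPU capacity.
func milliCPUToWindowsCPUMaximum(milliCPU int64) int64 {
	cpuMaximum := 10000 * milliCPU / int64(runtime.NumCPU()) / 1000
	if cpuMaximum < 1 {
		cpuMaximum = 1 // floor: the smallest enforceable cap
	}
	if cpuMaximum > 10000 {
		cpuMaximum = 10000 // can't cap above 100% of the node
	}
	return cpuMaximum
}

func main() {
	// A 250m limit on an N-CPU node is 2500/N in these units;
	// on a single-CPU node that is 2500, i.e. a 25% hard cap.
	fmt.Println(milliCPUToWindowsCPUMaximum(250))
}
```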
B: But I don't think that we actually need to take a dependency on that new Docker API, because behind the scenes it does the same thing as CPU maximum, which is already there, and CPU maximum is defined in the CRI for containerd as well as Docker. So if we change the kubelet behavior to only set CPU maximum, then I believe it should work for both containerd and Docker. So I opened up an alternative PR for that and plan to test it, but I guess what I'm looking for from the SIG is, you know...
B: So today the code tries to translate that limit into setting both a CPU maximum and a weight. The problem is that, because both are set, the weight takes precedence and the maximum has no effect, and so I think the right change is to actually not set the weight. That would cause the limit (the CPU maximum) to take effect instead of the weight.
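A minimal sketch of the proposed change, assuming the CRI's WindowsContainerResources field names (the struct here is a hand-written stand-in for illustration, not the generated API type):

```go
package main

import "fmt"

// windowsContainerResources mirrors the CRI WindowsContainerResources
// message that both dockershim and containerd consume.
type windowsContainerResources struct {
	CpuShares          int64 // relative weight; runtimes prefer this if set
	CpuCount           int64
	CpuMaximum         int64 // hard cap, in units of 1/10,000 of total CPU
	MemoryLimitInBytes int64
}

func main() {
	// Today (problematic): both the weight and the cap are set, so the
	// weight wins and the hard cap is silently ignored.
	before := windowsContainerResources{CpuShares: 5000, CpuMaximum: 2500}

	// Proposed: leave CpuShares unset so the hard cap takes effect.
	after := windowsContainerResources{CpuMaximum: 2500}

	fmt.Printf("before: %+v\nafter:  %+v\n", before, after)
}
```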
D: For you, the scheduler will take it into account, right? Yes, but maybe... I think that's what he's saying: okay, good, but it doesn't actually, you know, kill it; it doesn't kill the process, it just throttles it, right? So, yeah, you probably wouldn't notice it unless you're actually monitoring CPU usage or something.
B: If you do set a limit with no request, the scheduler will still subtract those; it'll assume the request is equal to the limit, and it will subtract that from allocatable. And so the behavior you could see is that a node could be fully scheduled, to where all cores and memory should be used, but if you were to go look at the cap on an individual container, you would see that its CPU usage could exceed the limit that was there.
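A worked example of that accounting, with made-up numbers:

```go
package main

import "fmt"

func main() {
	// Hypothetical node: 4000m of allocatable CPU.
	allocatable := int64(4000)

	// 16 pods, each with a 250m CPU limit and no request. The scheduler
	// defaults each request to the limit, so it books 16 * 250m = 4000m
	// and considers the node fully scheduled.
	pods, limit := int64(16), int64(250)
	booked := pods * limit
	fmt.Printf("booked %dm of %dm allocatable\n", booked, allocatable)

	// But if the hard cap is never actually applied on Windows, any one
	// of those containers can still burst past 250m of real CPU usage,
	// so actual usage is bounded only by the machine, not by the limits.
}
```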
B: Yeah, and so I haven't dug fully into that, but the way it's implemented on Linux is they use CFS, which is supposed to be the Completely Fair Scheduler, and basically they pick how many shares there are for the system (I think Kubernetes fixes that at a thousand), and then it's based on the CPU allocations that you give it.
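For comparison, a sketch of the Linux-side conversion from millicores to CFS shares and quota; the constants follow the kubelet's defaults, but this standalone version is illustrative:

```go
package main

import "fmt"

const (
	milliCPUToCPU = 1000
	sharesPerCPU  = 1024   // cgroup cpu.shares granted per full CPU
	minShares     = 2      // kernel-imposed floor on shares
	quotaPeriod   = 100000 // CFS period in microseconds
	minQuota      = 1000   // kernel-imposed floor on quota
)

// milliCPUToShares converts a CPU request into cpu.shares, a relative
// weight that only matters when the node is under contention.
func milliCPUToShares(milliCPU int64) int64 {
	shares := milliCPU * sharesPerCPU / milliCPUToCPU
	if shares < minShares {
		return minShares
	}
	return shares
}

// milliCPUToQuota converts a CPU limit into a CFS quota: microseconds
// of CPU time allowed per period, i.e. a hard throttle.
func milliCPUToQuota(milliCPU int64) int64 {
	quota := milliCPU * quotaPeriod / milliCPUToCPU
	if quota < minQuota {
		return minQuota
	}
	return quota
}

func main() {
	// 250m: shares=256 (weight), quota=25000us per 100000us period (25%).
	fmt.Println(milliCPUToShares(250), milliCPUToQuota(250))
}
```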
D: So a bunch of people, myself included, have run into this problem with the kubeadm scripts, and I guess just overlay networking in general. There are two important things here. With the kubeadm scripts in sig-windows, they will install a version of flannel that doesn't have the necessary code to allow pod readiness checks to work, where you need to curl from the host to the container subnet. That was, like, a known issue, but it's still confusing, because it's not like there's a release...
D: The problem is there's no released version of flannel that has these changes from Kalia. So I was proposing (I think the PR is still open) to change to, like, a custom binary that I built myself, but I think David was saying there was feedback that having a bunch of custom binaries is not very good, which I understand too. It's kind of a lose-lose, and it seems like the lesser of two evils, but yeah.
D: Still, it being broken out of the box seems suboptimal.
H: So why aren't we just pushing that? Like, why are we going through all these workarounds? Why don't we just push flannel to do that? Sorry, I don't mean that in a rude way; I'm just kind of trying to catch up here. So, is flannel... are there other questions on the PR, or...?
A: I think they had a little sway. Let me actually ping a couple of... let's try a couple of avenues independently. So I'll try to ping Rancher; Ben and David, if you guys can lean on... (I didn't catch the name of the person that you guys talked about earlier), let's see if I can get them to spin a release.
D: We're still... let's see. So we've been working on cloud-init, using cloudbase-init (sorry, I got those words backwards), and we got it working on AWS, but there were some features of cloudbase-init that are missing for compatibility with cloud-init, which is, like, the Linux version. So we submitted a PR to add, like, the Python templating features that are used by Cluster API, and we have some people from Cloudbase looking at that.
C: Jing from Google is starting to take a look at how to set up, like, CI infrastructure around it. We've been working to get the binary up and to set up, like, a Windows machine through a framework called, I think, Boskos, that allows you to spin up GCP machines in, like, a pool of test projects. So she started looking into that, and into the SMB side of the APIs, to allow CSI drivers that want to mount an SMB volume, like Azure File and stuff.