From YouTube: Kubernetes SIG Windows 20191203
A
Hello, everybody, and welcome to another SIG Windows meeting. It's the 3rd of December, our first meeting of December, as we start to ramp down for the end of the year. Thank you all for attending; hopefully everybody had a great Thanksgiving. First order of business: Mark Rossetti is becoming one of our newest contributors to SIG Windows. He has a PR open to become a member of the Kubernetes org. Thank you, Mark, for all your contributions, and we look forward to a lot more.
D
Yes, so I talked to Adelina about adding 1903 to testgrid; we're going to do that one first. The thing we're working on right now is basically getting things tightened down and adding some cleanup scripts to our subscription, because we don't have enough capacity to run it. Once that's done, probably sometime this week, maybe we can get something running for next week. Basically, it doesn't look like we need any code changes.
D
No, I think those are the big ones. There's a whole lot of test work in flight, but the big thing we still need to get closed on, on the test infrastructure side, is getting the automatic image promotion working. I know that Claudiu's got some PRs that have merged, and that work is getting closer to all being merged, but we still need to work with SIG Testing to get that stuff actually hooked up on the build side.
D
Sure, so a couple of people have asked about that recently, and there's actually a PR open; I've got links there. The proposed change is adding a new CRI field on the Windows container config for nano cores, and based on the screenshots they have, it seems to be working. The thing I'm still checking on in this PR is how the different CPU settings interact.
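For orientation, here is a minimal sketch of what the Windows container resources config with the proposed field might look like. The first four fields mirror the existing CRI WindowsContainerResources message; the nano cores field name and placement are assumptions based on this discussion, not the merged API.

```go
// Sketch of the Windows container resources config under discussion.
// CpuNanoCores is the proposed addition; its name is a guess.
package main

import "fmt"

type WindowsContainerResources struct {
	CpuShares          int64 // relative CPU weight
	CpuCount           int64 // number of CPUs available to the container
	CpuMaximum         int64 // portion of CPU cycles, in units of 1/10000
	MemoryLimitInBytes int64 // memory limit
	CpuNanoCores       int64 // proposed: CPU in units of 10^-9 cores (hypothetical name)
}

func main() {
	// Request half a core via the proposed field.
	r := WindowsContainerResources{CpuNanoCores: 500_000_000}
	fmt.Printf("%+v\n", r)
}
```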
D
There are separate configurations for CPU shares, CPU count, and nano CPUs, and I believe there's actually an order of precedence, or validation, that needs to be done that's not implemented yet. I know that in the case of Docker, if you specify both the CPU maximum and the CPU shares, it ignores one of them and then logs a warning about it. In containerd that behavior is actually enforced, so if you specify both of them, it's rejected.
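A minimal sketch of the two behaviors just described, assuming illustrative field names and an assumed precedence order (this is not the actual dockershim or containerd code):

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// cpuConfig stands in for the conflicting Windows CPU settings discussed
// above; the field names are illustrative, not the real CRI fields.
type cpuConfig struct {
	Shares    int64 // relative weight
	Count     int64 // whole CPUs
	NanoCores int64 // proposed nano-cores setting (hypothetical)
}

func setFields(c cpuConfig) int {
	n := 0
	for _, v := range []int64{c.Shares, c.Count, c.NanoCores} {
		if v != 0 {
			n++
		}
	}
	return n
}

// validateStrict models the containerd-style behavior: conflicting CPU
// settings are rejected outright.
func validateStrict(c cpuConfig) error {
	if setFields(c) > 1 {
		return errors.New("only one of CPU shares, count, or nano cores may be set")
	}
	return nil
}

// resolveLenient models the Docker-style behavior: log a warning and keep
// only one setting. The precedence order below is an assumption.
func resolveLenient(c cpuConfig) cpuConfig {
	if setFields(c) <= 1 {
		return c
	}
	log.Println("warning: conflicting CPU settings; applying precedence")
	switch {
	case c.NanoCores != 0:
		return cpuConfig{NanoCores: c.NanoCores}
	case c.Count != 0:
		return cpuConfig{Count: c.Count}
	default:
		return cpuConfig{Shares: c.Shares}
	}
}

func main() {
	c := cpuConfig{Shares: 5000, NanoCores: 500_000_000}
	fmt.Println(validateStrict(c)) // error: conflicting settings
	fmt.Printf("%+v\n", resolveLenient(c))
}
```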
D
So I've got a thread started with some folks on the Windows team, but before we merge this, I want to make sure that we document what the actual behavior is, and make sure it's something that's not going to change when you switch between Docker and containerd, because I don't want people to have to go change their YAML files.
D
So that way they run again. That's a major concern before merging this. And because there's a user experience impact there, I wanted to make sure I raised this so that people could review it and give feedback on it as well. Since this is a feature that wasn't implemented before, I think we just need to be careful that when we introduce it, we're doing it in a way that is properly documented and that everyone understands.
F
Basically, we're seeing that in the default kube-proxy configuration, some of our users are hitting limits. The reasoning is that they are running out of available ephemeral TCP ports using the non-DSR load balancing mode that kube-proxy uses by default, and that may cause such symptoms. So we wrote up, we added something to the troubleshooting section of the docs (not the kubernetes.io docs yet, but the Microsoft page) on how to mitigate that, what workarounds there are, as well as what fixes we're looking at backporting next.
B
To complete what David was mentioning, I also want to mention that it's not only impacting load balancers; it can also impact endpoints when outbound NAT is configured. We basically use these host ports for all NAT purposes, whether it's for the load balancer or for the outbound NAT. It's not impacting DSR load balancers; it's impacting non-DSR load balancers, and it's also impacting all the NAT that each one of the guests can have, whether through the outbound NAT or through the load balancers.
B
Basically, Windows itself takes some additional ports, and because of that it's kind of very complex to come up with an accurate formula to predict how many ports are going to be available. So the strategy was more about trying to make the best guess in the collect-logs scripts that we have, and trying to assess as soon as possible whether we're going to run out of ports.
F
So yeah, just to recap: with the non-DSR load balancers, we're having to reserve 64 ephemeral ports per load balancer, and any pod with outbound NAT configured is also going to reserve 64 ports. And then, if there's traffic being sent between the sources, we're using more ports as well. So that's why it's hard to give an exact number, like an exact scale number, to give to users. But typically, when they're creating something like a hundred load balancers, they're starting to see these issues; that's a rough rule of thumb. And we're documenting the tools as self-help resources, so people can analyze whether they're hitting this issue.
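As a rough back-of-the-envelope check on those numbers: the 64-port reservation figure comes from the discussion above, while the roughly 16384-port default Windows dynamic range (49152-65535) is an assumption about an untuned host, and real usage also grows with pod-to-pod traffic, which is why no exact ceiling is given.

```go
// Back-of-the-envelope estimate of ephemeral TCP port pressure on a
// Windows node. portsPerReservation comes from the discussion above;
// dynamicPortPool assumes the default Windows range 49152-65535.
package main

import "fmt"

const (
	portsPerReservation = 64    // per non-DSR load balancer and per outbound-NAT pod
	dynamicPortPool     = 16384 // assumed default dynamic port range size
)

func reservedPorts(nonDSRLoadBalancers, outboundNATPods int) int {
	return (nonDSRLoadBalancers + outboundNATPods) * portsPerReservation
}

func main() {
	// Roughly the scale where users start reporting problems.
	used := reservedPorts(100, 100)
	fmt.Printf("reserved %d of ~%d ephemeral ports (%.0f%%)\n",
		used, dynamicPortPool, 100*float64(used)/float64(dynamicPortPool))
}
```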
A
Would it make sense to also document that same part from the Microsoft docs in the Kubernetes troubleshooting docs for Windows, under our SIG?

Yeah, absolutely; that's the next step. So collectlogs.ps1 is in there, and some of the tools are there, but the actual steps, it'd be worthwhile adding those. Thank you.