From YouTube: Kubernetes SIG Windows 20201027
A
Hello, everybody, and welcome to another SIG Windows meeting. It's the 27th of October, the last meeting for the month. As always, this is a recorded meeting, so please adhere to the CNCF code of conduct. We're actually entering almost the final stretch; November is going to be a big month. We have a release coming up with 1.20.
A
We have KubeCon coming up in mid-to-late November, and then we have Thanksgiving, at least for the folks that are in the US.
A
So there's not a lot of time left to work on some of the big investments that we're making for 1.20: enabling the containerd work, which is probably the number one priority, plus some networking enhancements, and also making progress on the privileged containers. Mark and I chatted offline: given that next week, at least in the US, it's Election Day, I want to cancel our meeting.
A
I think that's a strategy that a lot of companies and open source projects are following, to give everybody a bigger opportunity to go and exercise the right to vote. Feel free to use that time as you see fit, but no meeting next week; it should have already been removed from your calendar. Claudiu, last week we were talking about Docker Hub and the functional user. Do we have an update? I know Deep was gonna work on seeing if we could get us some special sauce from Docker.
B
Hey, so I updated the... I posted a link to the issue comment. That's where I posted all the findings and notes and what needs to happen at a minimum. Unfortunately, all the tracking will be happening on a Docker Hub user basis, so at a minimum, for anonymous pulls there's nothing we can do about it, because those are not tied to any authenticated user.
B
So I think at least the test infra needs to be updated to make sure all the tests are indeed logging in first, and once we have this Docker Hub user ID, then I can facilitate that thing where Docker makes sure, because it's an open source project, that those pulls are not getting throttled for that user.
A
So what's the strategy here? Should we go create a user and basically tell Docker that this is a functional user that you created for an open source project?
B
Yeah, so as I posted in the comment, the ideal recommendation would be to create what we call a Hub team, and then have a bunch of users as part of that team, depending on how many users the different SIGs need, and then we talk to Docker and say: hey, just make sure anyone who is part of this team does not get throttled.
A
I'll ping Aaron and see if they're gonna create it; or Claudiu, if you can ping them, ask if they're gonna make this happen.
C
Just a minor update: for the moment, I think we might be, maybe, the only ones which will require something like this more intensively.
C
That's because the k8s.gcr.io images have been updated to also include manifest lists, similarly to how they already are on Docker Hub, which means that almost all the test jobs do not really require any special account just for this; that's what's been done for most of the test jobs, from what I saw.
B
Images, got it. And do you think the user authentication should just be a simple step that can be added to the test infra, the docker login step?
C
Then we can just tell the infra people to have this stored in Prow as a secret, and then just mount that secret inside our jobs.
C
Okay, sounds good, and then we can just have the Docker daemons use those Docker config files for authentication.
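For reference, a minimal sketch of how that could be wired up in Prow; the preset label, secret name, and mount path below are hypothetical, not what test-infra actually adopted:

  presets:
  - labels:
      preset-dockerhub-credentials: "true"
    env:
    - name: DOCKER_CONFIG                     # docker reads pull credentials from $DOCKER_CONFIG/config.json
      value: /etc/dockerhub
    volumes:
    - name: dockerhub-credentials
      secret:
        secretName: sig-windows-dockerhub     # hypothetical secret holding a docker config.json
    volumeMounts:
    - name: dockerhub-credentials
      mountPath: /etc/dockerhub
      readOnly: true

Jobs opting in via the preset label would then pull as the authenticated functional user instead of anonymously.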
C
I think we have a lot of jobs (okay, okay, got it) and we pull a lot of images. Sure, I think we have at least 20-something images which are currently being used in e2e tests, and we have two nodes per test run, so double that. And we have multiple jobs: for containerd, for Docker, for dockershim; for 1809, 1903, 1909, 2004; Hyper-V isolation, process isolation.
B
Yeah, that makes complete sense. So I think, once that authentication framework is there, I can make sure that that Docker user is indeed getting exempted.
C
At the beginning, in case someone else also needs this privilege. Yep.
A
Okay, so, kind of to time-box the discussion: Claudiu, you're gonna work on the authentication framework and then get back to Deep, potentially on that issue comment thread, and let's figure out how to proceed with the creation of an exempted user. Sure, thanks.
A
Sounds good. All right, cool, sounds good. Jay... and I know the Antrea team is here; you guys had a request to add some CNI guidance to the Kubernetes Windows docs.
A
Yeah, essentially their ask here is that when you go to the Kubernetes docs and you look for Windows, we have some guidance there with our L2 bridge write-up, and we have the OVS/OVN solution in there, but we don't have anything about Antrea, we don't have anything about Calico; we haven't really expanded that table. David, you had originally created that a year and a half ago, almost two years ago, so let's make sure we add the updates there, whoever ends up adding them.
D
I'm working on an end-to-end installation doc myself, too; I'm happy to include that anywhere upstream if anybody wants it. Sort of one other community-guidance-related thing: I have this kind of PR open, and I don't really care about the PR itself, but I just wanted to see if we could get some consensus on how we want to phrase the... it sounds like...
D
We generally agree that we need to have some kind of a community vision or guidance, something that's explicit and that's written, about what we recommend as far as wins, and the transition as people move off of wins towards the privileged containers, which seems to be the long-term goal, since there are so many sharp edges in terms of timelines and how we get there and all that. How...
D
How should we phrase that as one of the things that the SIG is gonna provide? There was some... I guess, Mark and me and...
E
Michael, yeah. From my perspective, I am very hesitant to ever endorse something like wins as a whole SIG, primarily because of the security implications.
E
I think that we can say that wins is a great way to help bootstrap a cluster, and we have examples using kubeadm to do this; this is probably the simplest, quickest way to do it. But I know a lot of enterprises would probably not want to ever deploy wins. I know we're not going to do something like that in Azure, just because of the security implications of what having that service on the node could possibly do.
E
Because we don't have privileged containers, we install CNI on the nodes, and it's not as easy to manage as the deploy-your-CNI-through-a-pod approach, but it's kind of a necessity for us too. So that's why I think we should be careful with that, because Windows enterprise users, at least from my experience, are kind of expecting things to be... they come to Windows for a lot of compliance reasons.
D
Yeah, I mean, I'm not saying we should endorse anything. I'm just saying: how should we phrase this whole thing so that we're not endorsing something, but at least we're being explicit about what the boundaries of the SIG are? Because I assume we have some goal of helping people who are using wins; or maybe we don't, and maybe that's what we should be explicit about. I don't know.
A
Right. So a better way to phrase that, Jay, or a different way of phrasing, not better, sorry: like Mark said, wins is also not under our governance, right? And if it's not under our governance, we don't own its future direction, we don't know the level of testing that's happening, we don't know what its direction is.
A
The second thing I want to ask, and I echo everything that Mark said: we need to make sure that we're pretty clear that what we do own here is the code that's coming out of SIG Windows and Kubernetes; that's the only thing we can claim that we own. But I want to understand: is the goal here that wins is a necessary component for Cluster API to bootstrap your Windows nodes? What's the usage of wins that you guys are looking for?
A
Anything to do with wins, or whatever... no, no, we're not saying that, right? I mean, there are a lot of open source solutions adjacent to Kubernetes and to Windows in general that are very acceptable for a great deal of customers; we're just not making an endorsement about any of them. We're not making any statements.
D
It was possibly going to be used at some level in Cluster API, and I guess that's kind of where this whole thing comes from, right? If it's being used in some things that are semi-official Kubernetes-related projects...
D
...it would be great for us to at least provide some guidance to the Cluster API people: hey, don't do this; or hey, do do this; or, if you do do this, these are going to be the consequences, and this is how we'll be able to help you get yourself out of it. That kind of thing. I guess it's a circular question, right? I'm not really sure what... I don't really know what we're capable of.
E
Yeah, I think I'm more than happy to say: here's an easy way to get a Windows cluster up and running. We're not going to say this is the way to do it, but here's an example of how you can kind of get yourself a cluster running Windows to experiment on.
A
Yeah, I mean, like that, Michael; I'm okay with that as a reference implementation.
A
I mean, ultimately we're going to look to some of the commercial distributions to figure out how to productize some of these tools into a coherent offering. Maybe, to rephrase it, Jay: you and James and others from the Cluster API team should talk about this and figure out, you know, do we have to use wins? I know that we use components of wins as part of the kubeadm bootstrap process, when we do the kubeadm work for beta. So let's figure out this wins necessity here.
A
Can we get away without it? We know that privileged containers will land at some time as an alpha, potentially in the March timeframe, right? Would they come with a sufficient quality level to just say wins is a stopgap solution for now, and later on privileged containers are the right way to go about it? Why don't you all chat and come up with a proposal. You know, we definitely can't endorse something that's not under our governance and won't be, but it can be a reference implementation with concrete caveats; that's a good start.
D
To that, I'll just keep hammering away so we can come up with that. Cool. Can you link to the PR in question as well, Jay? Sure, yeah; it's a trivial PR, it was just a conversation...
F
...starter, but yeah, I'll link to it. And we do call out, or we do have, wins as part of the kubeadm-for-Windows documentation, like Michael mentioned, but we don't call out any of the security implications; they did get called out in the KEP, but didn't make it into the doc.
E
Yeah, yeah. And I think I was looking at the docs a little bit too, and we have a couple of different sections; k8s.io has a bunch of different sections. There is a section for the production environment, and that section didn't mention wins at all, if I remember correctly. There was another section about how to get a cluster up and running, and that tutorial had a page for adding a Windows node, and that's where the wins reference was. Yeah.
E
So I think we should probably just be a little bit careful about putting things under the production environment heading.
D
So he's probably the most critical opinion to get here.
G
Yeah, so I was just gonna say: that's kind of how we framed it. It's a stopgap for Cluster API, so that we could get moving forward, knowing that privileged...
G
...containers were coming in the future. And I've talked with the wins team a little bit and learned a little bit about the way that we can configure it so that it is a little bit safer from a security perspective; like, you can tie it to only run these types of binaries, and then we can add some extra layers on top of that, which would be similar to the privileged containers scenario. So I think... yeah.
A
All right, folks, let's time-box this discussion as well, so we can get through our agenda today. I think we got some small action items from there. So, Mark and James... Mark, the next item is yours.
E
Yeah. Recently we've been seeing a lot of issues where customers are reporting that, with high CPU load on their Windows nodes, different random bad things happen. Most notably, the node will go into the NotReady state and flip-flop, and also sometimes the node will stay in the Ready state but the metrics will stop reporting correctly, so kubectl top node will either stop updating or show 'unknown' in brackets, and that also causes things like HPA to stop working.
E
If you want to open that, Michael, and then scroll down to the bottom: it looks like a lot of these are stemming from how the changes that Patrick made in 1.18 to enforce CPU limits were implemented.
E
So the previous behavior was that all of the limits that got added were essentially treated as weights, so that even if you were over-committing resources, the system-critical processes would get some CPU time, which would allow things to mostly work. After 1.18...
E
...the CPU limits were more strictly enforced, and it's much easier to starve the system-critical services (Docker, kubelet, kube-proxy, even the host compute service itself) of CPU resources. So I've been looking at this with James; we had a couple of different solutions here, and I think we have a couple of mitigations, but none of them can fully resolve the issue, and they can't fully resolve the issue because we have no way today of guaranteeing that users won't intentionally over-commit their resources.
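For context on what the stricter enforcement means, here is a sketch of the arithmetic, assuming the cap is expressed in 1/10000ths of total machine CPU (an assumption about the implementation, not something stated in the meeting): on an 8-core node, a container CPU limit of 2000m becomes a hard cap of 2000 * 10000 / (8 * 1000) = 2500, i.e. 25% of the machine, enforced even when everything else, kubelet included, is starved for cycles.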
E
A couple of the possible fixes that I wanted to bring up here for discussion: we can, and probably should, be bumping the process priority classes for the system-critical services that we own, potentially even making that just extra flags to the processes. And then we're also looking at the documentation: the current documentation recommends setting a memory buffer with the kubelet's system-reserved flag, but not a CPU buffer, and we're finding that under high-CPU-load scenarios...
E
...it is really important to carve that out. So we're probably looking towards updating some recommendations. We have found that it does need to scale based on the number of processors on the machine as well. So James and myself are still kind of digging into this and are going to do a proof of concept, or start some documentation, for some of these solutions.
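A minimal sketch of what that carve-out looks like in a kubelet config, with illustrative values only (the right CPU figure, per the discussion, likely needs to scale with the node's core count):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  systemReserved:
    cpu: 500m     # illustrative CPU buffer; not in the current docs recommendation
    memory: 2Gi   # the memory buffer the docs already recommend

The same can be passed on the command line as --system-reserved=cpu=500m,memory=2Gi.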
A
I mean, I'm hoping that most customers putting this in production have some monitoring that's basically looking at the host resources before placing pods onto the node, monitoring everything from storage to memory to networking. But I agree with your recommendations here on some of the possible fixes, like elevating the kubelet in terms of the CPU allocation that it gets. And then I don't know how scaling the system reserves based on the number of CPUs would look.
E
Yeah, yeah. I was thinking that the CPU reserves would have to just be recommendations in the docs. One other thing that we've kind of noticed is that a lot of other projects, even in the Kubernetes SIG orgs, aren't adhering to the guidance that we have that, for Windows pods or containers, limits must equal requests.
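For illustration, a Windows pod spec following that limits-equal-requests guidance (the pod name and image here are just examples):

  apiVersion: v1
  kind: Pod
  metadata:
    name: win-webserver
  spec:
    nodeSelector:
      kubernetes.io/os: windows
    containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
      resources:
        requests:
          cpu: "1"
          memory: 800Mi
        limits:
          cpu: "1"        # equal to the request
          memory: 800Mi   # equal to the request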
H
Agreed, yeah. And Michael, just from a customer point of view, one of the reasons is: first of all, monitoring for Windows is a little bit harder than on Linux, but even if you have monitoring in place, because of the image size, by the time they understand that they're over the limit, they can't scale up in real time, right?
A
I mean, we could also come up with another recommendation: rather than just saying set the reserve to 500m, we could say never commit more than 70% of a host's capacity to containers; then it becomes a percentage, right? Yeah, and that usually works, because it leaves you enough capacity to also do upgrades; you're going to have to upgrade these things, you're going to have to do things that always eat up a little bit more. So I think a good rule of thumb is somewhere in the 70% range. We can talk.
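As a rough worked example of that rule of thumb (illustrative numbers, not from the meeting): an 8-vCPU node has 8000m of CPU, and a 70% ceiling means capping total pod requests around 0.7 * 8000m = 5600m, leaving roughly 2400m of headroom for the kubelet, the runtime, and upgrade activity.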
B
Yeah, we released a second beta recently. It had lots of fixes from everyone working on CSI Proxy, so thanks for all the contributions. One of the major things I wanted to point out was the support for iSCSI as a fresh API group, which was put in by Dan Ellen. So yeah, if you, or anyone, were thinking about an iSCSI CSI driver: that basic support is now in.
A
Awesome.
B
No, I have not. I spoke to Chang, who used to maintain the EBS driver, the EBS CSI, and he has left AWS recently, so I think he was like: I don't know where this is going. And I pinged Wangma, I think, Michael Wong, and didn't hear back from him.
A
Can you add my email to that thread? I will escalate at AWS; it's time that we do that. Okay, sounds good: add my email to that, and I'll escalate. All right, cool. All right, everybody, it's time! Thank you all for attending, have a great rest of your week, and no meeting next week; we'll see you all in two weeks. Bye. Thank you. Bye, bye, everybody.