From YouTube: Kubernetes SIG Windows 20181113
A
Hello everybody, and welcome to another SIG Windows meetup. Thank you all for attending. We are now entering the last few weeks of trying to get stable support for Windows into Kubernetes. I attended a variety of meetings in the last couple of weeks with SIG Architecture, the SIG Conformance calls, and the release lifecycle, and I'll update you on some of those discussions. One of the things that we wanted to bring up (I'll paste the link that Patrick created into the chat) is a discussion around: if you are in a hybrid cluster, where there are Linux and Windows compute nodes in your cluster, what do you do to make sure the Windows workloads land on the right set of hosts? And, more importantly, how do you make sure your existing Linux apps that worked very well on Linux hosts continue working without any downtime or scheduling difficulties?
A
I certainly was not, but the key thing that I wanted to advocate for is the following: if I'm a Windows developer today and I'm using Kubernetes, you're actually obligated (we're actually forcing you) to go in and add the node selector, and without that your Windows workload really will not work, right? So what if we actually went down the path of saying: not only do you have to add the node selector, but you also have to add the toleration?
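For reference, a minimal sketch of what that node selector requirement looks like in a pod spec, assuming the beta.kubernetes.io/os node label in use at the time (later renamed kubernetes.io/os) and an illustrative image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  # Without this selector, the pod could land on a Linux node and fail.
  nodeSelector:
    beta.kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809  # illustrative image
```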
A
The first goal is to make sure the Linux workloads don't land on Windows at all, and the second thing is that we can also enforce the versioning of the Windows Server containers with those nodes as well, something that was called out by a couple of people in this document. Essentially, if you have an 1803 container, then it has to run on an 1803 container host, otherwise Windows Server containers wouldn't work. Hyper-V isolation might work on future versions of the OS, but Windows Server containers will not.
A
So I think that using taints and tolerations would help out there, because your taint will be the Windows Server version that you're requiring your pod spec to apply to. And Patrick, I guess for you first, since, I don't know, these are my comments from before this meeting: what do you think about that?
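As a sketch of that proposal: the taint key and value below are hypothetical, since no naming convention had been agreed yet. A node would be tainted with its Windows Server version, and only pod specs carrying a matching toleration would be scheduled there:

```yaml
# Hypothetical taint applied to a Windows node, e.g.:
#   kubectl taint nodes win-node-1 os=windows-1809:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  tolerations:
  - key: "os"                # hypothetical key, not an agreed convention
    operator: "Equal"
    value: "windows-1809"    # the Windows Server version being targeted
    effect: "NoSchedule"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```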
B
There are some existing samples for things like the Open Service Broker and the Service Catalog, and as I've run several of those in a mixed cluster, I've had problems where they get scheduled to the Windows nodes. So generally the approach I've been following is to go back and, if that repo will accept the PR, go ahead and add a node selector to constrain them to Linux. Anyway, that's worked some of the time, but if we don't use a taint, we'd have to.
B
The problem is that, and maybe someone else here knows more about this, I haven't generally seen taints used a whole lot in production. They seem to be there, but I thought that they're really there for a pattern of "this node really shouldn't be used for anything", and the case of running an extra pod on a master node is sort of the main use case for adding a toleration. So I'm hoping that we'd be able to get some more feedback.
B
I haven't looked to see; it looks like Tim had just replied to some stuff that I haven't seen yet. But that would be my main concern there: if it's not something that people are commonly doing today, you know, it might run into other roadblocks. But we'll just have to try it and see, I guess, yeah.
A
I mean, if it works and it's stable, I think, you know, everything here is new on the Windows side, right? So I'm comfortable saying that if this works and it's a reliable option, we can at least explore it until options like runtime policy or other things come into play that could help us with the differentiation between Linux and Windows. I'm totally comfortable with that, whether we can say it's new or old, because in the Windows world, everything is new here in Kubernetes.
C
Yeah, this kind of stuff has been used for, you know, machines that have GPUs, places where you're looking at scheduling pods on certain specific hardware and stuff like that. I don't have any personal experience with it in production, but I think that is another use case that's out there; okay, sort of the same problem, you know.
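For comparison, a minimal sketch of that hardware pattern, using the commonly seen nvidia.com/gpu taint key (illustrative here); pods that don't tolerate the taint are kept off the GPU nodes:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: gpu-node-1
spec:
  taints:
  - key: "nvidia.com/gpu"   # keeps non-GPU pods off this node
    value: "true"
    effect: "NoSchedule"
```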
A
Yeah, that works for most of the workloads. If we start pushing this in the wild and eventually going towards hybrid clusters, a lot of people are going to absolutely have to go modify every pod spec. But, more importantly, this doesn't solve the problem where different Windows versions are incompatible with each other.
B
Yeah, I think we could do that. We would have to make a change to the kubelet to add the extra node label, yeah. We would only want to match a portion of the version number: the very last portion of the tuple doesn't have to match, because that's the monthly patch version. It's only the major version of Windows that has to match.
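A minimal sketch of that idea: the build label key below is hypothetical, and its value keeps only the major build tuple (for example 10.0.17763 for Windows Server 1809), dropping the monthly patch field, so 10.0.17763.134 and 10.0.17763.194 would both match:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: win-node-1
  labels:
    beta.kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"  # hypothetical label key
```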
B
So that's something we could do. So I guess there are sort of two problems here. One of them is: if we want to avoid things getting placed on Windows nodes by default, then it sounds like the easiest way to achieve that is to use the taint. Then, if we want to sub-select within the Windows version, maybe, since we're telling people you must add the toleration, we could say the best practice is also to go ahead and add a node selector based on the OS version. Does that make sense, Michael?
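Putting the two pieces of that proposed best practice together, again with hypothetical taint and label keys: the toleration gets the pod past the default barrier, and the node selector pins it to the right OS version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-app-1809
spec:
  tolerations:
  - key: "os"                # hypothetical taint key
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"
  nodeSelector:
    beta.kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"  # hypothetical label key
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:1809
```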
A
The taint will stop it. So taints are like: if you mark a node with a taint, let's say 1809, then only pod specs with a toleration for 1809 will be scheduled on it. So anything else that doesn't tolerate it just won't be there. So it's basically, I think of this as a matching thing, yeah. So now not only does the node indicate what kind of containers it wants to attract; the container also has to match what the node spec requires.
B
I do think that Tim raises a good point around the runtime classes, which are still in development, because those are supposed to be able to handle this a little bit more broadly in the scheduler. It's not designed for the OS differences, but I think that as that's implemented, we could go ahead and put the default runtime classes in there for Windows, and then this may become something that's more natural later.
B
I'll probably be doing a little bit of looking at it as well, just because, at least at a high level, I want to make sure that it lines up with something, so we can have a runtime class for Hyper-V for the customers that want to use that, either for the OS backwards compatibility with Hyper-V or for the extra isolation for security.
A
So, more details next week, and I'll let you all get going. The last thing that we have is: again, Patrick and I are totally committed to the SIG Release meeting this week, making sure... when is it, Patrick? Oh dear, tomorrow? I don't even know when it's scheduled; I know it's somewhere... yeah, it's tomorrow at 4 o'clock Eastern. I'll figure it out and, you know, make sure that we have all our ducks in a row for 1.13.
B
So I linked another doc in here where I'm basically just trying to describe how all of the test infrastructure is shaping up. Yeah, I see that Peter has added links for the infrastructure on GCE as well, so basically I'm using that other doc, along with SIG Testing, to sort of describe how the tests will be run and what some of the current work is. So that's just something I've got there to capture all that stuff.
H
So yeah, the notes for the meeting I shared are linked to our testgrid, which has the continuous testing results we're running on GCE. Basically, everything I want to say is here in the notes with corresponding links. We had a discussion on Friday with David and some others about the new win-bridge plugin. We were still getting some errors with win-bridge that we weren't seeing with WinCNI, which I think was related to cleaning up endpoints when tearing down pods, so we're still using WinCNI for now.
H
It's not clear to me yet if it's okay to switch to win-bridge, and honestly, it's hard for me to keep track of the various issues and PRs that are going on with that endpoint-leaking problem, so we're still using WinCNI. And then there's still some flakiness in the tests, which is documented in one of the docs that's linked. We're working through some of those issues, and you guys filed some issues already for some of those.
B
Yes, so for now I'm moving to testing with just Windows nodes, because we're swapping the repos out. What that's going to cause is that all the pods scheduled for most of the conformance test cases will now use a Windows container image, and so when we do the conformance profiles, that's where we should be able to do the appropriate node selectors. And I think the open question for that would be if you're going to certify more than one OS, too.
B
Those are all questions that we're not going to have resolved for 1.13, so for right now we're doing a separate test pass. We do have the hybrid flag that Adelina added, but I don't think that in any way we can merge that for version 1.13, because the conformance discussion is not closed. So I was going to move back to running my tests just with the exclusion list, and I think that's what we're going to merge upstream as well. That's what's in the current PR, and then the hybrid flag will just be set aside until 1.14.
I
If there is anything that needs to be patched, is everything in the upstream so that we can just test against Windows nodes? Because, I guess, on the issues I've filed, I feel like there are some inconsistencies between our test results, and I'm not sure if I'm missing anything, like I didn't pick up a PR or something. So I think for GA, having something where people can just go through this list of patches you have for configurations would let us produce consistent results with you guys.
G
Sure. So, regarding flannel updates, I'll start out with the PR that went in earlier this week. We finally got the initial Windows support for flannel merged into the CoreOS repo, so we'll be pointing at and updating our docs to use the newest bits and binaries from the CoreOS repo going forward. The other aspect is that we want to switch the plugins flannel is delegating to, because they are pointing to older binaries.
G
However, there are still some issues with those binaries that Peter mentioned; I'll summarize those issues very briefly. One issue is that endpoints are leaking when pods are being torn down. We were tracking the pod network status as we tear pods down, and an endpoint gets removed from the pod while the pod is still running; then we run a get-pod-network-status, notice the endpoint is missing, and try to add one, and it gets leaked that way, because the container is being torn down.
G
A fix is committed for that already, which we validated, but there is another issue that we also want to get to the bottom of, which is that the DNS suffixes are not being programmed into the pods that are being created. So we want to fix both of those issues, and then we can update our docs to point users to it so they can use it. The other update would be on the overlay, or VXLAN, side: we finished validation internally for that, and we should be reaching out with instructions shortly as well.
G
However, we are still working through the red tape and the approval process to get the feature changes backported to Windows Server 2019; that is still ongoing, but it's inside our builds. I would have to double-check which exact one, and then once we have our docs... So I hope to follow up by the end of this week to get to the bottom of the win-bridge issues, the leaking endpoints and the DNS suffixes, update our docs for that, and then start on overlay.
A
All right, all right. Thank you all, have a great rest of your day, and I'll see you guys... we'll still meet next week. If anybody has vacation, or since it's a nice holiday in the U.S., you know, feel free not to attend. But since we're getting so close to GA, we'll still meet if we need to talk about anything.