From YouTube: Kubernetes SIG Windows 20191001
A: Hello everybody, and welcome to another SIG Windows meetup. It's the first of October, so have a good month, everybody. As always, this is a recorded meeting, so adhere to the CNCF code of conduct. All right, a couple of things: it looks like we have a good commitment around getting some work done on containerd, gMSA, and the CSI work. Where we don't have a commitment, and where we're having a little problem with resourcing, is kubeadm.
A: The two biggest things that we need to worry about in that area are essentially how we make sure that we have good test infrastructure for kubeadm, and how we make sure that upgrades are also working. Kalia, who kind of helped get us to this point, might not be able to work a lot on this, so I'm wondering if we can have someone spend a little bit of time and see: is this a couple of days' worth of effort?
A: Or is this a couple of weeks, or a much bigger endeavor that you might not be able to take on, to help us continue advancing kubeadm? If you remember, kubeadm is super important, because we need it to be able to go down the path of Cluster API for Windows, which is going to be a much quicker endeavor, and I know there are a lot of assets from VMware that will help with that, since VMware is kind of leading the Cluster API effort.
A: But we need to finish up the kubeadm work on our own. I asked, right before we jumped on the recorded call, whether Ben has a little bit of cycles to at least look into this. Ben, if you do check, maybe let us know next week whether you have some time to check it out. And there's no other [inaudible] on the call. Adelina, do you guys have any capacity to start incorporating some of the kubeadm tests for Windows?
B: The idea is that we're investigating it. The idea is to just replace some of the Ansible scripts that we did for the flannel work with kubeadm. That would be the idea, but we need to see exactly how straightforward a plan that is and how much of it works. It should be simple enough, to be honest; I mean, it's just running that script on the Linux nodes and figuring out something else, but we are still at the beginning of the process.
D: A couple of updates here. I've been working with lantau over on the containerd side, and he's helped finish up and get a few things merged, so we think that at least process isolation is probably going to be feasible in the next version of containerd, and he's been working on some testing there. And let me actually add a missing link here.
D: He set up a continuous build that's building containerd with CRI enabled from the master branch, and it's outputting some basic runtime tests there. That's something we can use as a signal, just to make sure that the builds are there, but there are no actual release binaries yet, so that's in the job set anyway.
D: Right now they're building from a fork, but we should be able to move that back over to the master branch by the end of the milestone, so that's moving along well. We're taking that to SIG Node in 20 minutes to get their feedback on it and just give them a status update, because lantau is active over there as well. As I said, I still want to push for getting Hyper-V working as a stretch goal.
D: We have it marked as alpha using dockershim today, although it does work; but that's using annotations, and so, worst case, we could still use annotations with containerd to enable and test that. But I've got a KEP update linked there where I'm proposing, basically, clarifying how we can use RuntimeClass to work with Hyper-V and support multiple OS versions.
D: If we wanted to, we could basically make it easier to use the node selectors to apply the OS match for Windows, and then, if we were to add an additional node label with the OS version, we could basically set up a separate RuntimeClass for, say, Windows Server 1809 and 1903.
D: Today, when the RuntimeClass scheduler sees a matching runtime class on the pod spec, the admission controller for RuntimeClass scheduling will just automatically append those tolerations, or rather those node selectors, and so that would make things a little bit easier.
D: But where things get difficult is that if we want to support running 1809 containers on 1903 or a later version, then we basically have to change the labels that are there.
D: This handler is pretty important if we want to control the behavior of containerd. That would still give you a very simple deployment experience on the pod spec, which is nice and very specific for how you want to run, while some of the verbosity in these other details is basically hidden behind the RuntimeClass.
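The RuntimeClass setup being described could be sketched roughly like this; the handler name, label key, and build number below are illustrative assumptions, not a settled API from the meeting:

```yaml
# Hypothetical sketch: one RuntimeClass per Windows Server version.
# "handler" selects the containerd runtime configuration (e.g. a Hyper-V
# isolated runtime), and "scheduling" carries the node selectors that the
# RuntimeClass admission controller appends to matching pods.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-1809-hyperv            # illustrative name
handler: runhcs-wcow-hypervisor        # handler name is an assumption
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"  # 1809; label is illustrative
---
# A pod then only names the RuntimeClass; the OS and version node
# selectors are appended automatically, hiding that verbosity from
# the pod author.
apiVersion: v1
kind: Pod
metadata:
  name: iis-1809
spec:
  runtimeClassName: windows-1809-hyperv
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
```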
E: Basically, this is an observation from some reports in SIG Windows that the kubelet did have issues, from some testing that Jeremy was doing around gMSA, as well as from something we ran into when trying to bring up our pods in GCE. It seems like when Windows containers are brought up, there's a period of time when the networking is pretty unstable.
E: Jeremy came up with a nice workaround for it, where he patches in a post-start hook, a lifecycle hook, which just tries to kind of reset things and tries to reach out to a network address. But I was wondering if other people are running into this, and whether this is something in, like, maybe the initial [inaudible], or whether there are networking settings.
A: Basically, when a container starts up, networking doesn't work at first. I believe it was Jeremy, but I don't know if it was his PR that I reviewed; they put in basically a loop of up to 60 seconds to wait until you get an IP address, and when you did, it meant networking had come back up.
F: What was happening is that Netlogon was coming up before there was a good network connection, so it wasn't able to actually log on to the domain. So I put in a post-start hook that restarts the Netlogon service until it gets a success back, and it only adds maybe five or six seconds to the…
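A minimal sketch of the workaround described here, assuming a domain-joined Windows pod; the retry command, image, and loop bounds are illustrative assumptions, not the actual patch:

```yaml
# Sketch of the post-start lifecycle hook: after the container starts,
# restart the Netlogon service in a short retry loop until a domain
# secure-channel check succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: gmsa-workload
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    lifecycle:
      postStart:
        exec:
          command:
          - powershell.exe
          - -Command
          - >-
            for ($i = 0; $i -lt 12; $i++) {
              Restart-Service Netlogon;
              nltest /sc_verify:$env:USERDNSDOMAIN;
              if ($LASTEXITCODE -eq 0) { break };
              Start-Sleep -Seconds 5
            }
```

The kubelet blocks the container's Running transition until the postStart hook returns, which matches the "adds maybe five or six seconds" behavior mentioned above.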
D: The issue here is that the code we have today doesn't support network namespaces, and fixing that requires moving to containerd. So the container has to start executing, and then the network interface is basically configured afterwards, and so there's a delay there. When we move to containerd, we're going to initialize the network once using the pause container, and so then, when the other workload container starts, it doesn't have to wait on the network stack to reinitialize. Ganesh had a comment on that.
A: Basically, the important thing is that we're not going to do anything to fix it right now. If any apps or tests need this, they can basically see what Jeremy did and duplicate that, and maybe we should put it in the SIG Windows tools. I know it's more a few lines of code than a module, but people can just copy it for now, and then in the future…
A: Jeremy, I sent you that [inaudible]. The containerd fork, is that [inaudible] as well? Yeah.
E: This was something that came up where we were trying to push a PR on the targetd external provisioner. What we were trying to enable, before CSI proxy is completely in a beta or stable state, is to get the targetd provisioner to return PVs with a source pointing to the FlexVolume specs that Microsoft has published, so that it can be referred to from our docs.
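The shape of what's being proposed, a PV whose source points at a FlexVolume driver, might look something like this; the driver name, fsType, and options are placeholder assumptions, not the actual Microsoft-published spec:

```yaml
# Illustrative PersistentVolume as a provisioner might return it, with
# its source pointing to a FlexVolume driver rather than an in-tree or
# CSI plugin. All names and addresses below are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  flexVolume:
    driver: "microsoft.com/iscsi.cmd"          # placeholder driver name
    fsType: ntfs
    options:
      targetPortal: "192.0.2.10:3260"          # placeholder iSCSI target
      iqn: "iqn.2019-10.example:storage.target00"
```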
E: Any thoughts on that? I think it was more of a question for Nick; I think he's not on the call today, so maybe I can ping him on Slack to see what his thoughts are. But if you guys have any, let me know as well. The suggestion was to potentially fork the repo, because it's not being actively maintained, but it's great for test purposes for now.