From YouTube: Kubernetes SIG Windows 20201013
A: We have our KubeCon presentation; we're going to be recording that early next week, and very likely we'll also include a demo there, so James and Maz, myself, Deep, and Mark are talking about that. If you have any interesting ideas of things that you want us to bubble up as part of the presentation, let us know. I know KubeCon is not until the end of November, and our presentation is actually at the end of the event, on the 20th, but we still have to record it next week. So if there's anything that you want us to bring up and bubble up, this week is the week to actually bring it up. So it's very important.

A: The other thing is, we haven't gotten any invite for any office hours for KubeCon, but there are usually meet-the-maintainers-style sessions. We attended those in the past, though not for KubeCon Europe — we haven't done that — but I'll sign us up when the emails come out. A collection of us will have to get together, come and chat, and basically be there to support the community. All right.
A: That's it for the public announcements. The next item: a lot of folks from Microsoft, a few folks from VMware, and obviously a lot of folks from the general Envoy community added support for Envoy on Windows. This is huge for us; it enables a whole lot of scenarios.

A: If you are a service mesh that's built on top of Envoy, you're now able to run Envoy on Windows and obviously be able to perform the pod-to-pod RBAC, the inspections, and all the networking that the service mesh capability brings to Windows. So super critical, super important. Congratulations to the Envoy team! It does have a couple of caveats: you have to build the Envoy code for Windows yourself; it's not shipped as a pre-baked binary yet, but they're working on that.
A: So it's a little bit of an out-of-box experience that needs some work, but they'll get there. We look forward to that, especially the Contour team from VMware, as well as some other teams that are leveraging it; we're looking forward to this work. And EnvoyCon is happening next week — I guess someone added a relevant EnvoyCon presentation here, oops.

A: I don't know what "Join the event" will do, but the "Envoy Windows use cases and roadmap" session by Sanjay is here.

A: Your name is David — so, David, anything else you want to add to this?
B: I think what you summarized makes sense. Yeah, there will be a lot of information shown at this EnvoyCon event if you're interested in more details. We also have two demos lined up, so there will be lots of detail on the future roadmap, what is currently working, what is coming, and what can be expected in the current alpha state.
A: And David, one thing — from the reading I have done, the Envoy support for Windows is at feature parity with Linux, right? I think that's an important thing to note for folks that are looking into this. Correct?
B: No, it's not feature parity with Linux. There's an open issue that you can track for the features that are not enabled on Windows, such as hot restart, for example. Signal processing — handling signals like the signal controls — is not implemented, so there are some things that are still missing. I can add the issue as well.
B: Yeah, we're also looking for feedback on which features we should work on next, but it's more a matter of making sure — confirming — that they do work as well; there's validation needed, for example. There might be some differences, for example, in building the Envoy binaries depending on which shell you use. So it's in an alpha state, right, so we need to validate that everything works.
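For orientation, the build-it-yourself caveat and the shell dependence David mentions roughly correspond to a flow like the sketch below. This is a minimal, hedged sketch only: the Bazel target and environment setup shown here are assumptions, and the Envoy project's own Windows build docs are the authoritative source.

    rem Hedged sketch of building Envoy for Windows from source (no prebuilt binary yet).
    rem Exact Bazel flags and toolchain setup vary; treat these lines as assumptions.
    git clone https://github.com/envoyproxy/envoy.git
    cd envoy
    rem Bazel on Windows shells out to a POSIX shell (BAZEL_SH); which shell you point it
    rem at is exactly where the "depends on which shell you use" differences can creep in.
    set BAZEL_SH=C:\msys64\usr\bin\bash.exe
    bazel build //source/exe:envoy-static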
B: It will take a little bit of time. There are a number of configuration combinations that can be tried out, and enumerating all of those and seeing which ones users will need first is something we're looking for feedback on. So let me link the issue where the current limitations are being discussed and addressed.
A: Got it, sounds good. Thank you, David. All right, next on our agenda is James — if you don't mind, just one second. Last week we kind of had the KEP deadline pass, and we've got our KEPs in for containerd, and, like we mentioned, privileged containers will be pushed to the next release, as alpha for 1.21.
F: Right, right — for CSI Proxy right now, for 1.20 we keep it beta; that's the decision. Yeah.
A: And by then — the general consensus on that, so everybody's educated, is that by then we'll know a lot more about privileged containers. Then we figure out how they would interact with CSI Proxy and potentially update our plan of record, or at least understand it. The best case for privileged containers is that they get released as GA with 1.23, for example, so that gives a significant runway for CSI Proxy to be out there before privileged containers are stable. So we wanted to better understand what that would look like before the end of the 1.20 timeframe, before we make that decision.
A: So I'm hoping that nobody has any objections to that. I see a lot of new folks here on the call today: we have Nadir — thank you for taking notes as well, Nadir. We have Jay as well as Perry, and maybe others — I don't know if I see all of the names — but these are additional folks from VMware that are going to be heavily involved in SIG Windows, helping James and others on the CAPI work and on the image-builder work. We haven't lined up the right resource for storage, but we're going to get someone from storage to come and help us on the CSI work as well.

A: All right, James — passing the mic to you for the image builder. Do you want me to stop sharing?
G: All right, so I'm going to do a quick demo of the image builder and then also show it working in CAPZ. For folks who aren't familiar with image builder, it's a repository out on kubernetes-sigs.

G: There's a book here, and it provides the ability to quickly build some of the images for Cluster API, or CAPI. I've been working on a PR that adds some of the base configuration scripts to the Azure builder, and as of about last week or so I was able to get it fully working. So what I'm going to do is demo it here with the CAPZ changes. For the provider changes in Cluster API there's actually not a whole lot of change.
G: This is basically all of the change that's required — at least for CAPZ, it's just kind of passing the right OS type. There are a bunch more changes in here for passing configuration and other things, but that's the major change. So I'm going to switch over and switch screens here, if I can figure out how.
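As a rough illustration of what "passing the right OS type" can look like on the CAPZ side, here is a hedged sketch of a machine template whose OS disk is marked as Windows. The API version and field names reflect the v1alpha3-era CAPZ types and are assumptions; the actual change is in the PR under review.

    # Hedged sketch only; confirm field names against the CAPZ types in the linked PR.
    kubectl apply -f - <<EOF
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureMachineTemplate
    metadata:
      name: win-md-0
    spec:
      template:
        spec:
          vmSize: Standard_D2s_v3
          osDisk:
            osType: Windows          # the key bit: tell the provider this node is Windows
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
    EOF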
G: All right, can you see the video? I've pre-recorded it — as everybody's aware, Windows can take some time to pull images, and the configuration of the VHD actually takes a while, so we'll kind of skip through here. I have the image-builder code pulled down, and the main thing you do is set a couple of environment variables and then call make to build the version you want.

G: That will go out and make sure you have the right requirements, then it calls some initial setup scripts and starts the Packer build. Let's skip ahead here. The Packer build, in the Azure case, begins to actually stamp out an Azure VM, which it will then run the different commands against.
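For reference, the flow James describes — export credentials, call make, let Packer stamp out the Azure VM — looks roughly like the sketch below. The variable names and the exact make target are assumptions; check images/capi in kubernetes-sigs/image-builder for the current names.

    # Minimal sketch of the image-builder flow described above; names are assumptions,
    # not the authoritative steps.
    git clone https://github.com/kubernetes-sigs/image-builder.git
    cd image-builder/images/capi
    export AZURE_SUBSCRIPTION_ID=<subscription-id>
    export AZURE_TENANT_ID=<tenant-id>
    export AZURE_CLIENT_ID=<client-id>
    export AZURE_CLIENT_SECRET=<client-secret>
    # Hypothetical target name; it kicks off Packer, which creates the Azure VM and then
    # runs the Ansible playbook against it.
    make build-azure-sig-windows-2019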
G: Let's see, I'm going to skip ahead a little bit, and then we run an Ansible playbook against that. It does a bunch of setup: it disables some of the Windows updates, makes sure that the correct things are installed, installs Cloudbase-Init, makes sure OpenSSH is installed, and then towards the end, if you have it configured, it'll actually pre-pull some of the images. There's a whole bunch of different things, setting up containerd and Docker as well.
G: I did have a little bit of trouble getting this set up — the screen-recording video had taken over some of my shortcut keys. So let me see if I can get into the right spot here.
G: Yeah, so we use Tilt with CAPZ: we call make tilt, and that sets up all the controllers that are required to run CAPZ. Then once I'm ready — once the cluster is up and ready — I can take the cluster-template-windows YAML and then run make create-workload-cluster. That's going to deploy the workload components, adding the control plane as a Linux VM, and then it also adds the Windows VMs from the VHD, using that image that I had built previously.
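Pieced together from the narration, the CAPZ development flow in the demo looks roughly like this sketch. The target and file names (the Tilt target, the Windows cluster template, create-workload-cluster) are assumptions based on what was said; the CAPZ Makefile and docs are authoritative.

    # Rough sketch of the demoed flow; names are assumptions from the narration.
    make tilt-up                                   # management cluster with the CAPI/CAPZ controllers, via Tilt
    export CLUSTER_TEMPLATE=templates/cluster-template-windows.yaml
    make create-workload-cluster                   # renders the Windows template and applies the workload cluster
    # Once the nodes join, check them against the workload cluster's kubeconfig:
    kubectl --kubeconfig=./kubeconfig get nodes -o wide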
G: On the left-hand side here you can see the control plane — sorry, the management cluster. If you're not familiar with Cluster API, there's a management cluster where you provide some YAML that defines the workload clusters that you want to run, and so on the left-hand side here, these are the YAMLs that provide the workload cluster definition.
G: So there are two control planes, two Linux workers, and then two Windows worker VMs. On the right-hand side I'm connected to the actual workload cluster, and once they start to come online we can see those VMs show up when I do kubectl get nodes. So in a few minutes here — let's see — yep, so, right.
G: There is the first Windows node that came online. This is using Cloudbase-Init to take the init config and then upload and configure the kubelet and get it all connected. Then over here, if I look at all the pods, we're going to see kube-proxy, the controller manager, and everything come online, and then I can start to apply the Flannel configuration for Linux and then also Windows — and this is using the SIG Windows tooling that we have.

G: I think Ben had configured a lot of that. So after a little while we'll see.
G: Oh sorry, yep — so we'll see the Flannel Windows pods come up; once Flannel comes up, then we can apply the kube-proxy.

G: I lost control of the video — hold on a second.
G: So I'm going to skip ahead, and yeah — here you can see I've got kube-flannel for Windows and I've also got the Windows kube-proxy running, and then I'm applying two different Windows deployments: the Windows IIS deployment and also just the busybox that we use for testing. Then I'm going to exec into one of those and call the other one's component. So down towards the end here — yep, so here I curl IIS.

G: So this is from one pod to another pod across the nodes, and you can see that I'm able to curl both of the components. Then I just kind of list out that I have the two pods running, and I've got the cluster IP and the external IP.
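The verification steps at the end of the demo amount to something like the following hedged sketch: apply the Flannel and kube-proxy manifests for Windows from the SIG Windows tooling, deploy an IIS workload plus a small test pod, and curl across nodes. The manifest file names and resource names here are assumptions; use the current sig-windows-tools instructions.

    # Hedged sketch of the demo's verification steps; file and resource names are
    # assumptions, not the exact manifests used in the recording.
    kubectl apply -f kube-flannel.yml              # Flannel for the Linux nodes
    kubectl apply -f flannel-overlay-windows.yml   # Flannel DaemonSet variant for the Windows nodes
    kubectl apply -f kube-proxy-windows.yml        # kube-proxy for the Windows nodes
    kubectl apply -f iis-deployment.yaml           # Windows IIS deployment + service
    kubectl apply -f test-pod.yaml                 # small test pod (assumed to have curl)
    kubectl exec -it deploy/test-pod -- curl http://iis-service   # pod-to-pod across nodes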
A: James — fantastic, James and team, fantastic work. This is great to see, the progress here, and I know we're targeting a pretty big release at the end of this milestone. So this is awesome to see, leveraging the investments that Ben and the Cloudbase team made around Cloudbase-Init.
G: That's correct, and we're actually specifically targeting that we are not going to do anything for Windows nodes in the management cluster for Cluster API. There is an open CAEP for Windows support, and the code that I just showed for the image builder is open, so please take a look and review it. It's ready for review, and hopefully we'll get both of those merged pretty soon here.
A: And my second question: I know you're concentrating a lot of the effort around both CAPI and the CAPZ work that you just showed us. I'm assuming that with a lot of this work, when we want to introduce additional providers like CAPA or CAPV or others, the incremental work for them will be fairly negligible, right? Obviously image builder and some of the other components of creating the templates will come into play, but not necessarily the overall definition of what your cluster definition will look like.
G: Yeah, so I think Perry — I don't know if he's on the call — has already started building on top of the PR that I had there, and I think he just added a few things; it wasn't too much to add. I think they had some extra things because they were building it offline, but yeah.
H: Yeah, so there's not much to add, really, to make it work on other providers. I think on-premise is a bit of a different use case because of licensing, and there are a few things around drivers and such that I'm sure I've struggled with. But those are things we can sort out. I think the biggest hurdle is just getting, again, a consistent ISO to use, because there's no public download link for a Microsoft ISO.
A: Yeah, so everything for on-premise hinges on that — part of the effort is that the first thing you do is download the Windows Server 2019 LTSC image. Great things, folks, and I look forward to seeing more here in this area.
C: Amber is following up on that. We're still testing privileged containers, as we said, but there are some exception scenarios, and Danny from our side is the developer who's working on those. I think we haven't gotten a chance to verify the volume mounts and how that would work, but that's definitely something Amber is following up on.
C: One thing, Michael — I just wanted to say that in the SIG backlog meeting we had a bunch of bugs and issues that we moved to the backlog for Kubernetes 1.20. I think that's because most of the SIG Windows folks here are focused on containerd or, you know, CAPZ — the big items — or the privileged containers for this release. So if there is anyone who wants to take on issues, I think that would be good.
A: Yeah, good point, Maz, and I want to reiterate: in our backlog meeting — Maz, myself, Mark, James, and others — we get a lot of incoming tickets, but a lot of them are coming from automation failures. Someone tried something in test and it failed, and those we prioritize when they start becoming important, either because they impact our dashboards staying green or they impact test cases that we want to fix, and I know Adelina and her team, and Claudiu, are working on all of those. So that's great, but one of the things that we don't get a lot of is issues from customers, right? We get the odd issue asking, on behalf of a customer, for feature X or Y.
A: But if any of you are working with customers and there's something that's customer-impacting, make sure you tag us, because we want to give that a little bit more priority — scenarios that move the needle, or that failed at a customer. That's how we fixed, if you all remember, the stats issues that we had in the past, which we fixed with 1.19; all of those were customer-reported — things were just slow, or not as efficient, or couldn't scale up. So we want to see those issues. Yes, it's great to see the things that break automation, and we'll continue making progress there, and, as Maz said, if any of you want to go look at any of those issues and fix a few, please do — you're fully empowered to do so. But if there's something that's coming from a customer, bubble it up with a little bit higher priority, or even bring it to this meeting so we can talk about it.
A: Since there are no other questions, I'm going to ask one more. James and I had a small offline chat about this. So James, do you want to talk a little bit about how we're going to enable Active Directory domain join with the Cluster API work? I think that's going to be fundamental. It's going to tie everything together: in containerd we're fixing the gMSA support with 1.20, we're going to have Cluster API support for 1.20 as well, and then we'd be able to do an end-to-end demo where we deploy an environment, we deploy a Kubernetes workload cluster, its Windows nodes are joined to Active Directory, they're activated and licensed, and we can deploy a workload that leverages gMSA and Active Directory identity. I think that would be a dynamite demo.
I: Just to be clear, Cluster API has a different release timeline from Kubernetes. Windows support is targeted for v1alpha4, which is not the same as the Kubernetes 1.20 release, and we have a different enhancement proposal process. So, yeah.
I: And James mentioned a CAEP earlier — a CAEP is a Cluster API Enhancement Proposal. So if you want to read through that Windows proposal, you can go to the kubernetes-sigs/cluster-api repository, and in the docs directory you will see all the proposals, including the Windows proposal.
G: Yeah, so just to talk about the gMSA piece — I need to look into it a little bit further.

G: I think initially, at least for anything related to the core Cluster API, we don't think there should be any changes required there; we should be able to use the post- or pre-kubeadm commands to register that node with a domain controller. I haven't actually tried that yet, but I'm going to comment on that in the CAEP and try to detail it out a little bit further today. All right — if anybody else has any ideas, I'm definitely open to them.
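A hedged sketch of the idea James floats — using the kubeadm bootstrap config's pre/post commands to domain-join the Windows node — might look like the following. The preKubeadmCommands field is part of the Cluster API kubeadm bootstrap API; whether it runs this way on Windows nodes, and the PowerShell one-liner and credential handling, are illustrative assumptions only.

    # Illustrative only: domain-join a Windows node before kubeadm runs, via the
    # bootstrap config. Not a tested recipe.
    kubectl apply -f - <<'EOF'
    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
    kind: KubeadmConfigTemplate
    metadata:
      name: win-md-0
    spec:
      template:
        spec:
          preKubeadmCommands:
            # $domainJoinCred would have to be built from a secret; shown as a placeholder.
            - powershell.exe -Command "Add-Computer -DomainName contoso.local -Credential $domainJoinCred -Restart"
    EOF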
E: I think I can answer that. For Windows, Maz actually did a whole KubeCon talk about the different Windows container runtimes. Today we are recommending folks run Docker Enterprise Edition. We've also been working on graduating containerd support — without running Docker — to stable, and we hope that should go stable this release, in 1.20.
E: There are some technical limitations with the Windows implementation for interfacing with containers through Docker that there are plans to address in the long term, but because of that, as soon as containerd is stable we're going to recommend folks configure all of their nodes with containerd, because it allows for a lot more flexibility and for feature parity with how containers work on Linux today.
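For concreteness, "configure all of their nodes with containerd" on Windows mostly comes down to pointing the kubelet at containerd's CRI endpoint instead of Docker, roughly as sketched below. The named-pipe path is the conventional containerd CRI endpoint on Windows; verify the flags against your kubelet version and setup.

    # Sketch: a Windows kubelet using containerd via CRI instead of the Docker shim.
    # Verify flag names and the pipe path for your kubelet/containerd versions.
    kubelet --container-runtime=remote --container-runtime-endpoint="npipe:////./pipe/containerd-containerd"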
D: Okay, so that's kind of the real gold-star thing that overall people are aiming for — containerd is a big part of the puzzle. It's not like this Docker Enterprise thing is the long-term thing and Windows containerd is some bleeding-edge Windows thing that isn't okay — containerd is going to be the mainline thing. Okay.
E: Yeah, and the longer-term plan with that is — so there's Moby, which is the container engine that Docker uses on Linux. That's configurable, and you can actually have Moby use containerd as the container runtime layer and then just install Docker, which will come with Moby. For Windows, those code paths either don't exist or are extremely experimental and untested — I'm not sure which.
E: But I think the long-term plans are to eventually introduce Windows support for containerd into Moby, and then that will bubble up and get included with Docker EE releases, and then we can leverage some of the containerd-specific functionality through Docker. But that's not really something we're tracking in the Kubernetes project — but you can reach out... wait.
D: Docker doesn't use containerd by default for Windows?

E: It does not for Windows; there is a runtime that is, I believe, directly calling HCS v1 on Windows.
E: And I think the long-term plans are — containerd is based on HCS v2, the Host Compute Service v2 APIs, which have a lot more flexibility and can support things like the pod termination grace period. Instead of updating Moby to use HCS v2 directly, the longer-term plan is to just update Moby to use containerd and have containerd be the layer that keeps up to date with the Host Compute Service or future runtimes for Windows.
A: Hey folks, we are out of time. Jay, there is a presentation that we did at KubeCon EU on containerd — Maz and I drove that — a YouTube video that talks about HCS v1 and v2 and how containerd takes advantage of that, so it's very useful. All right, everybody, see you all next week. Thank you. Bye, everybody, bye.