From YouTube: Kubernetes SIG Windows 20210112
A: All right, hi everybody, and welcome to the January 12, 2021 iteration of the SIG Windows meeting. As always, these meetings are recorded and will be uploaded to YouTube, so please adhere to the CNCF code of conduct. All right, let's get started.
A: First, we have a couple of announcements related to the 1.21 release planning. Previously, the release leads would spend a great deal of effort just checking in with all the SIGs and all of the contributors to figure out what enhancements were going to be delivered for that milestone. For 1.21 they're changing that up a little bit: they are asking that all of the SIGs notify the release team about which enhancements they plan on delivering, and also provide status updates for those.
A: We can continue to work on this in the next couple of meetings, but if anyone is planning on advancing any KEPs that we're not currently aware of, please bring that to the attention of me or any of the tech leads here, just to make sure that we can get all of those tracked for the 1.21 release. On a similar note, the proposed enhancement freeze for 1.21 will be February 9th.
A: The exact dates and all the milestones for the 1.21 release are still somewhat in flux, but the k-dev discussion thread that I linked has the current proposals. So if anybody has any questions, either just reply to that or feel free to reach out to us.
A: Does anybody have any other questions about the 1.21 release? If not, we can just get started with the agenda.
A: All right, I'll take that as a no. So, the first item on the agenda is a carryover item from last week, where we were going to discuss the differences between how the CNI plugins operate with containerd on Windows versus Docker. James, are you here? Do you want to kick this off, or should I?
A: Okay, so a little bit of background context for that: it's the way that the SIG Windows testing, or sig-windows-tools, repository is set up for the kubeadm cluster provisioning demos.
A: There's a little bit of a workflow where wins gets installed on the machines, and then different containers bring in the CNIs and execute them, plus any CNI daemons that are needed on the node, through wins. That works with Docker, because dockershim has support for kind of a pseudo host-network...
A: ...support on Windows nodes. Where dockershim is configured in clusters, if the pod spec has hostNetwork set to true, then it will attach those pods to a Docker network on the machine named "host", which is configured as part of the default Docker install, so that kind of works out of the box.
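For reference, a minimal sketch of the kind of pod spec involved here; the pod name and image are illustrative placeholders. On Linux, hostNetwork: true shares the node's network namespace, while dockershim on Windows, as described above, approximates it by attaching the pod to the pre-created Docker network named "host":

```yaml
# Sketch only: name and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-demo
spec:
  hostNetwork: true            # dockershim on Windows maps this to the Docker "host" network
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: demo
      image: mcr.microsoft.com/windows/nanoserver:1809
```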
A: So that's how you get this pseudo host-network support on Windows. With containerd, that doesn't happen at all; there are actually code paths in containerd that pretty much ignore the hostNetwork field on the CRI calls that get passed in. That leads to a little bit of difficulty in getting that same behavior where you can bring the CNI plugins and configs in via a pod.
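A rough sketch of the behavioral difference being described; this is not containerd source code, and PodConfig, attachViaCNI, and setupSandboxNetwork are invented stand-ins, but it mirrors the described behavior where the Windows path always attaches the sandbox through CNI and effectively drops the host-network flag:

```go
package main

import (
	"fmt"
	goruntime "runtime"
)

// PodConfig is a hypothetical stand-in for the CRI sandbox config.
type PodConfig struct {
	Name        string
	HostNetwork bool
}

func attachViaCNI(p PodConfig) error {
	fmt.Printf("attaching %s via CNI\n", p.Name)
	return nil
}

func setupSandboxNetwork(p PodConfig) error {
	if goruntime.GOOS == "windows" {
		// No host network namespace exists on Windows, so the flag is
		// effectively ignored and CNI setup always runs.
		return attachViaCNI(p)
	}
	if p.HostNetwork {
		// Linux: share the host's namespace and skip CNI entirely.
		return nil
	}
	return attachViaCNI(p)
}

func main() {
	_ = setupSandboxNetwork(PodConfig{Name: "demo", HostNetwork: true})
}
```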
A: When you're running with containerd, we believe we'll be able to address that once we have privileged container support, which we hope to progress in the 1.21 milestone.
A: But for the current efforts in CAPZ to support containerd, we think we may need an alternate approach here and not rely on wins and that basic host-network support to get set up, so we wanted to have an open discussion about what to do there. One option, and I think one of the most straightforward options to get this to work, is just to configure and run any CNI daemons that need to be run...
A: ...configuring that during image build time, so the CNI will be configured as the nodes come up, without waiting on anything else to work. This has a little bit of a downside in that you can't just upgrade or swap out the CNIs on the nodes via deployments; you would actually need to either log into the node or just bring up a new node.
A: I think this also has the added downside that each image in Cluster API would potentially only support one CNI, so any providers that wanted to support multiple CNIs would potentially need to create multiple images for that.
B: Pretty good. Oh, sorry, I think Perry has been installing containerd on their side of the world. Sorry, go ahead.
C: Yeah, no worries, James. I just wanted to mention to Mark that the approach he described is sort of what we're doing with our operator: when the node comes up is when we configure the CNI. You're right, it has all the downsides that you mentioned, but we have not had too many customers complain yet, though I'm thinking that might happen in the future. It's been working well for us, is all I'm trying to say. That's it.
A: Yeah, and that's the same approach that we take with AKS currently: the CNI just comes up with the node, and we actually run kube-proxy as a service on the node as well. We don't run that as a container, and that's working for us, like you mentioned it's working for you.
A: I mean, you could always build an image that has multiple CNIs baked into it; it's just that you might get a matrix explosion when you look at all the different CNIs that you wanted to support.
D: You know, everybody knows there is demand for Calico, like network policy and stuff, and then Antrea, and if they cannot work together, I think that's going to be a huge blocker for everyone in the Kubernetes community. I don't know what everybody else thinks.
E: So I've been experimenting with using the post-kubeadm actions as part of the cluster spec in Cluster API to run the sort of required steps after the join, and that's been working okay. I mean, I'm still struggling with some of the new containerd APIs and we're still going through that, but that kind of works.
E: It gives you a way of setting up a startup script, which lets you start doing any sort of reboots and startups, and I find that cloudbase-init is quite resilient to restarts, because once it's done a step, it knows that it's done that step and then it continues on. So I've been finding that even reboots have been okay in setting things up. That was kind of where I've been aiming.
B: So I think, for short-term options, it sounds like installing it on the node is the way to go, maybe even using the post-kubeadm commands if you're in CAPI to do those installs. Long term, do we think the way we want to do this is via privileged containers, or are we going to continue to do all these installs up front? I think that's one of the open questions here.
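For the CAPI route, a minimal sketch of what that could look like; postKubeadmCommands is a real KubeadmConfigTemplate field, while the resource name and script path here are hypothetical:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
  name: win-md-0                # hypothetical name
spec:
  template:
    spec:
      postKubeadmCommands:
        # Hypothetical script baked into the node image that drops the CNI
        # plugins/config in place and registers any CNI daemon as a service.
        - powershell.exe -ExecutionPolicy Bypass -File C:\k\install-cni.ps1
```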
A: Yeah, I think one of the reasons, and this might be similar for OpenShift, that AKS isn't using the model that sig-windows-tools uses is because of wins and its potential security implications.
A: Privileged containers are obviously going to bring in another set of security concerns, but they wouldn't have the same set of concerns as leaving the wins service running on the host, which can potentially allow anybody who has access to that named pipe to do a lot of things on the host. So I actually don't have a very strong opinion right now about which way to take this in the future.
A: As was mentioned, though, as clusters get more advanced and users do want to have multiple CNIs running, it might be easier to manage by switching back to bringing the CNIs in via a pod.
A: Not exactly. A lot of the CNIs have a daemon that needs to run on the host, like flanneld, and Calico has one as well, and I think that's where the benefit of bringing this in as a pod...
A: ...will really start to shine. You can run that as a service on the host today, but when we try to go through this workflow and run those in a pod via the sig-windows-tools route, what ends up happening is the pod gets connected to a NAT network, like a host-local NAT network, and then network traffic from outside of the pod but inside of the cluster doesn't get routed correctly to the daemon service that's running.
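For contrast, this is roughly the Linux pattern the discussion is aiming to reproduce on Windows: the CNI daemon runs as a DaemonSet pod with hostNetwork: true, so it sits directly on the node's network instead of behind a NAT. A trimmed, illustrative sketch (real flannel manifests carry much more configuration):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      hostNetwork: true        # the piece Windows + containerd can't provide yet
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.13.0   # illustrative tag
          securityContext:
            privileged: true   # what the Windows privileged-container KEP would enable
```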
F: We have a lot of wrappers to OVS and stuff that are totally irrelevant to what the Calico folks are doing, so I agree with you, it's not. But it feels like it would be architecturally nice if we could have some middle ground between everything is just a service and everything is a pod, like the way we have with CSI proxy. I just don't know what that would look like.
G: If I can pitch in from the AKS perspective, there's a huge interest on my end in having everything as a pod, mostly to have the same patterns as on Linux, especially when it's a matter of upgrading; it's so much easier on Linux than it is on Windows when you're running a service. There are also a few things like synchronization between the services when you boot or reboot a machine, which is much more complicated if it's not in a pod.
G: So there are a lot of caveats and lots of workarounds that we're using today when we're running everything as a service. These workarounds are necessary, but in the long term I really hope that privileged containers on Windows will let us have a similar experience to what we have on Linux.
A: So in Cluster API there is an assumption that the CNI plugins and config will come in through a pod, and that's where the blocker is. But just in terms of running and setting up clusters outside of Cluster API, there are no blockers, and as we've mentioned, OpenShift and AKS are just configuring the CNI as part of node setup, and that is working out for us. Does that answer your question, Peter?
H: Okay, yeah, so it sounds like, like you said, Cluster API is where the blocker is, which is probably why I don't understand it, because I have not spent any time on that.
B: Yeah, and the workaround for that is to just use the post-kubeadm commands so that you can do it like Perry and the folks over at VMware again.
G: For example, I think the official workaround that we recommend is to create a NAT network for the host network on Windows. Internally in HNS, host networking is implemented for Windows Server containers, but there are a few changes to be made in containerd to even support this.
G: That will be super useful for privileged containers. I can definitely help; I don't know if I'll have cycles to actually do the work, but I can definitely help guide someone who has cycles. I'll try to find someone, but I can definitely help drive what work needs to be done.
A: That would be great, yeah. We're working on finishing all of the updates for the privileged container KEP, to hopefully move that into implementable in 1.21, and that should hopefully be done by this week. We did call out how host network is handled differently in containerd for Windows versus Linux, so I will make sure that you're pinged on the PR and can comment on that and help direct it.
G: Is there any meeting I can attend, also, to help there, at least from a guidance perspective?
A: Not right now, but we can certainly set one up later this week; we can work on doing that in Slack, if that would be helpful for folks.
F: Sounds great. One technical question on the containerd stuff before we jump off it: somebody over on our side was working on reordering the CNI/CRI steps in containerd so that it works better for Antrea, I recall, and so now I'm kind of confused. I always thought you created a container first, because that's necessary in Linux at least, right? You have to attach a network namespace to a running process. But on Windows, does it work differently on the containerd side? Do you create a process, like...
F: I don't remember that thread. You know what, so that we don't take up the whole meeting, I'll just ping Jocelyn again there, 4291, 4921, you'll see. Let's see what we see on our end. I mean, on our end it's just that we're thinking of it selfishly, and I don't know if other people have had the same issue, but for us, we just want that call to always happen after this.
G: It is, so if I can steal a few minutes, I'll try to be quick, but yeah, it's the thread that I was thinking about; there are actually multiple threads on the same topic. I was hoping that Keith would reply to it, because that's more of a design decision that needs to be made. I used to work in this area before, but I don't own it anymore.
G: So that's why I'm a little bit silent on it. But typically what's happening is that when you are in the container world, we don't talk about vNICs, we don't talk about switches; we talk about networks and endpoints. Internally, these networks and endpoints create the switches and vNICs. The thing is, that's not really designed originally to be visible to the end customer.
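To make the abstraction concrete, here is a small sketch using the hcsshim HCN Go bindings (Windows only); it lists the networks and endpoints that HNS exposes, which are the objects the container stack deals in, while the vSwitches and vNICs they create underneath stay out of this surface:

```go
package main

import (
	"fmt"

	"github.com/Microsoft/hcsshim/hcn"
)

func main() {
	// HNS exposes networks and endpoints; the vSwitch/vNIC plumbing they
	// create underneath is not part of this API surface.
	networks, err := hcn.ListNetworks()
	if err != nil {
		panic(err)
	}
	for _, nw := range networks {
		fmt.Printf("network %q type=%s id=%s\n", nw.Name, nw.Type, nw.Id)
	}

	endpoints, err := hcn.ListEndpoints()
	if err != nil {
		panic(err)
	}
	for _, ep := range endpoints {
		fmt.Printf("endpoint %q on network %s\n", ep.Name, ep.HostComputeNetwork)
	}
}
```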
G: That's what makes the answer to the question very difficult, because the endpoint is definitely created first, but the problem is the vNIC is not, because there's a difference in time between when the endpoint is created and when the vNIC is created. We don't expect people to actually know about when the vNIC is created, but in your case, that's what you're interested in, yeah.
G: The step forward is, I'm trying to get an answer and guidance from Keith about it, because he owns that, and if we are trying to give some information about the internals of HNS, we need to have tests on our end to make sure that in the long term it still works.
G: So we can take the conversation to Slack, and I can point you to the Slack thread that we have there, right? Cool.
B: No, I was just going to quickly summarize the conversation. It sounds like, for now, installing the CNI, especially with containerd, up front is the way to go, and inside CAPI we can do that via the post-kubeadm commands. That way, if someone wants a different CNI, they can still use a different one and configure it that way, and then, as privileged containers come along, we'll look at using that as a CNI option moving forward.
A: Yep, that was my takeaway from the conversation as well. Sounds good. All right, we have a couple minutes left for the next topic, which is also CNI related. James, or Jay, I see a topic around DNS and CNI.
F: Yeah, I mean, James, feel free to interrupt me at any point here. The biggest priority for me right now is making sure that we've got parity in TKG with other Windows deployments and so on and so forth, so I've been going through the e2es, and I appreciate everybody helping field the things I've been putting up. I feel like the issue I hit that was a sticking one is:
F: I was doing an A/B test between Calico on EKS and Azure, and, well, congratulations, Azure folks, you definitely seem to win on that one. But I think on Antrea we're also going to be missing this functionality; I'm not sure yet. I'm going to test on one of Perry's clusters; he's in the middle of hacking some stuff right now. But I feel like this DNS config thing is up in the air, and it's really up in the air outside of the boundaries of this SIG.
F: He knows it better than I do, but in Azure, y'all are, I guess, sending a DNS input into your CNI and then regurgitating that into the CRI, and then it gets plumbed in as a Windows native DNS record, or DNS server record. Is that the basic idea? You just kind of Trojan-horse some DNS information through the CNI.
B: Yeah, so the way it worked, and this is before containerd was around: if you look at dockershim, the way dockershim does this is they plumb it into resolv.conf, and for Windows we didn't have that capability, so we went through the CNI capabilities. We pass that through to the runtime, and then the runtime looks it up and essentially plumbs it into the container. And I linked to one for containerd as well.
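For reference, this is the shape of the dns capability in the CNI conventions: the plugin advertises the capability in its network config (the plugin type and names here are illustrative), and the runtime injects the pod's DNS settings under runtimeConfig when it invokes the plugin:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "plugins": [
    {
      "type": "win-bridge",
      "capabilities": { "dns": true }
    }
  ]
}
```

At ADD time the runtime then merges something like the following into the config passed to the plugin on stdin (values illustrative):

```json
{
  "runtimeConfig": {
    "dns": {
      "servers": ["10.96.0.10"],
      "searches": ["default.svc.cluster.local", "svc.cluster.local"]
    }
  }
}
```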
F: Yeah, so I put an email on sig-network yesterday asking some folks about this, and Casey Callendrello, who, you know, is one of the CNI spec owners, said, well, we were just talking about removing all that, and I was like, oh, okay. So, I'm happy to represent us there; I work a lot in sig-network and I have been for a while. I'm just trying to figure out what exactly we should be...
F: What exactly is our goal here? Is our goal to just have them indefinitely keep that DNS... it's a CNI configuration field, right? That's the DNS field that we actually use, right? We use it in the CNI configuration; that's the thing that's in the spec, and the output is just arbitrary, right? That's not in the spec, right?
F: Anyway, the only other thing I've got is the DNS config test. I'm in the process of plumbing in something a little bit more fine-grained than what we have, doing something like, potentially... I was just hacking on it with Carter a little while ago, maybe just checking to see if... And one quick question, James: the agnhost images don't have that Resolve-DnsName PowerShell command, so we do have dig and we do have nslookup.
B: No, I think nslookup should work. I just know that it's generally recommended to stay away from nslookup on Windows because it doesn't use the native DNS lookup. So if it's working, I think it's probably okay, but I don't know if anybody from the networking team can chime in further on that.
B: I've seen things where, if you use Resolve-DnsName, the DNS resolves immediately, but if you use nslookup, it takes a few seconds when the container boots up.
A: Yeah, we can always come back to this as things develop. Does anybody have any other topics they'd like to bring up quickly, or should we end the meeting?
I: Hey Mark, really quickly: the PR to change the leadership should go in today. The lazy consensus period is over and nobody raised any objections, so we should be able to get all the changes into the Kubernetes sigs repo. I already made all the changes for the tech leads, so Jaydeep, James, and Perry, and myself; the last one remaining is this one, so it should be able to go in today.
I: Thanks, everybody. I'll monitor you all from afar, and best of luck; you're all doing a great job, and I look forward to seeing more advancements in Windows. Yep.
A: All right, I guess that's it. We'll see you all next week, and I will reach out on Slack about setting up a meeting for anybody who's interested in containerd and host network support.