From YouTube: Kubernetes SIG Windows 20210622
A
Welcome, everybody. This is June 22nd, and this is the SIG Windows regular weekly meeting. We are part of the CNCF and so operate under all the CNCF guidelines, and if you have any questions or concerns around those, please reach out to myself or any of the other leads. Now we'll go ahead and get started. Today it looks like the agenda's pretty light.
A
Okay, cool. Okay, so announcements: code freeze is July 8th, so we're just over two weeks out. So if you have any features or open PRs, make sure you ping us on them and let us know; you can drop it in Slack or, you know, bring it up at the meeting here next week, and we'll make sure we get some eyes on them.
A
Does anybody have any that they're tracking at this point? I know, Alvarez, you have a bug fix and you're also working on the node logs thing, but is there anything else out there?
A
Nope? Okay, cool. All right, we'll go on to the agenda then. We have HostProcess tests up, so I got those set up and merged on Thursday.
A
If we come over here, you can see they're running, and other than a little YAML mess-up on the first couple of runs, it looks pretty green for that particular test. It's pretty simple and straightforward: it just tests to make sure that the container itself does actually see the host. So you can take a look at that test in particular, and yeah, it's up and running, so we're making some progress there. If anybody has any questions.
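For readers following along, here is a minimal sketch of the kind of pod these tests exercise, assuming the v1.22 alpha API with the WindowsHostProcessContainers feature gate enabled; the pod name, image, and command are illustrative, not taken from the actual test suite:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func strPtr(s string) *string { return &s }

// hostProcessPod builds a Windows HostProcess pod: the container runs
// directly on the host, so it can see host processes and the host filesystem.
func hostProcessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostprocess-smoke-test"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // HostProcess pods must share the host network
			SecurityContext: &corev1.PodSecurityContext{
				WindowsOptions: &corev1.WindowsSecurityContextOptions{
					HostProcess:   boolPtr(true),
					RunAsUserName: strPtr(`NT AUTHORITY\SYSTEM`),
				},
			},
			Containers: []corev1.Container{{
				Name:  "probe",
				Image: "mcr.microsoft.com/windows/nanoserver:20H2", // illustrative
				// Listing host processes only succeeds if the container
				// really does see the host.
				Command: []string{"powershell", "-Command",
					"Get-Process | Select-Object -First 5"},
			}},
			NodeSelector:  map[string]string{"kubernetes.io/os": "windows"},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	fmt.Println(hostProcessPod().Name)
}
```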
B
Hey James, are we ready to say 20H2 is supported, or are we still...? And it's not just with the HostProcess stuff; I'm talking about tests in general.
A
I think we actually do have 20H2. I think Mark did that right before he took off. I haven't monitored that test as much as some of the other tests, yeah. So we do have the set of tests running against 20H2 here.
A
Yeah, yeah, we absolutely can. I think that was maybe just an oversight in actually making that change, and I think Ravi had actually opened up the PR for that at some point.
A
Yes, the HostProcess job in the grid is for 1.22; HostProcess is only enabled in 1.22. I think you need to have a 1.22 alpha 2 build to get those tests to work. The release notes say alpha 1, but I wasn't able to get it to work with that; when I used alpha 2 or HEAD, it worked fine.
A
Cool. Any other questions on those?
A
Okay, cool. So I also enabled the containerd 1.5 tests, bumping one of our tests up to 1.5 just to see what it looks like, and there are three or four tests that fail pretty regularly. Luckily, Adelina and Claudiu had already caught that in some of the work they're doing with the CRI integration tests for containerd, and so they've fixed that.
A
If
you
are
moving
up
to
container
one
five,
you
need
to
use
the
main
branch
of
hts,
shim
and
so
go
ahead
and
use
that
I
don't
know
if
claudiu
or
adelina
wants
to
just
give
a
quick
summary
of
maybe
what
happened
with.
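In module terms that pin is a one-liner with the Go toolchain; a minimal sketch, assuming you're building containerd from its source tree (the pseudo-version Go records will be whatever main resolves to on the day):

```go
// From the containerd source tree, point the hcsshim dependency at main
// rather than a tagged release, then rebuild:
//
//	go get github.com/Microsoft/hcsshim@main
//	go mod tidy
//
// go.mod then carries a pseudo-version for github.com/Microsoft/hcsshim
// instead of a tagged v0.8.x release.
```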
D
Yeah, it was a simple... I mean, it was a bug where, so, you create the sandbox in containerd and then you create containers inside that sandbox. When you stopped one of the containers, the whole shim process died; well, it was actually closed, it didn't die, it was closed. So that means that the sandbox and the other containers that could have been inside there were unusable.
D
Them not having a shim in containerd created a lot of problems with the CNI, because endpoints were being leaked and stuff like that, and you couldn't just remove the sandbox; you had to go and delete endpoints and all that stuff. In Kubernetes it wasn't that bad, I mean, it still failed, but you won't have those leakages, because containerd does its own CNI cleanup there; it has another path by which it creates and manages the endpoints for the containers. So yeah.
E
Yeah, and this would also cause problems if for some reason one of the containers failed, which would mean it would kill the entire pod, including the other containers, and you wouldn't be able to recreate the same container in the same pod with the same name and the same ID. But with the new hcsshim that's going to be fine.
A
Awesome, thanks for the summary there. If anybody hasn't seen it, we've added a new tab for the containerd runtime up here on the SIG Windows dashboard, and we've now got all of the integration tests running for containerd specifically. It looks like we have a couple that aren't quite there yet, but I know Claudiu and Adelina are working hard on those. So thanks for all your work there.
D
Yeah, the failures there are something we're seeing and something that we need to track; it's an issue that we're looking into, the way it manifests. I don't think it affects Kubernetes, or whether it should affect Kubernetes we're still looking into, but it would be nice to have it fixed and have those tests green.
F
So yeah, I have a question regarding containerd, just wanted to check: is there any work on enhancing the pull time on Windows? I'll give you a quick overview from our testing and enabling this on GCE.
F
We
still
see
that
container
d
is
a
bit
slower
compared
to
docker,
for
example,
on
the
20
h2
images
you
can
see,
docker
will
be
able
to
pull
the
server
core
in
five
to
six
minutes,
but
actually
docker
will
take
around
eight
minutes,
even
including
that
yeah,
the
defender
is
disabled
and
we
actually
have
the
cleanup
at
the
beginning
of
running
the
container,
the
demon
itself,
but
still
seems
like
yeah
the
windows,
the
snapshot
or
in
general,
is
a
bit
20,
30
percent
slower
when
getting
an
emergency
on
a
fresh
node.
A
Okay. I know that we had an open issue on the Microsoft issues repository, and we had fixed it, or rather improved it quite a bit, and we were looking for feedback on whether there were any other issues that people are seeing. So I'll see if I can find it here while I hand it over to Jay for the next item; maybe you can comment there and give that feedback so that we can track it.
H
What do I got? Okay; Henry is building a new burrito, James. So, you know, downstream, and I know OpenShift does this, right, I think Arvin's or Ravi; I don't know if Ravi's here, but I know Arvin's here. And then we hit this with the Windows dev environments also, right. It's this real interesting issue that everybody keeps having, which is that we need to serve up the artifacts so people can build images or install things, you know, in the dev environments.
H
We see this because when you're mounting things into a hypervisor locally, that's a very expensive operation. I was just sitting around one day, it was late, me and Friedrich were kind of hacking on this, and we were just like, every time we remove a mount of a directory it speeds things up by like ten minutes. Like literally, it's ridiculous. And so we were just like, well...
H
If we had a server inside the Linux VM that just served up the kubelet and all of that after they were built, then it would be super easy to install everything from our Windows VM in our dev environments. But then, you know, we have the same thing for customers, right: we give customers image-builder and we tell them to install it and build their own Windows OS. So we have the same problem there, and we have a downstream solution to it, and I merged upstream an initial hack at a little artifact server.
H
If the image-builder folks want it, we could push it up there, and so that was something me and Perry were talking about. So I just wanted to let folks know we were going to sort of try this out, and make sure there's no objection to putting this in sig-windows-tools.
B
Yeah, hey Jay, I just want to clarify that we don't do this in OpenShift. In OpenShift, what we do is build all the artifacts that are needed to configure a Windows VM as a Windows worker within our operator image, and once the VM comes up, all we say is... which is why we have that other PR about using the, what is it, the cloud-init sort of capabilities in Azure.
B
We do this in other parts of OpenShift, but for Windows, because we want it to be tightly coupled with the cluster's version of Kubernetes, and given that our Windows addition is like a day-two operation, we decided to go down this route.
H
Okay, I still have two good reasons to do this, so, I don't know. So yeah, that's kind of, you know, what we're looking at, and if folks really think this is a bad idea...
H
We
could
do
this
or
I
know
nobody
thinks
it's
a
bad
idea,
but
I
just
want
to
make
sure
like
there
are
workarounds
like
we
could
do
this
on
a
personal,
github
repo,
or
we
could
spend
five
weeks
at
a
time
like
merging
it
into
an
upstream
image
builder
in
the
hack
directory,
but
I
just
feel
like
it's
a
very
windows
thing
right
like
I've.
Never
this
problem
has
never
like
punched
me
in
the
face
this
many
times
in
such
a
short
period
of
time
until
I
started
working
on
windows.
I
It's basically image-builder: when you're building a Windows image with image-builder, you need to specify some URL endpoints to be able to download stuff, but if you're trying to build where you can't access the internet, or you want to build from local sources, it's a bit harder to make sure that you've got everything in one place.
I
So
this
is
basically
just
a
way
of
creating
a
docker
container
that
contains
everything
you
need
to
build
a
windows
image
with
image
builder
and
then
my
idea
is
that
it
will
auto
generate
the
variables
you
need
to
then
build
and
build
a
windows
image
from
that
docker
container.
That's
shared.
H
Okay, the whole thing with image-builder is: you run it, and it magically goes and downloads the Windows kubelet.exe, kube-proxy.exe, Rancher wins, all that stuff. There are just hard-coded URLs that it downloads from, and no vendor would use those, right? So every vendor that wants to use image-builder has this fundamental problem:
H
We
need
to
give
image
builder
a
json
file
that
has
the
http
artifacts
to
our
blessed
golden
kublet
and
coop
proxy
and
compiled
wins
distribution
and
our
cni
provider
and
everything
else
that
you
need
to
put
into
the
windows
image
before
you
create
the
ova,
okay
right
and
so
the
whole.
It's
like
this
sort
of,
and
and
and
there's
a
specific
packer
json
format
for
that.
So
it's
like
kind
of
like
a
two-tier
problem.
First,
you
have
to
format
it
for
image.
So you could do it in like 40 or 50 lines of Go code and just pump that JSON out, and people could run the artifact server, see all the JSON of where the artifacts were, and any vendor then could just use that to give people a canned Windows image-builder experience that kind of worked end to end. So selfishly, at VMware, we want to do this so our customers can build OVAs.
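As a rough illustration of the "40 or 50 lines of Go" idea, here is a minimal sketch that pumps out a packer-style variables file pointing image-builder at locally served binaries; the field names and base URL are illustrative assumptions, not image-builder's actual schema:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// artifactVars mirrors the shape of a packer-style variables file that
// points image-builder at locally served binaries instead of the
// hard-coded upstream URLs. The field names are illustrative.
type artifactVars struct {
	KubeletURL   string `json:"kubelet_url"`
	KubeProxyURL string `json:"kube_proxy_url"`
	WinsURL      string `json:"wins_url"`
	CNIPluginURL string `json:"cni_plugin_url"`
}

func main() {
	base := "http://10.0.0.2:8080" // wherever the artifact server listens
	vars := artifactVars{
		KubeletURL:   base + "/kubelet.exe",
		KubeProxyURL: base + "/kube-proxy.exe",
		WinsURL:      base + "/wins.exe",
		CNIPluginURL: base + "/cni-plugins.zip",
	}
	out, err := json.MarshalIndent(vars, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(append(out, '\n'))
}
```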
H
And the other alternative is to run nginx or something, but that's just, you know: you'd run nginx, and then you'd have some Python script that ran on nginx and told nginx to serve up a bunch of artifacts, and it's just a whole big rain dance just to do something that you can do in like 30 lines of code anyway, right? So that's the sales pitch. But does anybody...?
A
Yeah, we build all our images and sign them all and everything and push them all to different places, but we use URLs. We use Azure Storage for all the blessed images; we already have an entire pipeline for that. So for the Azure side, we just build that JSON object and say: this is where you go get the kubelet, this is where you go get those things, and you just pass that to image-builder. That's how we do it on our side, and all those images are all public and everything.
C
Yeah, I think the solution that should probably go here is the hook to provide the JSON file, right, and maybe no specific implementation of all the other stuff, because, you know, it's very similar to how the other scripts work, like in the current hack scripts.
C
In there, I give it a version and it just builds the URL and goes and pulls it, right, and I can change to any version I want. But you're trying to serve up custom artifacts that you built, so you have to build your own. So I think there are two things here, right: there's the JSON format, and then there's an example of a web server for doing local dev work, where you can build and publish to it and pull from it, right?
C
But
that's
just
that's
just
files
right
with
the
structure,
so
it
would
be
cool
if
there
was
like
a
reference
implementation
of
a
web
server
that
had
the
same
structure
from
a
url
perspective
as
all
the
official
kubernetes
binaries
right,
because
then
the
json
file
is
really
simple
and
it
stays
the
same.
It's
just
you're
just
changing
the
base,
url
right
and
then
you
can
still
construct
them
in
all
the
scripts.
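That reference implementation really can be tiny; a minimal sketch, assuming the built binaries are staged on disk in the same path layout the official release URLs use (the directory and port here are placeholders):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./artifacts with the same path layout as the official release
	// URLs (e.g. ./artifacts/v1.22.0/bin/windows/amd64/kubelet.exe), so a
	// vars file only needs its base URL swapped to point here.
	http.Handle("/", http.FileServer(http.Dir("./artifacts")))
	log.Println("artifact server listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```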
H
I mean, you know, we'll think about it. If you just want to leave the general gist of it in there, we'll at least keep it in mind, because that's not a hard thing to approximate, right? But yeah, absolutely, I don't think that's hard. I mean, it's up to Perry, really, he's the one that's doing it, but he's not really all that opinionated about the directory structure either. So yeah, if folks want flags or features... I don't know, I think you're using Viper or something, right?
I
Well, what I'll do is come up with a basic work in progress, put a PR out there, and then drop it in the notes before the next SIG Windows, and then people can comment and go through and see what they think. And if you don't want to use it, that's cool too; I just thought it'd be helpful to people.
A
Well, I know image-builder, and we use the core image-builder for building the VHDs and things, also has a longer-term vision to create some sort of CLI or something along those lines that helps with some of these configurations and changing things. So it might be worth chatting with the folks over there to figure out where they're going with that and see how this might fit in easily, because right now it is pretty difficult to...
A
I think that looks like it's the end of the agenda items, and we have three minutes left, so I'm going to drop the link to the containerd performance issue into the chat. Please comment on that; I didn't catch who made that comment earlier, but it's there. Or, if that's not exactly it, open up another issue on that repository; that's probably the best one to get some eyes on it.
I
I asked this in the SIG Windows channel, but I'm not sure whether anyone saw it. We're seeing a couple of reports about the SMB CSI driver, and in particular the PowerShell cmdlet New-SmbGlobalMapping. The problem that we're seeing is that if you include the -RequirePrivacy flag, it doesn't work, but if you remove that flag, it does work, which I'm guessing is something to do with permissions, or something to do with how the implementation is setting things up.
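For context, here is a minimal sketch of the invocation under discussion, shelling out to PowerShell from Go the way CSI-driver-style code tends to; the share path and credentials are placeholders, and whether -RequirePrivacy is the actual culprit is exactly the open question:

```go
package main

import (
	"fmt"
	"os/exec"
)

// mapShare creates a machine-wide SMB mapping on a Windows node via the
// New-SmbGlobalMapping cmdlet. The -RequirePrivacy flag is the one
// reported to break the mapping; dropping it reportedly makes it work.
func mapShare(remotePath, user, pass string, requirePrivacy bool) error {
	// Single-quoted PowerShell strings are literal, so backslashes in the
	// UNC path pass through unchanged.
	script := fmt.Sprintf(
		`$sec = ConvertTo-SecureString '%s' -AsPlainText -Force; `+
			`$cred = New-Object System.Management.Automation.PSCredential('%s', $sec); `+
			`New-SmbGlobalMapping -RemotePath '%s' -Credential $cred -RequirePrivacy $%t`,
		pass, user, remotePath, requirePrivacy)
	out, err := exec.Command("powershell", "-NoProfile", "-Command", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("New-SmbGlobalMapping failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder values for illustration only.
	if err := mapShare(`\\smb-server\share`, `domain\user`, "s3cret!", true); err != nil {
		fmt.Println(err)
	}
}
```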
I
But what I was after, if anyone's got anything for it, is some information on what that cmdlet does, because it's not included anywhere in the Microsoft docs; it seems to have just been missed at some point. So there's New-SmbMapping, but there's not a New-SmbGlobalMapping anywhere in the docs.
A
Okay, I don't have any information on that, but let me see, Muzz is on the call; maybe you can help follow up with the right team.
J
Hey Perry, I'll forget the details; could you write it up?
J
Yeah, raise an issue and just tag me on it, just because I can get the storage team, the Windows storage team, involved here. But, you know, yeah.
A
Okay, with that we'll conclude the meeting and we'll see y'all next week. Oh, did I record? I did, excellent. All right.