From YouTube: Kubernetes SIG Windows 20210209
A: All right, hello everybody, and welcome to the February 9th, 2021 instance of the SIG Windows Kubernetes community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct and standards. All right, let's just get into it. I don't see any announcements today, other than that today, February 9th, is the KEP enhancement freeze for the v1.21 release.
B: Jordan — Jordan, on this, saying I addressed his comments. We needed an ack from him, and I'm assuming Jordan is part of SIG Auth — or what is he?
A: He can give approval for SIG Auth. I was speaking with him about the privileged containers KEP yesterday and he said he would approve for SIG Auth for that, so I think we'll ask him for this one as well. And as I mentioned in the release Slack channel, SIG Windows had two KEPs that we're tracking, and we're going to file extensions for both of them if they didn't merge today. It seems like it's a little bit of a rough release in general.
A: I know yesterday the release leads were saying that out of the 62 KEPs that SIGs had put forward as wanting to make progress on in this release, only 22 had met all the deadlines as of yesterday, so there's probably going to be a fair amount of extensions being requested. So I wouldn't worry too much about this, but yeah, we'll start stepping up the nagging — I will, for this one.
A: All right, sounds good. I took a look, and I think the biggest question was how we want to handle this bit about how to disable this for cluster admins that don't want it on, and I think we have a pretty decent answer for that: we don't have to plumb it through ourselves. So I hope it's just that we need somebody to come and verify and then give the approval.
A: Yeah, sounds good. It's kind of in the same boat with the privileged container KEP. So far I have approval from — well, Deep looked at it for SIG Storage. Yesterday Jordan Liggitt did quite an in-depth review here, and I was chatting with him on Slack for a while, and I think we're pretty much in alignment. I addressed all his feedback; I'm still waiting for him to come back and give an LGTM for that.
A: Just a quick update: container mounts are going to look a little bit strange for these containers, because there's no file system isolation. What we're planning on doing is adding an environment variable to the job object, so it'll be inherited by — I believe it should be inherited by — all of the processes that get spawned.
A: That variable will point to the absolute path on disk where the container volume created for these containers is mounted, and from there any program in the container will be able to access any volume mounts that get added in via a relative path rooted off of that environment variable. We'll also go ahead and update a couple of the Kubernetes client libraries that do things like search for the in-cluster config, for Windows...
A: ...only to look for that environment variable and prefix the paths with it. So I think that ends up being a pretty good user experience. There's one comment left from Deep about named pipes and domain sockets; I think we just need to do a little bit of investigation to make sure that we can create symlinks or junctions to those, and if that works, then we'll follow the same path there.
A: Yep — and so yeah, I'm waiting for Jordan to give the SIG Auth LGTM. I don't know if your LGTM, Deep — would that just be accepted for SIG Storage, or do we need somebody else too?
D: If that comes up, you can say that, yeah, I took a look at it from the Windows CSI perspective. So, okay.
A: Sounds good. Does anybody have any questions on either of those KEPs?
E: I guess my comment might work in this thread. The question I've got is: I've been trying to get this working, and I came to the conclusion that it wasn't skipping the CNI, even though it's going on the host network. So I was talking that through with yourself, Mark, and with Danny, and I've got it to a point where I think it now is skipping the CNI in containerd. But then the issue I'm getting is...
E: If I build another release of containerd, then it breaks the privileged bit. So then I get on the host network, but with no privilege escalation. So I'm trying to work out what I'm missing: I've just built off master on containerd, I'm now skipping the CNI, which is good, but now I'm at a loss as to why I can't escalate — I lose the host access and stuff like that.
A: Interesting, yeah. For that I think we really need Danny Canter to chime in — I'm looking at the participant list and I don't see him. He's the one who built the containerd bits that I've been using, and he has much deeper experience in containerd than myself. I'll take an action item to ping him on that.
A: Yeah — as I'm trying to pull up the section... this KEP has gotten quite large, but part of the work here, which wasn't called out in detail but was highlighted, was a bunch of changes that were planned for containerd itself. I think some of them were here — yeah, so we called out enabling host network mode for privileged containers, and also all of the changes in containerd needed to make sure the CNI was called correctly.
A: I'll just spend a minute on this — we could break out about it after — but in some of the comments in here I walked through how the kubelet translates the hostNetwork: true flag in the pod spec to the fields that live on the CRI calls, and we'll want to make sure that that whole end-to-end story works.
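For reference, the flag in question is the standard pod-spec field; a minimal sketch of a pod pinned to a Windows node with it set (image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnetwork-example
spec:
  # The kubelet translates this into the namespace options on the
  # CRI RunPodSandbox/CreateContainer calls discussed above.
  hostNetwork: true
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: main
    image: registry.example.com/some-image  # placeholder
```

Verifying the end-to-end story means checking that this field survives the kubelet-to-CRI translation and that containerd then skips CNI setup for the sandbox.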
A: Definitely for the next major release of containerd that comes out, which I think is going to be 1.5. Cool — does that help answer the questions? Yep, yeah. Unfortunately this KEP spans so many different projects that it's hard to figure out which ones to go into in-depth detail on and which ones are just adding noise. So I think we chose not to go deep into the containerd changes that we think we're going to need.
F: What is the release cadence of containerd? Say Perry was to get this working by the end of next week, or the week after, and then we had an upstream containerd patch after talking to you and Danny — what's the dance that we'd have to do? Actually, I guess I don't even care what the dance is; I just want to know: what's the timeline for the dance to be over, for us to be able to get this in, at the earliest?
A: Yeah, I think their release cadence is pretty much: whenever there's sufficient new feature churn to warrant a new minor release, there's a new minor release. We'd have to ask them — say "hey, this is ready, can we cut a new release?"
A: At Microsoft there's a couple of people — Brian Goff is a big Moby contributor; he works on containerd, mainly on the Linux side, but I think he'd be able to help talk to the right folks. Kevin Parsons and Danny Canter are getting more and more involved in that as well.
G: Kevin is one of the maintainers now, so we definitely can ask for it. And as you said, Mark, it's basically based on — if you go to the community members, there is Mark Brown... there is — I'm forgetting.
G: Yeah — Mike Brown, and then there's Derek, right — Derek and Mike Brown — a couple of folks. If you reach out to them and there's a legitimate case, they'll listen to you and they'll have it in the next release. But, Mark — isn't there also a nightly build they have?
A: They have nightly builds, yeah. And I also set something up a while ago — let me pull this up in a different window and then we can take a look at it. I have it in one of my GitHub repositories...
A: Let me just look this up — yeah, I have a GitHub Action that builds containerd from a containerd branch, hcsshim from tip of tree, and ctr.exe from tip of tree, and we've been using this in some of the CI jobs for SIG Windows. So this is kind of a stopgap.
A: If we want to get automated testing around all this, or have kind of an unofficial nightly build cadence... As soon as we think we have changes up that are going to bring big functionality improvements for Windows, we should reach out to them and ask "hey, what are the timelines for a new release?"
A: "...and here's what we'd like to have included in it" — and try to make sure we're on their radar too. That's a great thing.
A: It is, but one of the reasons we did this was because the ingestion or vendoring process can take time, and early on there were issues where there were regressions in hcsshim that weren't...
A: That's kind of the main reason. All right, we could talk more about that after. One other enhancement I wanted to call out — I actually just got notified of this yesterday, and looking at the comments, I think the plan is to try to put the KEP together in 1.21 and start implementation in 1.22. There's a KEP from Peter Hunt at Red Hat that basically tries to make — yeah, as it says — a cAdvisor-less stats...
A: ...endpoint that's based entirely on the CRI stats. The reason I got pulled in was that there are a lot of good questions about whether this makes sense for Windows and, if not, which fields and which objects in here we need for Windows.
A: I've just started to take a look at this, but I wanted to call it out — if anybody else is interested or has time to take a look, that would be helpful. I'm definitely going to reach out to those same folks, Danny and Kevin, to take a look at this as well. I'm really thankful that Derek and Elena from SIG Node reached out as part of their initial review and said "hey, we don't think a lot of these fit with Windows."
A: So I think it's awesome that folks are looking out for Windows more and more now. All right — the link to that's on the agenda if anybody wants to take a look; I'll reformat that link quickly. I wanted to talk a little bit more about the Windows Defender discussions we had last week, since there were a couple of issues coming up.
A: We got hold of some of the Windows Defender folks, and I think the one concrete piece of feedback or guidance we have today is this: the Defender team said that whenever anybody is running containerd, they should also run the PowerShell cmdlet to add the containerd process as a Windows Defender exclusion. I'll actually put the whole PowerShell command in the notes.
A: In a second — I believe it's the MpPreference exclusion-list cmdlet, and they said it must point to the full path of the containerd binary. They said that should be sufficient to stop most of the scanning.
A: If anybody doesn't see that help, I'd like to be able to raise that to those folks too. I took a quick look, and none of the official install steps for containerd mention this — they usually just register it as a service and set up some environment for it, but they don't run that PowerShell step.
A: I think it would be good to eventually patch the containerd install process so that if you do register it as a service, it calls that out. So I think we'll get an issue filed to see what folks think of that, and at a minimum update some docs to say: please exclude this path from Defender scanning.
A: I know there were some other concerns about CPU overhead even with Defender not installed; those we still don't really have answers for, but we'll keep investigating. Does that sum up everything we talked about? Sorry — Ibrahim, did you have something?
E: Yeah — if you do Set-MpPreference and disable any of the MpPreference settings, it just takes effect straight away. I've disabled real-time scanning, and as soon as I disabled it, the overhead immediately went away. I was wondering whether we could get some of the exclusions either added into the docs or into sig-windows-tools.
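The real-time-scanning toggle mentioned here maps to `Set-MpPreference`; note that, unlike a per-process exclusion, this disables real-time protection node-wide, so it is better suited to isolating overhead in a test than to production use:

```powershell
# Turn Defender real-time scanning off (takes effect immediately)...
Set-MpPreference -DisableRealtimeMonitoring $true
# ...and back on once the measurement is done.
Set-MpPreference -DisableRealtimeMonitoring $false
```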
E: That way, at least, someone could just replicate those exclusions — and I guess it's also useful for people who've got third-party antivirus, to be able to copy the exclusions and try to get them working with those.
A: All right. And did you get any other updates that I hadn't seen, or is that all the guidance we've gotten so far?
G: Well, that's all the guidance. We're seeing different kinds of issues and we're trying to treat them differently, so with Defender, I think you covered it. There are some from Jeremy — yeah. Going forward, from a containerd perspective, I'm asking our containerd engineers to also take a look at whether there's something coming from containerd itself, not Defender. But as far as Defender is concerned, you're right.
A: All right, I had a quick question. I know James presented the proposed restructuring of the SIG Windows board last week, to add a new group called "signal". I think this is pretty much ready to merge; I just wanted to do one last temperature check to see if anybody had any concerns. If not, I think we'll get it merged.
A: Not for the testgrid? Okay, so this — yeah. Yes, let's go ahead and do that too. Where is it... this project? I—
F: Yeah — that way, Arvind. The reason for this is that I went to grad school with a guy from the military, and he always used to tell us that if you have red tests, then nobody ever fixes any of the tests, because of the broken-window theory. So this way we have something to look at, we can have a binary yes or no, and we can be really careful about the thing that we call "signal", without having to delete other jobs that aren't signal. That was the idea.
F: Mostly that's just for the overall picture — every time all of us join 15 minutes early, we're all context-switching and trying to figure out what we should be doing, and I don't think any one of us really remembers what we did last week. So that way, if we have that column, we can have it be the source of truth for our progress in this overall initiative, I guess.
A: All right, I'll create it right now and move things over after — sometimes it slows down my browser quite a bit; that's a big board. All right, so it doesn't sound like anybody has any concerns about that kind of curated test bucket, so I'll go ahead and get it merged soon, and then we can start using it to help identify issues that we need to put in that new column on the project board.
A: All right — Perry, did we cover your item? Yeah... at the same time as the KEP. Okay, sounds good. And then, Ravi, I think I see you added something.
J: Yeah, so the main question is related to the PR that I opened. As far as I understood, containerd with CSI has never really been tested properly. Just to give everyone some background: what we wanted to do is switch from the Docker tests to containerd tests on the Windows hosts, but I think what we found yesterday, when Andy replied back to me, was that there are a couple of issues posted on the Azure CSI drivers, and we never got them to work in the first place.
A: Yeah — first of all, I'll apologize: I've been quite busy with the whole KEP process; I've been following some of these but haven't had time to investigate. We did have csi-proxy working with containerd in 1.20 — I made some updates — and we also had the in-tree storage driver stuff working.
A: I did set up some periodic jobs to test those, but that was quite a while ago — probably back last October.
A: And yes, since then something has changed and things have rotted, so they don't work today — but they did work at one point. We should figure out what regressed and get them working again.
J: Okay, yeah — so Andy mentioned that there are some issues, and he pointed me to some of them on the azurefile CSI driver repo and the azuredisk CSI driver repo.
A: Okay — some of these were... yeah, I think there were a couple of issues all over the place. We can talk about this either on Slack or next meeting, because I need to drop in a minute for the signal meeting, but I think there were a couple of classes of issues. One of them was that the container images these were producing were multi-arch image manifests, and containerd—
A: I don't know if this is necessarily a bug in containerd, but containerd requires that if it's a multi-arch image manifest and it contains entries for multiple versions of Windows containers, some metadata has to be set in the manifests — specifically the os.version — and that wasn't the case with Docker.
A: I think if that wasn't there, Docker would just pull the first image that matched os == windows, and if os.version was set, it would pull the correct one. So part of this was that we needed to update the build process for these.
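The missing metadata is the `os.version` field on each Windows entry in the manifest list; a minimal sketch of what containerd matches against, with digests, sizes, and the version string as placeholders:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:<digest-1>",
      "size": 1234,
      "platform": { "os": "linux", "architecture": "amd64" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:<digest-2>",
      "size": 1234,
      "platform": {
        "os": "windows",
        "architecture": "amd64",
        "os.version": "10.0.17763.0"
      }
    }
  ]
}
```

Without `os.version` on the Windows entries, containerd cannot tell which entry matches the host's Windows build, which is where the pull failures described here come from.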
A: Okay, yeah — and that's actually what's described in detail in the second issue. So I commented here, and this is that specific field that we're missing. Claudiu actually did a lot of work to figure out how to add these tags to the manifests when we build these container images with BuildKit, which is awesome.
A: I believe these image pull issues are because of that, and it looks like — where was it — somebody commented that, for whatever reason, the specific version tags for these container images have the metadata but the latest tag doesn't. So I think this is something that Andy or some of the maintainers need—
K
To
help
address,
perhaps
I
think
it
might
be
possible
that
possible
that
the
image
building
process
changed
for
this
project
as
well.
Previously
we
were
using
windows
images.
I
mean
windows,
notes
to
build
the
windows,
images
and
the
windows
nodes
would
automatically
add
the
os
version.
Entry
into
the
image
itself,
but
docker
build
x,
doesn't
know
how
to
do
that
and
you
cannot
use
docker
tag
or
docker
manifest
image.
K
I
I
I
forgot.
I
forget
the
command
itself,
but
at
this
very
moment
you
cannot
use
the
docker
manifest
command
itself
to
add
those
version
itself.
We
kind
of
use
a
workaround
for
that
it
will
eventually
make
it
into
the
docker
manifest
command
itself,
but
not
at
the
moment.
Take
a
look
at
the
pause
image.
A
Links
here,
yeah,
yeah
and
there
cardio
has
really
deep
knowledge
here
so
reach
out
to
him.
But
to
sum
this
up,
I
think
at
least
the
first
set
of
issues
that
we're
seeing.
I'm
not
saying
saying
that
this
is
all
the
issues
here.
Is
that
the
the
builds
the
container
images
being
produced
from
this
repo
aren't
compatible?
A
Like
aren't
formatted
correctly,
there's
a
couple
of
options,
one
thing
that
I
have
done
in
the
past
to
test
this
is,
if
you
pull
the
image,
if
you
pull
the
image
on
like
the
os
version
that
you
want,
and
then
you
re-tag
it
and
push
just
a
single
image
container
d
can
pull
a
single
a
container
manifest
that
has
a
single
image,
no
problem,
that's
the
kind
of
the
best
way
to
to
verify
that
this
works.
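The single-image workaround just described might look like this with the Docker CLI; the registry, image names, and tags are placeholders:

```shell
# On a node running the desired Windows version, pull the tag so the
# daemon resolves the matching manifest-list entry for this host.
docker pull registry.example.com/test-image:v1.0.0

# Re-tag and push it as a plain single-image manifest, which containerd
# can pull without needing os.version matching.
docker tag registry.example.com/test-image:v1.0.0 registry.example.com/test-image:single
docker push registry.example.com/test-image:single
```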
J: If you look at the tests, though, they're not failing continuously. If it were an image pull issue, it would have failed continuously, but instead it's sort of flaking those tests — especially seven or nine of them. They're flaking, and they've been flaking pretty consistently, but there were times when they passed. If you look at the entire time frame, there were instances where those particular tests passed as well, and I could see that—
K: Make sure to also taint the master node, or cordon it or something like that, because otherwise you might end up with pods or containers spawning on the master node, which isn't the point of that particular test run, right?
J
Right
the
way
that
particular
test
works
is
it.
It
runs
on
the
windows
host,
always
like
we.
We
ensure
that
the
pod
actually
lands
on
to
the
windows.
No.
J
Yeah,
so
the
other
thing
is,
I
think.
Yesterday
I
attended
the
csi
call
and
I
discussed
with
the
perhaps
what
I
can
do
is
I'll
set
up
a
meeting
with
all
the
interested
folks,
and
then
I
can
show
like
once
the
demon
set
on
the
windows
note
comes
up
like
it
is
flaking
to
mount
the
volume
or
create
the
volume
even
on
the
windows
source,
so
I'll
set
up
an
environment
and
then
try
to
reproduce
this
issue
and
then
I'll
call.
This
meeting.
J: Yeah — so, Deep, when we had the discussion yesterday, you were saying that you weren't sure if this was tested properly — whether the Unix domain sockets work properly with CSI — and that with containerd it was something that was not tested. Correct me if I'm wrong, Deep.
D
Yeah
I
recall
when
this
was
exactly
going
back
to
what
mark
was
mentioning
that
he
initially
you
know,
ran
some
tests
and
I
think
that's
when
he
was
running
into
some
domain,
socket
mount
issues
with
container
d
in
windows,
and
I
was
wondering
you
know
that
was
my
first
kind
of
gut
key
like
if
it's
that
same
issue
but
but
then
I
know
ravi
mentioned
that
it
does
work
out
sometimes.
So
it's
not
like
you
know
a
consistent
failure.
D: I think the next logical step here would be to see if there are any issues with the azurefile CSI plugin itself — maybe from its logs, or from some of the other logs, whichever would note whether its containers are running properly or not.
D: Yeah — so that's the deployment, and then, once it's deployed, whether the containers are actually up or not. As in, the DaemonSet would come back and say "okay, deployment succeeded", but the actual pods for the DaemonSet may not actually come up, right? Or as they're coming up they might hit an issue and stop. I think that's the part we really need to drill down into.
J: Every pod that comes up on that node — the kubelet would have an entry saying whether it succeeded. Oh, you're saying just use the kubelet logs to debug it — okay, yeah, you're right.
J
Yeah,
like
even
when
the
volume
gets
created
on
the
cube,
you
would
get
a
notification
saying
that
this
part
is
trying
to
mount
this
volume,
or
this
volume
is
actually
getting
created
on
the
node,
so
that
information
will
be
present
in
cube.law.
J: Yeah, so what I can do is set up a debugging session, and before that I'll point everyone to the logs that we have, where we can see whether a particular DaemonSet — the CSI DaemonSet — is running or not. In the same logs we can see there's this particular pod — actually a test pod — that gets spun up on that Windows node, and the test pod is supposed to either create a volume or mount the existing volume.
D: I was also wondering: do you happen to have an Azure subscription where you can bring up the same testbed — with containerd deployed on the Windows nodes — and just deploy the azurefile CSI driver and then try to spin up a pod that tries to mount a volume from Azure Files?
J: Yeah — the way I've done it in the past is with my own test cluster, an Azure cluster. I don't have access to the same subscription group that we use for CI, and I've installed containerd and all those things manually on the host, using the install script that we had. Got it — okay.
D: Do we happen to know what Andy is running as part of the azurefile job? Does that exclusively run Docker, or...?
J
So
I
looked
at
the
test
suite
that
they
are
having
on
the
azure
disk
css.
They
have
both
like.
If
you
look
at
the
prs
like
you
can
see
that
they
have
two
different
tests,
one
on
docker
and
the
other
on
container
t.
J: So, Deep, if you look at that particular job — yeah, it says they're testing the CSI driver, but they don't have anything specific to containerd there. They have another job, the next one, which says full in-tree azuredisk — so I think we're actually testing the disk CSI driver with Docker there.
J: Yeah — and then if you look at the last job, I think if you add "-containerd", that's the containerd one.