From YouTube: Kubernetes SIG Windows 20190604
A: Most of the stuff on there today was follow-ups around where we're at with closing up 1.15. Code freeze was Thursday/Friday last week; it was supposed to be Thursday, and then due to some integration issues between Prow and GitHub, stuff sort of quit merging, and it took until Friday afternoon to get that cleared out and get the queue drained. So anything that wasn't approved by that time would basically need to go through the exception process.
A: It became on by default, and this is something that opened up another endpoint, for resource plugins to be able to report the status of pod resources. The problem was that code had never been tested on Windows, since it wasn't enabled by default, and there are no specific tests for it, but when they enabled it, it was causing the kubelet to fail to start up. So I've got a link in there.
A: Adelina put a temporary workaround in place to basically turn that feature back off on Windows, so that the kubelet could start up, but it didn't fix the root cause of the problem. So I've got those temporary mitigation links there, but I've also got some PRs that are open for discussion here.
A: They had made an assumption that there would be a Unix socket opened, but it was going to be on a file path under /var/log, sorry, /var/lib: /var/lib/kubelet/pod-resources/kubelet.sock, which doesn't really make sense on Windows. So the second PR I have linked, 78671, is proposing to clean that up a bit, but I want to know if anyone has thoughts on whether or not this makes sense, or if we should do something different.
A: If we were to just take their path as-is, with the simplest fix, then it would be \\localhost\pipe\var\lib\kubelet\pod-resources\kubelet, which would technically work, but it's just got that extra var\lib\kubelet in there that we would never use. So the cleaner version that I had proposed was just shortening that down to pod-resources\kubelet.
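The per-OS split being discussed can be sketched in Go. The Windows pipe name below uses the shortened form proposed above and is illustrative only, not a path taken from the merged kubelet code:

```go
package main

import "fmt"

// podResourcesEndpoint sketches what the kubelet's pod-resources endpoint
// could look like per OS. The Windows name is a hypothetical stand-in
// following the "drop var\lib\kubelet from the pipe name" proposal.
func podResourcesEndpoint(goos string) string {
	if goos == "windows" {
		// Named pipes live in the \\.\pipe\ namespace, which is not part of
		// the filesystem, so a var\lib\kubelet prefix adds no meaning here.
		return `\\.\pipe\pod-resources-kubelet`
	}
	// On Linux the pod-resources service listens on a Unix socket on disk.
	return "/var/lib/kubelet/pod-resources/kubelet.sock"
}

func main() {
	fmt.Println(podResourcesEndpoint("linux"))
	fmt.Println(podResourcesEndpoint("windows"))
}
```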
A: Also, the reason that there's an inconsistency here is that the pipe namespace doesn't actually exist on a file system, and so the fact that C:\var\lib\kubelet exists doesn't actually relate to the named pipe path, because those are never persisted. And so we could do it.
A: And if you look at the other paths that we open today, like dockershim: it's under /var/lib/kubelet today over on Linux, but on Windows we put it just in \\.\pipe\dockershim, and so we've already sort of gone down the direction of chopping off the var/lib/kubelet stuff, at least when it comes to dockershim.
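That npipe convention also shows up in endpoint URLs, where the pipe path is written with forward slashes (as in `npipe:////./pipe/dockershim`) and converted to backslashes when the pipe is actually opened. A simplified sketch of that kind of parsing, not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitEndpoint separates a CRI-style endpoint such as
// "unix:///var/lib/kubelet/dockershim.sock" or "npipe:////./pipe/dockershim"
// into its scheme and address. Simplified illustration only.
func splitEndpoint(endpoint string) (scheme, addr string, err error) {
	parts := strings.SplitN(endpoint, "://", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("malformed endpoint %q", endpoint)
	}
	scheme, addr = parts[0], parts[1]
	if scheme == "npipe" {
		// npipe addresses use forward slashes in the URL form, but
		// backslashes when actually opening the named pipe.
		addr = strings.ReplaceAll(addr, "/", `\`)
	}
	return scheme, addr, nil
}

func main() {
	for _, ep := range []string{
		"unix:///var/lib/kubelet/dockershim.sock",
		"npipe:////./pipe/dockershim",
	} {
		s, a, _ := splitEndpoint(ep)
		fmt.Println(s, a)
	}
}
```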
C: My opinion is that we can't, or shouldn't, try to force an equivalency when there is none. I would throw out the var path for Windows and have it be pipe\pod-resources or whatever, without the extra var/lib, but keep it consistent within everything: define it for Windows once and document it very well, so that it's documented, you know, hey, whenever you're using pipes or wherever you're touching pipes, it will be like this.
B: So when I was trying out the CSI plugin stuff, it seemed that for the device plugins the domain sockets did work on disk, but there was a problem when a container tried to reach out to a socket on the host, like from the kubelet: because it was going from a container to a host process, it was being blocked by a security policy. Otherwise, from a container to a container, it was working with that socket on the file system.
A: Unfortunately, we weren't able to get that finished in time. I'm still trying to make the updates over on the AKS Engine side to get that testable, but Adelina does have some recent progress on that working with Flannel. I don't think they're going to let us merge that this late, though, and we're still working through a couple of issues with things like running out of disk space. Do you want to talk more about that, Adelina?
C: There are a lot of disk image snapshots that containerd, for some reason, takes, so there will be a folder full of snapshots. I don't know why, or what rule creates them, but the point is: I had a node with some gigabytes of RAM and 30 gigabytes of disk, and it was basically filled in a day of intermittently running a container from the same image. So that's not really going to fly for testing, and definitely not for production.
C: We have a bunch of images, and it's not even usable now. The problem with those snapshots is that usually you can delete them, in most cases, using the containerd CLI, but there are some situations when you cannot, and the files themselves resist even if you try to delete them by hand by any method possible. So let's say you're in a scenario where you just don't see the snapshot in the CLI anymore, and it doesn't think it can delete it, so you say, okay, fine, I'm going to do the deletion by hand.
C: I tried a lot of stuff, even taking ownership and permissions, every sort of thing that could come to my mind to delete them, and I cannot. As you can imagine, that's quite a problem. Now, the kubelet should be able to do that too: once it realizes it has this disk pressure, it should be able to delete those snapshots, but it doesn't, and I don't know why.
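The disk-pressure trigger mentioned here can be illustrated with a simple threshold check. The function and the 85% threshold below are hypothetical stand-ins, loosely modeled on the kubelet's image GC high-threshold idea rather than its actual implementation:

```go
package main

import "fmt"

// gcNeeded reports whether image garbage collection should run, based on
// disk usage crossing a high-water-mark percentage. Illustrative sketch
// only; the real kubelet logic is considerably more involved.
func gcNeeded(usedBytes, capacityBytes, highThresholdPercent uint64) bool {
	if capacityBytes == 0 {
		// No measurable capacity: nothing sensible to reclaim against.
		return false
	}
	usagePercent := usedBytes * 100 / capacityBytes
	return usagePercent >= highThresholdPercent
}

func main() {
	// A 30 GiB disk that filled up in a day, as described above.
	var capacity uint64 = 30 << 30
	used := capacity - (2 << 30) // roughly 28 GiB used, ~93%
	fmt.Println(gcNeeded(used, capacity, 85))
}
```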
C: So there are still a lot of questions regarding why and how and what's triggering this, so I think it's definitely not production stable; it's not even stable for testing. I don't know whether or not it would affect the runtime, with the kubelet just running in user space, so I'm trying tomorrow to see exactly, to get some more data.
A: Then there's the RunAsUserName stuff. Unfortunately, James wasn't able to get that last PR in yet, and so we're going to move that over to 1.16, but the GMSA PRs are all merged as far as I know. Is that right, Deep and Jeremy? Yeah, okay, excellent. And kube-proxy, Mike, I think, still has some stuff open.
F: So for that PR, the original intent of the kube-proxy PR was to have it be completely safe when the HNS network went away, but I didn't have enough time to go that route, so I just added the functionality to allow it to be started before the CNI is deployed, basically. So that's the result of that.
A: So wait, okay, so for kube-proxy, I'm gonna make sure I got this right: kube-proxy was going to retry if the kubelet wasn't started. Did that merge or not? No?
F: The scripts and the documentation are what need time. I was trying to rush it to get it done for today, but that just wasn't possible, so we're gonna need more time. I don't know if Michael wants to get an extension for that, or if we're gonna be okay with pre-alpha, but the kubeadm part is there: it works, so it's usable. It's just, from the scripting perspective, that means more time.
G: My take on that is we should try to finish it, and when we're done, let's evaluate our options. I think that having it in a pre-alpha stage is fine. We can still merge the docs into the official Kubernetes docs, and that should be acceptable. It's just that we won't be able to declare alpha, since that would have meant making the real deadline for 1.15, but that doesn't mean people wouldn't use it. That doesn't mean we won't be able to start leveraging it, getting feedback, and fixing bugs. Great job, by the way.
A: It comes down to us working to align resources with SIG Node, or whatever SIG was driving this, so that where cross-platform support is needed, we have earlier engagement there, because you know we can't fix everything in a bug-fix scope if there's no testing or anything designed for it from the beginning. So I'll talk to them and see if they have that option at the next meeting, here in a few minutes.
C: So this was an alpha feature until a while ago. So I guess, to put my question better, my point is that there are probably, I haven't looked, but there are probably a lot of other features in this situation: they are alpha now, they will probably be beta, and then enabled by default in the future. So maybe, I know there are tests for alpha features, but maybe we should consider actually putting up a job for alpha features, at least the ones that impact the node.