From YouTube: Kubernetes SIG Windows 20190326
A: Hello everybody, and welcome to another SIG Windows meetup. Thank you all for attending. Today is the 26th of March, and let's start with a huge congratulations to the team: we were able to go stable, or GA, with Kubernetes 1.14. That's right, Patrick, that's a huge milestone. Some folks have even called it one of the most influential or most important feature updates in Kubernetes. We're getting a lot of press and analyst feedback, and a lot of touch points on both Twitter and LinkedIn. So huge congratulations to everybody.
A: You know, we couldn't have done it without Microsoft and Docker and Cloudbase and VMware and Pivotal (did I miss anybody? Nope, that's it) and Google. So thank you all for contributing your time and your effort, and for getting buy-in from your employers to come in and contribute to this project. And we're not done; we have a lot more work ahead of us moving forward, so I think now we need to take stock of the amount of work that's left.
A: We kind of talked about that last week, and we said to start thinking about where you might want to contribute, and to start the 1.15 journey, probably starting today. The one thing I want to mention is that some of you have been big contributors to the project and have invested a lot of time and effort into this; you're going to get some giveaways in the mail and also an email tonight.
A: So I believe everybody on this call has contributed quite a bit to the project, and you will get an email tonight with something from the CNCF as well. All right, let's look at our agenda really quickly. The first item on the agenda is talking a little bit about what's coming up next, and then we'll dive into a couple of items that folks wanted to talk about.
A: So some of our big features in 1.15 are kubeadm support and node e2e tests, kind of bringing those in under "working for Windows" first, and figuring out how to do the same things that we did for the conformance tests. Then containerd support, getting gMSA to beta, and then RuntimeClass for Hyper-V isolation, which ties in nicely with containerd. So I guess the biggest thing that I wanted to see is this.
A: So let's start with kubeadm. From a VMware standpoint we're gonna have a couple of folks. Lubomir, I don't see him on the call; it's a little late for him, so maybe he couldn't attend today. But Lubomir is gonna work with Tim St. Clair to see what's necessary to get kubeadm working on Windows. We also need someone from our SIG to participate.
B: And my plan for this one is to actually start at the CRI API layer, because there are a number of changes that we need to make sure land in there that have been sort of circulating around RuntimeClass. One of them is also accounting for the pod overhead and some of the things that are needed for sizing VMs with Kata. Basically, that would apply to Hyper-V isolation here as well, and so my plan is to focus at that layer and make sure we have the right stuff in the KEP.
D: So we started looking into it to see exactly what's happening there and how they should be run. Our focus this release is to pay more attention to Flannel and the Flannel tests in the e2e tests, so I don't think we would be looking at the node e2e only, or consistently, at least in the beginning.
A: Next item on the agenda: Claudiu and I talked a little bit about this, but there's a sub-project of SIG Architecture that's going to focus entirely on conformance. We need to be part of that for a couple of reasons. Number one, they're gonna make decisions about what conformance tests are coming in, and they're gonna make decisions about what is and isn't in the conformance profile for Windows.
E: I want to sort of bring up one point. One of the open items we had around that was sort of like, you know, you have the security context field in the pod spec, and one of the notes that Jordan had during the KEP review was: should we start looking at OS-specific areas or structs? So basically you have things for Linux, like the SELinux context, and then for Windows, things like the gMSAs and potentially other things.
B: So I had requested that for the runAsUser PR that James had put together for 1.14, and it wasn't accepted because it was after the API review deadline. And so runAsUser would be another one; well, I guess, sorry, it's runAsUserName instead of runAsUser, is what we had called it. But if we did add a Windows security context field, then I think it makes sense to have runAsUserName and, you know, run-as-service-account or credential spec or whatever.
E: So if you scroll down, I have a pointer to a document; if possible, you can maybe open it up. Basically what happened is, at the last meeting, Jing Xu from Google was there, and after the meeting we had a discussion between Nick from Microsoft, Yuju, Jing, and myself around what might be a potential set of options for supporting persistent storage workloads on Windows. And the feedback there was: there are obviously lots of different options, because there are various ways you can write plugins and hook into the ecosystem.
E: So based on that, I wrote up a doc. The feedback was to enumerate all the options and the pros and cons of each of them, so I captured all of them in the doc, and I came up with a list of things that we should potentially do for 1.15 to make Windows support more of a first-class citizen, and in fact even today. So there are three main things that I am proposing there.
E: One is, although the overall community position is "no more in-tree storage plugins," I'm proposing that, now that Windows is GA, we should consider an SMB in-tree plugin, just to be at parity with the NFS plugin. It doesn't bring in too much baggage in terms of vendored code, because it's really platform-neutral, as in independent of any cloud provider resources or provider APIs. Then I propose a set of enhancements to the existing in-tree iSCSI plugin to make sure that it can work on Windows.
E: So it doesn't just work with iscsiadm, which is there for Linux, but also uses the PowerShell cmdlets that are there for Windows. And to support CSI, I present an option around using a privileged proxy process that will do certain privileged operations for us. There are some security issues around it.
E: But the idea is, with the mechanism that I proposed in the doc, one can potentially have a privileged daemon that runs, just like kubelet or kube-proxy, that CSI node plugins can talk to in order to do the privileged operations on their behalf. That's a model we have already tried with a CNI plugin interacting with the HNS API calls.
E: Dinesh probably has some experience with that already. Basically, I was proposing something very similar for CSI as well: sort of providing an API layer that performs most of the operations from the storage side that I've seen are necessary for Windows. So this might lead up to a few KEPs eventually, but I just want to get an overall overview from the community. If you have any feedback or comments, please post them there.
B: The other thing that I wanted to mention real quick: are you still there, Adelina? Oh yeah, okay, you're on the call. So I was also gonna start taking a look at making it easier to get PRs tested using a Prow command, because right now we're not doing per-PR testing. So she's going to start doing some investigations in that direction, which will make it easier for us to get these tested before they actually go in. Hopefully lots more updates on that soon.
D: It will be the staging job. It is a little bit flaky, but good enough for a PR job; I mean, we can distinguish between an actual flake and a test failure. We will have to trigger it manually, because at this point we don't have the resources to run it on every possible PR.
D: What is important to mention is that those tests run in parallel, and we will not be able to do the same for the tests that, for some reason, need to be run sequentially. So we will not cover that case; other than that, it should be fine for the vast majority of our use cases.
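The parallel-versus-sequential split works because Kubernetes e2e tests that cannot run concurrently carry a "[Serial]" tag in their name (a Ginkgo convention), so a parallel PR job simply filters them out. The test names below are made up for illustration.

```python
def partition_tests(test_names):
    """Split test names into a parallel-safe group and a [Serial]-only group."""
    serial = [t for t in test_names if "[Serial]" in t]
    parallel = [t for t in test_names if "[Serial]" not in t]
    return parallel, serial

tests = [
    "DNS should resolve cluster services",
    "[Serial] Kubelet restarts and recovers pods",
    "Networking pods can reach each other",
]
parallel, serial = partition_tests(tests)
print(len(parallel), len(serial))  # 2 1
```

A PR job would run only the `parallel` group, which is the "vast majority of use cases" mentioned above; the `serial` group stays in the slower periodic jobs.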
A: Right, so I guess the last thing is: look at our backlog. It's kind of split into two, the backlog for 1.15 and the generic backlog. If you have spare cycles, you can tackle an item or two in there; there's some low-hanging fruit, and some of them are a little bit more involved, but as you have time, let's look at them. We're also gonna start getting a lot more bugs from customers as they try Windows.
A: You know, I know that Dinesh and David have talked about some better troubleshooting steps on networking, because we've had a lot of folks with network issues even trying our beta or previous releases, so we need to figure out ways to scale. So, Dinesh, I'll look to you and David to write some of those troubleshooting steps on networking, so that that happens. But in general, if you have cycles, let's fix bugs; our project board has all the details.
A: If you see something that isn't documented right, let me, Patrick, and Craig know and we'll fix it, or go submit the PR yourself and we can do it. We have pretty substantial documentation this release, and big thanks to everybody that contributed to writing it. So let's keep it that way, and let's keep updating it so our customers don't have to ping us for every little thing. Oh, and Craig talked about the Discuss server; I forgot to add that to the agenda.
A: I have it, so we'll discuss that next week. All right, well, thank you all. Again, congratulations on a huge milestone. We look forward to doing more things together as a community, and everybody's really proud of the work that we've done, so get a beer or a drink or something tonight. Thank you. All right, have a good day, everybody.