From YouTube: Kubernetes SIG Windows 20210126
A
Hello, everybody, and welcome to the January 26, 2021 iteration of the SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to all of the CNCF code of conduct and standards. All right, for those of you who joined, we were just starting a little bit of a pre-triage meeting, which I think is getting set up now, and that leads us right to the first agenda item. Jay, Ravi, or James, do any of you want to introduce this?
B
Considering we want merges to be faster, we sort of agreed to that, and then we decided that one of the main things we need to ensure, if we want to have post-merge or periodic jobs instead of pre-submits, is solid green for all the jobs that are being tracked by the release team. There are two boards in TestGrid that are tracked by the SIG Release team.
B
For folks who do not know, there is a separate release team in Kubernetes which handles all the release-related issues, and they track two dashboards. One of them is master-informing, and the other is master-blocking. They have certain requirements for jobs to be in either master-informing or master-blocking, and we believe that the containerd job we already have has met those requirements to be part of informing, and the upstream testing team is fine with it.
B
The next thing that we want to do is make sure that this job actually becomes release-blocking. This ensures that the release team acts upon those test failures, and if there is something wrong with any of the tests that are failing continuously, they will come back to us and tell us that it has to be triaged immediately.
B
So that is the status from the upstream SIG Testing team. How we can meet those requirements is something that I'll discuss in a bit, but at a high level, what we want is to make sure that we are meeting the high bar that has been set by the testing team for our job to be in sig-release-master-blocking.
B
What we intend to do is have a 15- or 20-minute slot every week where we go through the test board and ensure that there are no jobs that are continuously failing, or tests that are continuously failing in a particular job. So we want to use the initial 15 minutes of this community meeting, or extend the community meeting by 15 minutes at the start or at the end, to go through the test board.
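As a thought experiment, that weekly triage pass could be partly automated. The sketch below assumes a TestGrid-style dashboard summary (a mapping of tab names to an overall status); the field name and the sample tab names are assumptions for illustration, not something confirmed in the meeting.

```python
# Sketch: flag dashboard tabs that are not solid green.
# The summary shape mimics what TestGrid-style dashboards expose
# (tab name -> overall status); treat the field names as assumptions.

def failing_tabs(summary: dict) -> list:
    """Return tab names whose overall status is not PASSING."""
    return sorted(
        tab for tab, info in summary.items()
        if info.get("overall_status") != "PASSING"
    )

# Hypothetical snapshot of a dashboard like sig-release-master-informing:
snapshot = {
    "containerd-gce-windows-2019": {"overall_status": "PASSING"},
    "containerd-gce-windows-1909": {"overall_status": "FAILING"},
    "aks-engine-windows-containerd": {"overall_status": "FLAKY"},
}
print(failing_tabs(snapshot))
# -> ['aks-engine-windows-containerd', 'containerd-gce-windows-1909']
```

Anything the sketch prints would be the candidate list for the 15-minute review.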
C
Yeah, that sounded about right. So we have three jobs in release-master-informing. Two of them are on GCE; it's actually the master-informing over to the left.
A
The sig-release informing dashboards are usually used when a hotfix or a new patch version comes out, so we have those there too; yeah, there's a lot more.
C
So there are three in here: there's the GCE 2019, the 1909, and then there's an AKS Engine containerd one. I think we just need some help, maybe from the GCE folks, on those couple of tests there; we're working on the master containerd one.
C
So I guess, just to call it out, those are the three that we really need to track to be able to get some of these pushed to master-blocking, and they need to be solid green. There are a few other requirements around those as well.
A
I personally usually join the SIG Node meeting immediately following this, so I'm open to joining earlier, but I wouldn't be able to join after. But if that time works better for the majority of folks, feel free to carry on with that.
B
The other thing that Jay wanted to bring up is related to the policy we are going to adhere to when it comes to jobs that are continuously failing, where we actually need help from another set of people. For example, there are some storage tests that are failing on the Windows side, or there are some GCE-specific things that are failing.
B
So, if we want our TestGrid to be green, should we come up with a policy that says: if you are not going to meet this bar, we may have to keep you out of the test grid? And that can apply to any test, irrespective of whoever authored it.
A
I think we should be careful with that. If we determine that it is a flaky job, that might be okay, but it would really feel bad to hide issues that we know need to be fixed but aren't in our power to fix. Escalating to other SIGs' leadership might be a good first action there.
B
Yeah, I agree. But in either case, whether the policy is less strict or strict, we need to have some policy saying this is the SLA that we are going to honor if we want to go green. That is the point that Jay was trying to make; Jay, correct me if I'm wrong.
D
I feel like we're kind of suffering from that here, where, when I look at it, I see so much red that I don't even know how to interpret whether any work needs to be done. And I guess that's the trade-off there, right?
B
Yeah, I think, instead of saying that we're going to hide that, perhaps what we can say is that we're going to incentivize the other teams to be much more...
B
Yeah, that's what I was suggesting; all of us were suggesting the same thing. I think James is currently looking into it, but the way we can do it is to specify certain jobs within release-informing, using a regex, saying these are the current set of jobs that can be ignored for this run. I think James is actually looking into it. Correct me if I'm wrong, James, I believe you have started looking into it.
C
Yeah, well, I didn't want to do anything because, in particular, a couple of the jobs in release-informing belong to the Google folks, and I didn't want to start messing with their jobs in release-informing until we got some kind of agreement with everybody. I think I pinged in Slack as well. All right, so.
F
Yeah, I wanted to just chime in and say I agree with pretty much everything that Ravi and James and Jay have shared. One thing that would be helpful for me and for our team is to just have it written down somewhere: which test jobs are the critical ones that we need to keep green. Because, you know, there are a lot of test jobs, and it's become a little bit challenging for people who aren't looking at those jobs every single day, or with some regularity, to keep track of what's what, and what's important or most important to keep green. If we had that written down somewhere, that would be really helpful.
C
Yeah, and one thing I tried to do a while back was reorganize the dashboard a little bit, and I think that was pretty successful. But we could go even further and restrict it to the critical tests: have one view of the three or four tests that we really care about, and that way we know that's the dashboard we want to keep track of.
A
All
right,
I
think
this
is
good,
but
in
the
interest
of
time
I
think
I'd
like
to
move
the
discussions
on
a
little
bit.
If
that's
all
right
with
everyone,
all
right,
yep,
let's
go,
let's
move
the
so,
let's
think
some
action,
let's
move
discussion
into
slack
about
what
time
people
prefer
and
figure
out
how
to
get
that
kind
of
set
up
and
on
the
calendar
for
everybody
and
kind
of
keep
keep
that
going.
I
think
that's
a
great
effort.
A
I
think
the
next
things
that
I
wanted
to
discuss
today-
or
I
think
are
important
to
discuss
today-
are
some
of
the
121
kep
enhancements
that
we're
planning
on
making
kind
of
wanted
to
give
it
a
pulse
check
on
the
two
that
we're
tracking
so
far
and
the
first
one
was
the
using
the
cube
ctltv
system
service
logs.
I
think
our
event
said
he
was
not
going
to
be
here,
but
christian,
would
you
be
able
to
talk
to
this
a
little
bit.
G
Yeah, sure, hi everybody. I think we're pretty well set. We do have to do some more investigation into what capabilities kubectl already has at this point, because we're essentially proposing to expand an API that already exists; we haven't seen a client for it yet. Essentially, there is already kubelet functionality to serve log files from the /var/log directory, and we want to add a shim to that.
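For context, the existing kubelet functionality is reachable through the API server's node proxy. A minimal sketch of building that request path follows; the exact path shape is an assumption based on the /proxy/logs/ endpoint discussed here, and the node and file names are placeholders. You would pass the result to something like `kubectl get --raw <path>`.

```python
# Build the API-server proxy path for the kubelet's existing log-file
# endpoint. The /proxy/logs/ shape is an assumption based on the
# endpoint discussed here, not a confirmed part of the KEP.

def node_log_path(node_name: str, log_file: str = "") -> str:
    """Path for fetching a file the kubelet serves from /var/log."""
    return f"/api/v1/nodes/{node_name}/proxy/logs/{log_file}"

print(node_log_path("worker-1", "kubelet.log"))
# -> /api/v1/nodes/worker-1/proxy/logs/kubelet.log
```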
G
And especially, we want that to be functionality that can only be accessed by admins. We're not sure if there are other commands that already have this limitation, so yeah, some more investigation is needed. But other than that, we would invite more reviews, obviously.
A
Yeah,
it
looks
like
deep,
reviewed
everything
I
looked
at
this
a
little
bit.
I
think
that
this
is
a
great
idea,
as
we've
discussed
just
in
terms
of
feedback
for
this
cap.
I
think
it
would
be
good
to
put
some
of
the
design
details
that
were
even
even
if
it's
just
taking
the
route
that
you
had
in
the
implementation
here
and
stick
that
in
the
cap.
A
I
think
I
have
a
feeling
that,
when
folks
from
other
sigs
are
going
to
review
this
cap,
they
may
just
look
at
what's
in
the
actual
enhancement
proposal,
see
that
it's
not
populated
and
not
take
a
look
at
the
actual
implementation
behind
it.
So
I
just
want
to
make
sure
that
this
gets
a
fair
chance
for
review,
and
I
do
know
that
we're
discussing
with
trying
to
discuss
with
sig
cli
about
the
cube
ctl
enhancements.
G
No, not yet. We haven't received any feedback from SIG CLI yet.
A
Okay,
if
we,
if
you
haven't
gotten
any
feedback,
yet
it
might
make
sense
to
take
this
to
one
of
their
sig
community
meetings.
I'm
not
I'll
have
to
look
on
the
community
calendar
when
six
cli
meets,
and
I
can
help
kind
of
drive
that
facilitate
that
with
you
all
too.
Oh
yeah.
That
would
be
awesome,
yeah,
the
other
I
I
did
mention
too.
A
I
think
sig
auth
would
probably
like
to
have
a
look
at
this
too,
since
I
think
that
they're,
at
least
my
my
view
is
that,
if
we're
exposing,
at
least
on
the
window
side
more
system
events
than
what
is
exposed
just
by
the
cubelet,
that
could
potentially
have
some
security
implications.
G
So
yeah,
definitely
so
in
in
for
the
linux
case
here
this
is
actually.
This
doesn't
actually
expose
anything
that
isn't
already
exposed,
at
least
from
the
cubelet
side.
Again
I
don't
know
if
there
is
the
client
implementation
for
that
specific
feature
already,
but
it
essentially
just
adds
a
shim
for
the
journal
ctl
command,
which
stores
all
all
its
logs
in
that
viral
log
directory.
G
Already
it's
just
not
that
easily
searchable
and
we
can
add
a
shim
for
the
yeah
for
general
ctl
on
linux
and
then
yeah
on
windows
accessing
the
the
win
event
log
there.
That's
probably
something
that
isn't
already
exposed
in
the
on
windows.
It's
the
c
yeah
c
viral
log
directory.
It's
probably
not
in
there
so
yeah
that
for
windows
that
that
might
actually
be
true
and
security.
People
might
want
to
look
into
that.
A
All right, I'll take that as moving on. Thank you, Christian, for coming and talking to this.
A
The
next
cap
that
we're
tracking
is
the
windows
privileged
container
support
for
alpha
I've
been
working
on
this
along
with
some
other
folks.
I
have
not
seen
too
many
reviews
on
of
or
for
the
kept
outside
of
sig
api
and
jordan
leggett.
So
I'd
appreciate
some
looks
at
that.
Some
updates
we
have
here
are.
A
We
now
have
a
proof
of
concept
of
this
working.
Hopefully,
you'll
try
and
have
a
demo
of
this
for
next
week.
In
order
for
this
proof
of
concept
to
to
run,
there
are
a
couple
of
binaries
that
need
to
be
built
with
changes,
so
I've
called
those
out
here.
One
is
hcs
shim
and
there's
a
pull
request
with
the
changes
that
are
needed
for
hcs
shim
container
d
binary.
A
You
can
actually
just
pick
up
from
the
container
or
recent
container
d
release
and
the
cubelet
has
some
changes
in
order
to
pass
some
annotations
along
and
then
your
your
pod
specs
need
to
have
this
particular
annotation
added,
so
we'll
try
and
demo
that
next
week
and
get
a
little
bit
more
feedback.
I
did
a
little
bit
of
kind
of
preliminary
testing
and
things
were
mostly
working
as
expected,
so
I
think
that
that's
good,
I'm
kind
of
in
the
same
boat,
I'm
waiting
for
we're
waiting
for
feedback
from
a
couple
other
cigs.
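To make the moving parts concrete, a pod spec for the proof of concept would look roughly like the following. This is a hypothetical sketch: the meeting doesn't name the actual annotation key, so the one shown is a placeholder, and the API shape was still under review at this point.

```yaml
# Hypothetical sketch only: the real PoC annotation key is not named in
# this meeting, so "example.com/privileged-windows" is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo
  annotations:
    example.com/privileged-windows: "true"   # placeholder key
spec:
  hostNetwork: true          # host network is the only supported mode
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: demo
      image: mcr.microsoft.com/windows/nanoserver:1809
```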
A
Are there folks here that are interested in kicking that off again? In the KEP we've called out that host network mode is the only supported network mode for these Windows privileged containers for the future, and we will be enforcing that through API validation for the pod specs for now. So, if anybody has any concerns about that specifically, or wants to deep-dive into what that actually means, we can set up another deep-dive meeting for this.
D
I'm
totally
in
on
any
tax
session
deep
dive.
Whatever
about
any
of
this
stuff,
I
think
it's
totally
awesome
and
we
totally
need
it.
So
this
is
really
cool,
just
whatever
like
there's
nothing,
that's
higher
priority
for
us
than
this.
So
if
you
schedule
it
for
4
a.m
on
saturday
morning,
I'll
totally
be
there.
So
don't
worry
about
what
time
it
is
just
whenever
you
want
to
show
us
just.
Let
us
know
we'll
yeah.
B
Okay,
it
will
also
be
beneficial
for
openshift
2,
so
I'd
like
to
join
from
red
hat
sign.
A
Okay,
yeah
in
my
kind
of
experimenting,
I
was
able
to
run
csi
proxy
in
a
privileged
container
and
run
through
some
scenarios
and
it
looked
like
everything
was
working.
I
think
the
next
thing
to
try
is
probably
getting
q
proxy
running
in
a
privileged
container.
If
there's
any
other
things
to
try
after
that,
please
let
me
know
all
right
and
moving
on,
I
see
ray
joined
and
I'm
not
sure
if
you
wanted
to
discuss
what
we
were
talking
about
on
slack
about
the
security
audits
or
not
ray.
H
Yeah,
I
should
have
my
video
on
all
right.
My
name
is
ray
lahano,
I'm
from
sousa
by
way
of
rancher
labs.
I
also
work
a
lot
with
sig
security
and
and
sig
release
with
one
that
leads
for
1.21
for
docs.
H
So
what
we
chat
about
in
slack
was
that
there
will
be
a
an
rfp
out
for
security
audits
for
this
year
and
a
third-party
security
audit
for
kubernetes
windows
is
not
in
scope,
but
as
for
my
work
as
we've
seen,
a
growing
number
of
windows,
customers
and
windows
end
users
for
future
audits.
We
do
want
to
have
windows
in
scope
of
future
third-party
security
audits,
and
ideally,
I
think
I
think
the
cadence
will
be
annually
but
yeah.
H
I
might
be
incorrect
with
that
for
a
third
party
security
audit,
so
so
it
won't
be
in
this
year's
audit,
but
I
am,
I
do
want
to
have
next
year's
audit
and
one
of
my
goals,
or
they
they
might
ask
one
of
the
questions-
was
how
many
end
users
or
windows
and
user
users
are
there.
So
I
know
from
from
my
from
my
perspective.
H
We
don't
keep
track
of
how
many
windows
and
users
there
are,
but
I
don't
know
if
there
are
any
other
people
here
that
where
they're
coming,
I
do
keep
track
of
that.
But
I'm
not
it's
not
required
yet,
but
I
do
kind
of
want
to
have
that.
You
know
to
as
a
to
support
the
case
or
to
support
adding
windows
be
in
the
security
audit
for
next
year.
A
All
right,
I
think
that
that
yeah,
that
would
be
awesome
to
get
windows
included.
As
I
mentioned
to
you
in
slack,
I
will
need
to
reach
out
to
some
folks
to
see
if
there's
anything,
we
can
just
disclose
about
user
numbers
from
aks.
A
If
that
would
help,
I
don't
know,
is
anybody
on
the
call
interested
in
helping
to
get
this
kind
of
security
audit
going
and
if
they
are
and
have
numbers
to
share,
please
reach
out
to
either
ray
myself
or
any
of
the
tech
leads
here
I'll
try
and
get
this
happening.
H
And
I'll
probably
reach
out
again
around
around
the
fall
time,
so
I
could
start
gaining
that
start
building
the
case
for
adding
windows
and
put
that
on
people's
radar.
While
ahead
of
when
the
next
rfp
is
out.
H
All
right
and
in
case
I'm
going
to
put
the
the
rfp
the
link
to
rfp
on
the
traps
finder
right
now:
okay,
so
if
anyone's
curious
to
what
the
current
rfp
is.
A
I think that's everything that we had on the agenda for today. Does anybody else have any topics they'd like to discuss, or want to go deeper into something we've already discussed?
B
I
want
to
talk
about
one
particular
github
issue
that
dims
has
created.
This
is
for
replacing
docker
shin
with
container
details,
so
I
think
all
of
us
are
on
agreement.
I
just
want
to
make
sure
that
all
of
us
are
in
agreement
regarding
the
type
of
test
to
be
cleared.
The
docker
shim
tests
earlier
were
using
club
entry
cloud
provider
as
the
plug-in,
whereas
we
wanted
to
use
the
csa
or
we
are
currently
using
csa
plugin
for
container
dds.
B
So
the
initial
plan
of
thought
that
I
had
earlier
was,
I
would
create
a
test
which
is
almost
similar
to
what
we
have
in
docker
shim.
But
the
way
I
see
it
currently
is
we
want
to
use
csi.
I
just
want
to
make
sure
that
everyone
is
in
agreement.
If,
if
you
are
fine,
I
will
go
ahead
and
update
the
pr
that
I
opened
to
ensure
not
to
use
the
entry
plugins,
but
to
use
the
csi
proxy.
A
From
my
perspective,
I
think
that
we
have
at
least
periodic
tests
that
use
the
that
use
csi
products
or
that
use
the
entry
plug-ins
or
they
use
docker
shim
and
csi
proxy.
A
But
I
do
think
that
you're
correct
that
the
the
ones
that
get
triggered
automatically
are
using
docker
shim
and
to
see
it
or
end
the
entry
plugins,
I'm
not
entirely
sure
on
the
timelines.
Maybe
deeper
jing
can
comment
here,
they're
much
more
involved
with
storage
than
I
am,
but
I've
kind
of
been.
My
view
is
that
we
should
pursue
the
csi
proxy.
A
I
thought
that
the
entry
storage
plug-ins
were
slated
to
be
removed
a
couple
of
releases
ago
already
in
favor
of
out
of
tree
plug-ins
and
csi
proxies
the
way
of
for
enabling
those
on
windows.
So
I
think
I'd,
I'm
okay,
with
only
enabling
the
new
tests
for
continuity
and
csi
proxy
okay,
yep.
C
All
right,
I
one
minute-
I
just
say
thanks
to
claudu,
for
all
the
work
he's
done
with
all
the
image
promotion
stuff
we've
actually
had.
Some
test
runs
pulling
from
the
gce
container
registry,
so
we
got
those
images
now
being
pulled
and
we
don't
have
to
for
at
least
a
subset
of
the
images
we
don't
have
to
mirror
those
to
a
windows
only
repository.
So
just
it
was
a
pretty
big
accomplishment
from
for
all
the
testing
work.
That's
been
happening
over
the
last
two
years.
So
thanks.
D
Yeah,
that's
awesome.
I
have
a
quick
question.
I
forgot
to
bring
this
up.
I
have
the
network
policy
tests
running,
but
I
have
not
been
able
to
get
them
to
pass
on
any
windows
environment
like
I've
run
them
on.
I
mean
I've
run
them
on
calculon
eks
with
an
up-to-date
calico.
I've
run
them
on
various
like
andrea
versions,
I've
run
them
well.
I
know
that
and
azure
in
aks
we
don't
yet
support
network
policies
for
windows.
E
Currently it's going to be the SIG Testing meeting, so I'm going to have to attend that as well. But after that, sure; after one hour. Okay.