From YouTube: Kubernetes SIG Windows 20210105
A: Good morning, good afternoon, good evening, everybody, and welcome to our first meeting of 2021. Hopefully this year goes better than last year for everybody. Hopefully everybody also had some rest during the break, without a lot of meetings, at least not in the United States. I really got a chance to reconnect with family and just spend some time outside of work.
A: So we have a few updates today; I'll try to see if we can get through everything in the 25 minutes that we have. I'm going to skip the first one and do that in a little bit. Let's start with the second item: if anyone is interested in attending this meeting on a frequent basis, please go ahead and join the Google group, kubernetes-sig-windows, because that's where we send the calendar invite, so please make sure you're a member of that group. We will likely update our calendar invite for the new year, so after today we'll cancel the existing calendar invite and send a new one.
B: Yeah, so we missed this at the last meeting, but we wanted to give a shout-out to James, who won the contributor award for SIG Windows. We wanted to highlight all the work that he's done: first of all, coming up to speed pretty rapidly on all the Windows work, and then also all of the work that he's done with Cluster API and Windows support. So congratulations, James.
A: Absolutely. James, you've been doing great lately, and, you know, "lately" meaning ever since you joined the SIG. We need to reward you for a lot of the work that you've done, so keep it up. You've been a great contributor to the project.
A: If you haven't gotten that, I will search for that email and see if I can find out why we didn't get one for SIG Windows. You may have gotten it already, that's why I'm asking. Either way, when we get the chance for a session we'll likely sign one up, and I know we had some great ideas since last session for things that we could try to do here, right?

B: Yeah, I don't think I got one. I'll have to check on that too.
A: For the next one, I want to talk a little bit about our 1.21 investments, kind of moving forward and trying to set the stage in terms of where 1.21 will go. So we have privileged containers, where the KEP PR is merged in provisional state. We have some of the work around scraping event logs, where we potentially still need a KEP as well as a KEP owner. There are load-balancing health checks for externalTrafficPolicy; that's some of the work that David, and I don't see David on today, but David and team were looking into. Then Cluster API: Azure is merged, but we don't really have a clear picture for everything else. Then we have containerd, CSI storage, and then the last one, the GPU device support that got merged over the break, which was great to actually see. Well, it was merged and broke some things.
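(For reference, the load-balancing health checks mentioned here concern the `externalTrafficPolicy` field on a Kubernetes Service. A minimal sketch of the kind of Service involved; the names and ports are hypothetical:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-app            # hypothetical name
spec:
  type: LoadBalancer
  # "Local" preserves the client source IP and makes kube-proxy answer
  # load-balancer health checks only on nodes that host a backing pod.
  externalTrafficPolicy: Local
  selector:
    app: win-app
  ports:
  - port: 80
    targetPort: 8080
```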
A: So thank you, James, and a few others, Jay and others, who contributed to fixing that very quickly over the break and getting things back to green. But it was great to see that; that was a PR that had been sitting around for a while.
A: Very nice, thank you, Aravind! So let's get the KEP ready. It doesn't have to be too huge, right; it's not a huge amount of work here. Let's keep the KEP scoped to basically the work involved and make it simple; that way it can get passed very quickly and get approved. Thank you for taking ownership of that.
D: Please include me as well in this one, scraping the logs; I am getting a lot of requests on this one. So if you can just include me, or CC me, when you write the KEP, I would love to help.
C: I'll try and open a work-in-progress one within the next couple of weeks.
A: That would be good, yeah. Give me one second, I'll find the deadlines. While I'm doing that: Gab, I'm kind of looking at you sideways for the vSphere CSI proxy work. What's your thinking there, Gab?
A: That makes sense. We likely don't need a KEP here, but we kind of need someone to scope the work and basically start executing on it, right?
A: So when you get a chance, just keep us up to date on that as well.
A: All right, anything anybody else wants to own here?
A: So there's no ETA right now; this is basically shipping outside of the Kubernetes release cycle. Gab is going to scope the work and start working on it. We don't have a specific date here, but if anybody's interested and can help accelerate that, I'm sure Gab will welcome additional contributions.
A: That would be the right thing, but it doesn't exist. Okay, so let me go back. 1.19, sorry, 1.17, that was around the same time frame; their dates were end of January, right?
A: So, you know, go ahead and create the KEP as early as possible; that way it gives us enough time to review it and get it in there. But kind of looking at our big boulders for the next release: the GA work and bug fixing for containerd, improving containerd; privileged containers is way up there; some more enhancements around event logs and externalTrafficPolicy; Cluster API; and then obviously some of the CSI work.
D: Do you have an owner for the privileged containers one, Michael?

A: I think Brandon is now driving it, since Amber has transitioned out. Yeah.
G: What about the investments we're making into the testing and infrastructure side of things? I know James was working real hard on that over the break, and I'm just getting up to speed on how all of our CI works. I wouldn't mind shadowing someone to help out more, or just digging into some stuff. I know there's a lot of random stuff, like some containerd CI work that needs to be done, and I believe there's some stuff related to the 1909 images, but...
A: Yeah, Adelina and Claudiu, along with Mark, have been kind of shepherding a lot of the leadership around our CI and some of the work there. Jay, if you have bandwidth and you can contribute and assist in some of those areas, helping accelerate things or bringing in a needed capability that we don't have today, I'd say sync with Claudiu and Adelina and let's figure out where you can contribute.
A: Okay, the one thing I guess I should have talked about: for Cluster API, Azure is merged, but we haven't kind of taken stock of, you know, GCP or AWS. Jay, or anybody else from the CAPI team, does anybody know what the status is for AWS? There are three Cluster API providers that we care about a lot: vSphere, AWS, and GCP.
E: Speaking for AWS: for the provider, there are no provider changes needed, and for the image builder I sent out a PR, and that's sufficient to get Windows nodes up.
H: I'm just closing out some of the vSphere image-builder stuff that James commented on before the end of the year, and then there's an issue with nested Hyper-V virtualization that we're trying to get an idea, or a steer, on, and James is helping me with that in Slack.
G: What's the ultimate conclusion there? Are we going to try to get your Hyper-V-independent vSwitch enablement stuff into upstream somehow, or...?
H: I've put a request into the windows-containers issues; I've put an issue in just to sort of talk about whether we could add the VMSwitch stuff into the Containers feature, or at least split it out so it's a separate thing to install. That would get around the issue, because you wouldn't actually have to install Hyper-V for most installs unless you're using Hyper-V isolation.
A: And then Jerry mentioned that we don't know if there's anybody looking at Cluster API for GCP. Jeremy, is this something where you might be able to nudge a few folks at Google and see if we can get someone involved?
For
right
now,
this
hasn't
really
been
a
priority
for
us
I
mean
I
could
look
back
and
see,
but
as
of
right
now
we
haven't
had
any
plans.
We
haven't
scoped
this
at
all.
A: Okay, all right, thank you. I mean, if the opportunity ever comes up for a discussion on this, Jeremy, we'll have to see if we can actually get it scoped out and add it; that way we can have comprehensive CAPI support across the board for Windows.
B: All right, do we want to, especially since Jeremy's here, add node-problem-detector for Windows to the list of 1.21 enhancements? I think there was a proposal passed around in 1.20 gathering ideas, and there are hopefully plans to make some progress on that in the next couple of releases.
I: Yeah, so actually it's now building for Windows; the PRs to get it to build are in now, and the next step is to make it run as a Windows service. And then we do have somebody on our side who will be working on this. Though if anybody wants to help out with whatever other problems they want to detect, feel free to add to the documents and contribute code. We'll basically be implementing the things that we're mostly concerned about for now, mainly because we didn't really get much feedback.
I: We got some feedback in terms of what was important, so basically, right now, whatever gets implemented will be the one or two things the community said were important, which ended up aligning pretty well with Google.
A: Once you do that, folks are going to identify areas where they need additional help, and then go and amend the problem detector and add additional things that it will start detecting, additional things it's going to look for, and so on and so forth. So I think it's going to be almost like a self-fulfilling prophecy: once it's actually out there and it adds value, folks are going to start using it, and then they're going to add their own extensions to it.
I: Yeah, that's kind of how I thought it was going to go down as well, so I'm not too concerned or anything like that. But the first version of it will have a fairly small scope, just so that we can get it out. And I'm not sure how it will be tied to releases; since node-problem-detector is a Kubernetes addon, I don't think it's actually tied to a particular release of Kubernetes. I'll give a heads-up when it's in a state where it can be used.
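(For context on the extension point being discussed: node-problem-detector's checks are driven by JSON monitor definitions, and a custom-plugin monitor invokes an external script to set a node condition. A rough sketch for a hypothetical Windows check; the source name, condition names, and script path are all illustrative, not from the actual project:)

```json
{
  "plugin": "custom",
  "pluginConfig": {
    "invoke_interval": "30s",
    "timeout": "5s"
  },
  "source": "windows-health-monitor",
  "conditions": [
    {
      "type": "ContainerRuntimeUnhealthy",
      "reason": "ContainerRuntimeIsHealthy",
      "message": "container runtime service is healthy"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "ContainerRuntimeUnhealthy",
      "reason": "ContainerRuntimeUnhealthy",
      "path": "C:\\etc\\node-problem-detector\\check_runtime.ps1"
    }
  ]
}
```

This is the "add your own extensions" path mentioned above: contributors can drop in new conditions and rules without changing the detector itself.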
A: All right, the next item I've added, as part of the discussions I've been having with a lot of you, is around image builder and the AWS, Azure, and GCP Windows images. I have a couple of questions out of curiosity, and I think Jay and I had a discussion on this as well: are cloud providers going to bake the kubelet into the images or not? I'd love to have a small discussion on this.
J: So I guess, I think, for Windows, at least initially, I was baking the kubelet into the image. It needs to be installed with nssm to be able to support some of the kubeadm functionality, and I modeled that after the way they're doing it on Linux: they install the kubelet, and it's assumed to be there. I'm not super familiar with kubeadm actions, so I assume that provides a way to be able to do something when the init gets called.
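(As a rough illustration of the nssm approach described here: nssm wraps an arbitrary executable as a Windows service. The `C:\k` paths and kubelet flags below are hypothetical, not the actual image-builder scripts:)

```powershell
# Register kubelet.exe as a Windows service via nssm (sketch; paths are assumptions)
nssm install kubelet C:\k\kubelet.exe
nssm set kubelet AppDirectory C:\k
nssm set kubelet AppParameters --config=C:\k\kubelet-config.yaml --kubeconfig=C:\k\kubeconfig
nssm start kubelet
```

Baking this into the image means kubeadm-style provisioning can assume the kubelet service already exists, mirroring the Linux packages.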
J: Right now it comes with the kubelet installed on the image, so if they choose to use the kind of image that's provided by CAPZ, it will have the kubelet on it, and they don't have to do any modification of the images.
J: Yes, yep, it has everything you need baked in right now. This actually comes up against something Mark and I were discussing a little bit yesterday, that we weren't sure if we had time to talk about, but we've had this conversation in Slack as well, with containerd.
J: The way that the CNI is installed with Docker in Cluster API right now, it's being added at provisioning time, and we're using the host network, kind of a hack around the host network, and that's causing issues when we go to containerd, because containerd doesn't behave the same way as Docker does with the hostNetwork flag, which is giving an extra IP address. So one of the workarounds here is to install the CNI when you're building the image via image builder.
J: The hope is that with the privileged-container support, because we'll be able to run things on the node and we also get that host network, this problem kind of goes away for the CNI. But I think we need to figure out if that's an acceptable thing for all the providers, or, if you do it...

J: That's how we'd have to do it with containerd, because of the limitation with the host network: the CNI pod won't come up because it doesn't have an IP address, because Windows doesn't have the host network with containerd. We kind of work around that with the hostNetwork flag on the pod spec in Docker, because Docker doesn't really check whether or not there's a host network for Windows. If there's a network already on the host called "host", I think it is, then it just attaches the container to that, and so it gets an IP address. But the problem is that that doesn't work in containerd, because it doesn't respect it; it doesn't have that workaround, I think.
B: Yeah, I think essentially, with Docker and dockershim, if you set hostNetwork to true on the pod spec, it essentially just adds --network=host when it creates the containers, and assumes that there's already a network configured on the host. With containerd, things happen quite a bit differently, and maybe we should have a longer discussion next week, but for containerd there's a field that says whether hostNetwork is true, which gets passed over the CRI calls to configure everything, and the Windows code paths in containerd itself basically say this is not supported, we're not going to do anything here. So we'll likely need to make updates in Kubernetes and in containerd to support this for Windows containers.
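(To make the discussion concrete: the field in question sits on the pod spec. A minimal sketch; the image name is hypothetical:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cni-agent
spec:
  # With dockershim this is translated into `--network=host` at container
  # creation; the Windows code paths in containerd's CRI plugin currently
  # report it as unsupported, which is the gap described above.
  hostNetwork: true
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: cni-agent
    image: example.registry/win-cni:placeholder  # hypothetical image
```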
A: All right, I've got one last update, and then we'll switch to Aravind for the last one. After five years of kind of helping and shepherding SIG Windows, I'm going to be stepping down as a chair, probably effective as soon as reasonably possible; I might make the PR today. I'm actually leaving VMware to join a consulting and services company, and it's been a blast working with a lot of you throughout the years, from some of the folks that have been around for many years now to some of the newer folks that have joined our community and kind of helped us grow.
A: I'm super proud of all the work that we've done, all of us together, kind of bringing Windows from obscurity to a viable path forward for folks that are looking to modernize their workloads onto Kubernetes and into containers. So I'm very proud of the work that we did, even though I won't be a builder; I won't be able to help build this community anymore.
A: I will be a consumer, and I'll be a huge advocate for not only Kubernetes but also the Windows effort that we've done. I'm actually going to be stepping down as a maintainer of some of the other projects I own as well, like Harbor and Contour, which are in the CNCF, one graduated and one incubating. And as part of that changing of the guard, Ben Moss is also stepping down from his technical-lead role due to some other priorities.
A: We've gotten no objections to our two proposals, so Jay Vyas and James are going to be our two additional tech leads on SIG Windows, joining Deep Debroy.
A: Deep is one of the remaining old-guard tech leads. So a big congratulations to Gab, sorry, to Jay and James, for their contributions and their recognition as technical leads; we'll PR you in very soon. And let me use this also as an open call: if anybody's interested in stepping up their contributions to our SIG and becoming a chair, please contact Mark, and Mark and the rest of the tech leads will work with you to figure out the path forward and how to make that happen.
B: Yeah, and I want to thank Michael for all of his contributions here. He's been involved in this much longer than I have, but I know he was very heavily involved with the initial effort to GA Windows containers in Kubernetes, and he has had a huge impact on all aspects of Windows containers in the community, even outside of Kubernetes. So thank you, Michael, and we want to wish you the best of luck in your new endeavors.
A: I'll attend the next meeting as well, and after that I'll probably start scaling down, since I'll be starting my new gig. But if you ever need me, ping me on LinkedIn, or even on the Kubernetes Slack; I'll try to monitor that on and off. All right, Aravind, do you want to give us a quick update on the Windows container support by Red Hat?
C: Yes, so towards the end of last year, December 14th, or December 18th I think, we GA'd our Windows container support for Red Hat OpenShift. So it's fully supported by us now; it's out in the open. I've linked to the blog that makes the announcement, and I've also linked to a demo, which is more like a Twitch session with our PM and my manager Gabe.
A: Cool, all right. Well, everybody, have a great start to your year, happy new year, and I will personally see you next week as well. This SIG has done incredible work, and I'm very confident we're going to continue kicking butt and delivering great things. So I look forward to 1.21. Thank you, everybody. Bye.