From YouTube: Kubernetes SIG Windows 20201020
A
Hello everybody, and welcome to another SIG Windows meeting. It's the 20th of October today. As always, please adhere to the CNCF code of conduct; this is a recorded meeting, as always. Let me share my screen here and we'll get started.

A
As a point of reference, we did record the SIG Windows presentation for KubeCon North America, and it's going to air on the 20th of November, at our actual meeting time, I believe 3 p.m. Eastern. If any of you are interested in checking it out early, I will post a link here in a second. Be aware that that link and that video may not be available forever, so if you want to watch it, go ahead and download it from the Google Drive. I'm putting the link here just so everybody knows; it's about 25 minutes of presentation.

A
After that we're going to have 10 minutes of Q&A. Feel free to show up during that presentation at KubeCon; hopefully some of you already have passes. We're going to get some good engagement from customers; usually customers and users will be posting questions throughout the presentation, so essentially it's 35 minutes with Q&A. All right, cool. Next item.
B
Hey folks, just wanted to give a quick update on the work we've been doing at Red Hat with Windows. We now have an operator, which we have aptly named the Windows Machine Config Operator, which allows you to add a Windows node to an existing OpenShift cluster. The process is very easy: all you need to do is bring up an OpenShift cluster, either on Amazon or Azure; at the moment we're working on vSphere support. You bring the cluster up, and it needs to have the OVN hybrid networking configuration; you bring up a cluster of this nature.
B
The operator is watching for machines that get created as a result of this MachineSet CR, and it does all the work of doing the kubelet configuration and the networking configuration, and, as we go along, you know, we'll also do logging and monitoring. Then you will have a Windows node attached to your cluster as a worker, and you can deploy your Windows workloads. We have released it as a community operator that you can now install from the OpenShift OperatorHub, which is present on every cluster.
B
The main issue that we have run into is the one that I've linked there, which is about the Windows containers restarting on vSphere environments. I think Mark has paid some attention to it, and he asked us to open it up in a particular GitHub issue location, the Windows Containers repository, so we've opened it up there. We need some help from Microsoft here; we've been working with Jocelyn, who's been awesome and giving us a lot of support, but he's saying that we need some help from the Microsoft compute team.
C
Yeah, I just mentioned that yesterday, I think, and I've just opened up the issue. I'll just say, for everybody on the recording: the Windows Containers repository under the Microsoft org on GitHub is probably the quickest path to get to the container platform devs at Microsoft. So I'll try and follow up on that if there's no action on it soon.
B
So what we are asking is that the customer or the user provides an image: one of the inputs in the MachineSet is an actual, you know, AMI or Azure image ID that you need to put in. We say that it needs to be a Windows Server image with the Docker runtime installed, and then we do the rest of the work. But we treat it as cattle, so as part of upgrades we'll sometimes tear it down and bring it back up.
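As a rough sketch of what that input looks like, here is an abbreviated AWS MachineSet for a Windows worker. The AMI ID, names, and label values are placeholders, and the exact providerSpec fields depend on the OpenShift version, so treat this as illustrative rather than a copy-paste manifest:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: windows-worker            # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        machine.openshift.io/os-id: Windows   # marks the machine as Windows
    spec:
      providerSpec:
        value:
          # The user-supplied input: a Windows Server AMI with the
          # Docker runtime installed (placeholder ID).
          ami:
            id: ami-0123456789abcdef0
          instanceType: m5a.large
```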
A
I mean, from a strategy standpoint, you've seen that a lot of the investments are happening here from the team, with James and Amber and others leading the effort around Cluster API. You know, how does that affect your strategy, right? I mean, this is almost duplicative work around, you know, standing up Windows environments specifically for OpenShift.
A
You know, is that an opportunity for us to kind of join forces, and maybe get you guys onto that API as well? Or is there something that you've seen so far that might not work?
B
No, we have no such restriction; we're not saying, oh, we'll only use MachineSets. We're fine using different APIs; it's something for us to keep in mind. We can definitely join forces. I know that folks from Red Hat are also involved on the Cluster API side, so we're definitely keeping an eye on those developments.
A
At the very least, maybe the team that built this machine config operator should review the Cluster API support that we have for Windows and give feedback to James and team. You know, you guys were here first, right? So maybe you found something that we wouldn't know about, or a gotcha; maybe that could influence our design.
A
Thank you, appreciate it. All right, Mark, the next item is yours.
C
I wanted to bring attention to the 10C Windows optional update package, which should be releasing today, because it has a couple of important fixes for Windows containers, especially on Windows Server 2019.
C
This should have our fixes for the DNS connectivity issues for containers on Windows Server 2019, and also the single file mapping support for containers on Windows Server 2019, which we've been waiting for for well over a year. The 10C package, like all of the C-release packages, is an optional update, so you won't necessarily get prompted to install it through Windows Update.
C
So I've provided a link that folks can use to follow or to check on that, and these fixes will also be rolled up into the next cumulative update package, which is set to release on November 10th as well. But the 10C package is there if folks are running into any issues with DNS and want to patch their Windows nodes now; I think, at least on the Microsoft windows-containers issue for the DNS problem, we've been recommending not to take the September or October cumulative roll-ups because of the DNS connectivity issues there.
C
Yeah, so there's an issue that we've been tracking. Let me pull it up; let me just link to the GitHub issue. It's on the Windows side; it's been reported in a couple of places.
C
Yeah, I'm looking at it in the chat right now. So there were issues with DNS connectivity in containers on Windows Server 2019 that were present in the 8B, 9B, and 10B cumulative update packages, and there's some guidance, or rather more information, in there. Oh, it looks like it's already been edited in; I'll add it to the agenda. Okay.
C
It's already been added to the agenda. The 10C package, which is the optional package that gets released in the third week of October, should contain fixes for that.
C
Thank you. But I guess, to repeat: we've been recommending that customers who are running into the DNS issues keep their Windows nodes on the August cumulative update patches until it's resolved, and not take the September or October cumulative updates.
E
Okay, and I was also wondering about the single file mapping for Kubernetes volumes: does that work with dockershim, or does that still depend on containerd?
C
So the single file mappings only work with containerd right now; those were changes to the HCS v2 APIs.
C
Single file mappings have always worked in containerd, but there was an issue where, if you were trying to map a file into a container and that file already existed in the container, the container would just freak out and fail to start. Now the fixes are in place, so if you try to map a file into a container where it already exists, it'll just get updated with whatever you wanted to map it with. This is important for the scenarios where we want to update the /etc/hosts file on the Windows nodes, because that file is present but empty in the Windows Server based container images, and the kubelet will, under many circumstances, try to update it. That's been coded around and excluded in the kubelet to work around the issue of the containers failing to start.
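In Kubernetes terms, the scenario being fixed is a single-file mount whose target path already exists inside the Windows image, along the lines of this minimal sketch (the image tag and all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-file-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    volumeMounts:
    # Maps a single file over a path that already exists in the image;
    # before the fix, the container would fail to start in this case.
    - name: cfg
      mountPath: C:\config\app.conf
      subPath: app.conf
  volumes:
  - name: cfg
    configMap:
      name: app-config            # assumed to exist, with key app.conf
```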
F
Yeah, actually I think Claudiu or Adelina is probably the best person to talk about it.
G
Yeah, so there's going to be a couple of updates when it comes to Docker Hub. I don't know if you've been aware of this, but I think James linked an issue on the testing repo regarding this; I think that's the right link. The idea is that Docker Hub is going to rate-limit the number of image pulls people are able to do: unauthenticated users are going to get 100 image pulls per six hours per unique IP, and authenticated users 200 image pulls per six hours.
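To see how quickly CI burns through that quota, a back-of-the-envelope sketch helps; the run and image counts below are made-up assumptions for illustration, not measured figures from the test infrastructure:

```python
# Sketch: estimate Docker Hub pulls consumed in one six-hour window.
# The job/image counts are hypothetical, purely for illustration.

UNAUTHENTICATED_LIMIT = 100  # pulls per 6 h per unique IP
AUTHENTICATED_LIMIT = 200    # pulls per 6 h per authenticated user

def pulls_per_window(runs_per_six_hours, images_per_run):
    """Total image pulls a set of CI runs issues in one six-hour window."""
    return runs_per_six_hours * images_per_run

def window_exceeded(pulls, limit):
    """True if the pull count blows past the given Docker Hub limit."""
    return pulls > limit

# e.g. 10 test runs in six hours, each pulling 15 images from Docker Hub:
pulls = pulls_per_window(runs_per_six_hours=10, images_per_run=15)
print(pulls, window_exceeded(pulls, UNAUTHENTICATED_LIMIT))  # prints: 150 True
```

Even these modest hypothetical numbers exceed the unauthenticated limit, which is the point the speaker is making about regular test runs.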
G
So, basically, that rate limit is going to be hit quite fast by the regular Kubernetes test runs. There's also already a mirror called mirror.gcr.io, which basically contains mirrored images from Docker Hub, but from what I saw it only contains images for linux/amd64; they are not manifest lists that also contain entries for other architecture types or other operating systems like Windows.
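The distinction is visible in the image manifest itself: a multi-platform image is an index whose `manifests` entries each carry a platform field. A small sketch of checking that, using a made-up index in the OCI image-index shape:

```python
import json

# A trimmed, made-up manifest list in the OCI image-index shape;
# the digests are placeholders, not real image digests.
SAMPLE_INDEX = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:bbb", "platform": {"os": "windows", "architecture": "amd64"}}
  ]
}
""")

def platforms(index):
    """Return the os/architecture pairs a manifest list covers."""
    return {
        f'{m["platform"]["os"]}/{m["platform"]["architecture"]}'
        for m in index.get("manifests", [])
    }

print(sorted(platforms(SAMPLE_INDEX)))  # ['linux/amd64', 'windows/amd64']
```

A linux-only mirror image would show just `linux/amd64` here, which is why such a mirror doesn't help the Windows test jobs.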
G
So we are going to have a couple of solutions. First of all, in the Docker Hub FAQ there's a paragraph at the end which basically says that Docker Hub still wants to offer dedicated plans for open source projects, in which we fit. Here's the link regarding the November updates, and the main suggestion would be to apply for that.
G
That open source plan would basically mean that we will probably have an elevated or promoted account which will not be affected by the rate limiting, and that will solve all our test runs, including the Azure ones and the Google ones, as long as they're authenticated.
G
So that would be the first suggestion. The second suggestion would be to use a Docker mirror cache registry (a pull-through cache) and configure the test jobs to use that registry instead. That basically means that if the cache doesn't have an image, it will pull it from Docker Hub, and even if it does have it, it will check for updates; checking for updates is not rate limited, which is great, and that would basically save us from the rate limiting issue.
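For reference, pointing a Docker engine at such a mirror is a one-line daemon setting; the mirror URL here is a placeholder, not a real SIG-hosted endpoint:

```json
{
  "registry-mirrors": ["https://registry-mirror.example.internal"]
}
```

With this in `/etc/docker/daemon.json` (and a daemon restart), pulls of Docker Hub images are tried against the mirror first.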
G
The third solution would be to finish the Windows support pull requests for the test images, which I started, I don't know, last year or something. I only have two pull requests left, and one of them is almost approved; it only needs approval for one single file, which has two lines of code changed.
G
When that merge happens, the image builder and image promoter will build the Windows images as well and push them to the Kubernetes gcr.io registry, and then we won't have to use Docker Hub for testing, and that will also solve our issue for the most part. We'd still have the same issues as the regular Kubernetes test runs, but that's also something that I've volunteered to help on.
A
High level, Claudiu, thank you for outlining all of these. You know, if we can finish the Windows image promoter and then just get all our images into the GCR mirror registry everybody's using, then we kind of snap to everything that the rest of the community does, right? So I think, to me, that's ideal, because then it gives us a central registry, so we can run jobs anywhere we want, on-prem or in the cloud, and it also aligns with the rest of the test infrastructure for the community.
G
The thing is that this is a time-sensitive issue. We are going to hit this problem very hard on November 1st, so we have to have our solution ready when that comes up.

A
How long do you think you will need to kind of finish that work, out of curiosity?

G
Nothing, really; I don't have any work left to do for those pull requests. I finished them some time ago.
G
In addition to that, we've now merged some buildx implementation for the test image building process, which basically means we won't need any Windows build nodes to build the test images at all; that merged last week, actually. So all we need is for those PRs to go through, and they're ready.
G
I was hoping to see the SIG Windows leads do this, because this is a SIG Windows issue, and that will cover Azure and Google as well, for the entire community, not just for us.
H
Just one note here: we'll still need to create a Docker Hub user for, say, SIG Windows, because at the moment the e2eteam organization is under my personal account. I mean, it's an organization, but I'm the owner, and I think I added Mark as well.
H
A bunch of other people have read/write access, so we will still need a dedicated user for this; just keep that in mind.
A
I mean, we probably need a functional account for SIG Windows that can be created by Docker. I'm assuming, I don't know how this process will work, but Docker will probably create it as a functional account for SIG Windows, give it the rights to bypass their limitations, and then give us the account, and we can put it in our automation. Yeah.
I
And another clarification: I think there's another blog post coming out shortly clarifying some of this. So November 1st is not the hard deadline; that's when I think the throttles will start happening, and I think it'll be at least well into January before they officially ramp things all the way up to the 100-pull limit over six hours. So you still have a little more time, I think.
A
All right. So, by the way, there are also ways to solve this with Harbor, using a local mirror of the registry. But let's see what happens with the Docker open source plan, and we can figure out what we can do later on. So there's five minutes left, folks, and I think we kind of covered everything we had for today. Are there any questions from anybody on anything?