From YouTube: Kubernetes SIG-Windows 20221115
Description
No description was provided for this meeting.
A
All right, hello everybody, and welcome to the November 15th, 2022 iteration of the Kubernetes SIG-Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF code of conduct. We'll get started with a couple of announcements. The first one is that I think we're going to cancel next week's community meeting; it's a U.S. holiday. If anybody does want to run the meeting, feel free to reach out in Slack or anywhere, and we can get somebody else to run the meeting. I will be out, and I think probably a number of other people will be too, so just let me know; if not, we will cancel the meeting. The next announcement is that docs PRs for enhancements need to be ready for review by today.
A
I believe I have one open for HostProcess containers, which has already gotten some reviews, if people want to take a look at that; I'll link that here after the meeting too. Regular docs updates, just kind of normal docs updates, I don't think need to be open yet, but the ones for enhancements do. Does anybody else have any announcements?
C
We have three or four from SIG Windows: one on HostProcess, one for the SIG, and then there is one from an end-user company where they were talking about how they scaled their Kubernetes clusters.
A
They're on the YouTube channel, I guess under the CNCF handle now; I'll try and link to the individual videos later. Good call-out, yeah. I think the last of those went up like two days ago, so it's pretty new.
A
So we don't really have anything on the agenda. I was wondering, James and Fabian, would you guys want to do a review of some of those documentation updates while we have a lot of people and eyes on it? Or if anybody else has anything they want to cover.
C
Yeah, definitely happy talking about that. Okay, I left a few comments on the PR, but maybe we could talk through them here.
C
Cool, so I don't know exactly how to go about this, but on Friday... and Fabian, thank you for all the effort that you've put in here. I know there's a lot of moving parts, and this is definitely a challenging thing to get set up, and it's going to be very valuable, because we get a lot of folks looking for this. So I went through it on Friday, and I started to go through some of the steps; I just went through the flannel workflow.

They moved from deploying flannel in kube-system to deploying it into this kube-flannel namespace, and one of the big things there is that, because we deployed everything else into kube-flannel, we were having problems with RBAC, which I think you addressed by adding the RBAC files. I think we should deploy all our flannel stuff into the same kube-flannel namespace, but one of the big things that gets in the way is how we're mounting some of the additional things to get some metadata, to be able to set the network up properly, and so I think we'll have to address those items in particular. I made a comment here, but I think one thing we're pulling in is kube-proxy's kubeconfig, which lives in kube-system; and so now, if we deploy kube-flannel into the kube-flannel namespace, we don't have access to that ConfigMap.
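For reference, the RBAC files mentioned above would look roughly like the following sketch, modeled on the shape of flannel's upstream kube-flannel.yml; the exact names and rules in the PR may differ.

```yaml
# Sketch only: flannel's ServiceAccount moves into its own namespace,
# with a ClusterRole/ClusterRoleBinding granting the node and pod access
# the CNI needs. Names mirror upstream flannel but may differ in the PR.
apiVersion: v1
kind: Namespace
metadata:
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: flannel
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["nodes/status"]
  verbs: ["patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
```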
C
The only reason we're using that ConfigMap is because of the way containerd mounts things into the sandbox, and once containerd 1.7 comes out, that's not a problem, because we can use the in-cluster config. But in the meantime it probably just makes sense to deploy our own version of that and maintain it until we don't need it anymore; and it actually simplifies some of the logic inside the flannel script.
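A minimal sketch of what "deploying our own version" of that kubeconfig could look like; the ConfigMap name here is hypothetical, and the API server address is cluster-specific.

```yaml
# Hypothetical copy of the kube-proxy-style kubeconfig, owned by the
# flannel manifests so nothing has to be read out of kube-system.
# Note: on HostProcess containers with containerd < 1.7, these paths may
# need the CONTAINER_SANDBOX_MOUNT_POINT prefix.
apiVersion: v1
kind: ConfigMap
metadata:
  name: flannel-kubeconfig        # assumed name
  namespace: kube-flannel
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - name: default
      cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://API_SERVER_HOST:6443   # cluster-specific
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    contexts:
    - name: default
      context:
        cluster: default
        user: default
    current-context: default
```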
C
And then the other part of that was that we were using the kubeadm ConfigMap, which also lives in kube-system, and which, again, we wouldn't have access to. The only things we're using it for are the pod CIDR and the service CIDR. The pod CIDR is in the kube-flannel config these days; it didn't used to be there. So we could just pull it from there, and we'll have access to that in the flannel namespace.
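The pod CIDR being referred to lives in flannel's own ConfigMap, roughly of this shape; the CIDR and backend values shown are just common defaults and are cluster-specific.

```yaml
# Shape of flannel's own config: the pod CIDR ("Network") can be read
# from here instead of from kubeadm's ConfigMap in kube-system.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```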
C
And then the service CIDR: the service CIDR across the cluster is always pretty static, and so we could just make it an environment variable, and then we don't need to pull those extra ConfigMaps. That kind of removes the need for us to deploy into kube-system, and I think that would simplify some of the additional RBAC and stuff, yeah.
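A sketch of that environment-variable idea; the DaemonSet name, image, and variable name here are placeholders, and the CIDR shown is just kubeadm's default.

```yaml
# Hypothetical fragment: pass the (static) service CIDR directly to the
# Windows flannel DaemonSet instead of reading kubeadm's ConfigMap.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-windows   # assumed name
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel-windows
  template:
    metadata:
      labels:
        app: flannel-windows
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: kube-flannel
        image: example.com/flannel-windows:latest   # placeholder image
        env:
        - name: SERVICE_CIDR          # assumed variable name
          value: "10.96.0.0/12"       # cluster-specific; kubeadm default
```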
C
So, Mark, go ahead.
A
I just checked: there is a beta.0 release for containerd 1.7, so I think they're probably getting close to doing a 1.7 release. Mike, I wonder, should we just work with the docs and update this to say you need containerd 1.7, and say for now to use containerd 1.7-beta.0? Would that simplify things?
C
Works for me. There's a lot of things we get from using containerd 1.7, I think; and we know the tests pass, because we've been running those as well, yeah.
C
Not much that we're going to have to change. The part that would need to be changed would be this part, and we need to change it anyway because of this flannel namespace thing.
C
We still have the stuff to get the config information for the kube service, like the service CIDRs and things like that. Again, this would potentially be fixed if we had some of those things that I suggested we do with kube-proxy. Yeah, long term, so...
C
Yeah, so what I noticed was that the way the PR has Calico being installed for Linux is using the Calico operator, which has a whole different set of namespaces and everything than the way it was initially developed, which was against the release YAMLs that they provide. So Calico seems to provide an operator as well as a set of release files, and if you go to the Calico GitHub repository and pull the release files, they have a different namespace that everything gets deployed into.
C
And so I think all we have to do is just make sure Calico is installed using those release files, and I think a lot of that goes away for Calico.
B
Yeah, I think I left that in here in the guide.
C
Yeah, right here: so you can get it from this URL, whereas the operator has a whole different set of namespaces and RBAC associated with it.
C
I think once containerd does go stable, we can... I don't know if anybody's interested, but we could potentially push the changes up to Calico, and then we don't have to maintain this Calico image anymore, which has always been the long-term plan. But until 1.7 gets released, I don't know that it makes a whole lot of sense to do that, but yeah.
C
So if anybody's looking for opportunities to contribute, that'd be a fun one. I've done most of the work in the past, and so I could help kind of guide where that work would go.
C
Cool, yeah. Again, thanks for the work with all the different moving parts and everything.
A
Okay, so the response: Tim has basically said, do whatever Jordan says.
A
This type of issue is not super common, but other people have definitely run into it, and it does seem to happen more often when the v1 or stable APIs are being touched, which sometimes is a necessity. One of the suggestions from a couple of the other leads was to figure out how to work with the release manager, to have a dedicated time in the release where some of these large PRs that have slipped multiple times get prioritized; the only thing we could really think to do was have the release manager kind of help set some priorities for some of the reviewers. So that's one option, and we'll talk about that at the release retrospective, which I think will happen after the release. If you want to show up, you definitely can; let me see when it's scheduled, and we'll try and figure out what to do about this type of issue. You're not alone. But again, people have acknowledged the problem and don't necessarily have a great solution.
A
Yeah, so that was one thing that came up too: we need more API reviewers. A couple of people and I pointed out that that might help the problem, but I don't think it's going to solve the problem, because some other SIGs had said that. Basically, that was one of the questions, and I didn't even pose it.
A
You
we
can't
undo
that
so
that
leads
to
people
wanting
to
see
the
implementation
which
leads
to
this
massive
PR
and
then
other
other.
Some.
Some
other
folks
had
mentioned
too
that
like
well.
The
other
side
is,
if
you
try
and
Stage
it
up
into
a
whole
bunch
of
PR,
as
you
get
into
a
case
where
not
all
the
Pier's
land,
and
then
you
have
these
half
enabled
features
which
it
sounds
like
other
cigs
might
have
are
running
into
that
problem
more
and
we
kind
of
discussed
like
okay.