From YouTube: Kubernetes SIG Windows 20200317
A: Hello, everybody, and welcome to another SIG Windows meetup. It's the 17th of March. As always, this is a recorded meeting, so please adhere to the code of conduct set forth by the CNCF. A couple of updates for today — some good news: all our code PRs and documentation PRs for 1.18 have been merged. That includes kubeadm going to beta; both the code and documentation PRs are in. Big thanks to Ben Moss and team, and Gab and others, who have gotten us through this journey — I really appreciate that. RunAsUserName and GMSA — both code and documentation are in. Big thanks to Deep and Mark — and, you know, Adelina also worked a little bit on some of the work there — as well as Jeremy, who shepherded a lot of the process and has been, I think, kind of the lead while Deep is out. And then the CRI containerd work — big thanks to the Microsoft team, led by Patrick, on that one; both docs and code are in. So, guys, this is the biggest release in a long time.
C: Thank you, David. Yeah. So, basically, you know, we've been working on this feature related to load balancers for Windows containers. We currently have two different load-balancing modes — maybe three — but the main ones are DSR and non-DSR.
The non-DSR one is the one that is used by default: today, all services use this load-balancing mode on Windows. Basically, there's an implementation difference between them. In non-DSR — the default, current configuration — any kind of service or load-balanced traffic has to go through a host vNIC that we created previously during setup, which acts as a mux for all the load-balancing operations and contains all the NAT rules and the VFP policies to select the right backend IP.
Now, in this design there are some side effects: the source pod IP is being obscured — if you try, for example, to curl a service from within the pod — and there are other implications, such as scalability and latency, in this setup. So you can imagine that the source pod IP being obscured causes some surprises for users, because it's different than on Linux, if they try use cases like applying network policies or monitoring packet flows.
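To make the side effect concrete, here is a sketch (the service and pod names are hypothetical, not from the meeting) of how one might observe the masqueraded source IP:

```shell
# Hypothetical names for illustration. Assumes an echo-style backend
# ("whoami") that reports the client address it sees.

# From inside a client pod, curl the service:
kubectl exec client-pod -- curl -s http://whoami-svc/

# In the default (non-DSR) configuration on Windows, the backend
# reports the host vNIC address rather than the client pod's IP,
# because the traffic is SNAT'ed on its way through the host.
```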
Now, DSR — what does it stand for, Peter is asking — stands for Direct Server Return. In DSR mode — this is the new mode we're introducing — all the IP fix-ups and all the NAT rules are applied per container, on the vSwitch port directly, and the service traffic arrives without the source IP being masqueraded: it arrives with the originating pod IP. And it promises lower latency.
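For reference — and as a hedged sketch, since exact flags can vary by release — DSR is opt-in in kube-proxy on Windows via a feature gate and a flag; the network name and kubeconfig path below are placeholders:

```shell
REM Sketch (Windows cmd syntax): running kube-proxy in DSR mode.
REM WinDSR is a feature gate; --enable-dsr turns the mode on.
REM Placeholder values: HNS network name and kubeconfig path.
kube-proxy.exe --proxy-mode=kernelspace ^
  --feature-gates=WinDSR=true ^
  --enable-dsr=true ^
  --network-name=Ethernet ^
  --kubeconfig=C:\k\config
```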
It also enables a larger number of services to be created in a cluster. There are a couple of issues going around where people — you know, specifically Windows users — are trying to scale up to hundreds and hundreds of services, and they're hitting these bottlenecks in the default non-DSR configuration, because it only allows the creation of so many: it's only scalable to a certain amount, on the order of a couple hundred services, until you run out of available ports — and then DSR lifts that limit.
That's how the issue usually manifests. Every node has to keep track of all the services in the cluster, so even if you create a large number of, like, Linux services, the Windows nodes still need to plumb rules to know that the service exists — so you'll be bottlenecked even if you're creating many Linux services, not just ones backed by the Windows pods themselves. Yeah.
On the implementation that Callie is mentioning — in 1903, where they shipped an initial version — sorry, but only container-to-container service traffic would work; like, no NodePort access would work, and no host-to-service. But all of that should be working with the backport. So it's a large, large backport.
A: What we'll do next week is — I want everybody to start thinking about what kind of priorities, or kind of things, they want to work on for 1.19. As I see it, there are two highest priorities, and they're not in order: containerd and Cluster API. So we want to continue making advancements there. And some of the others — they're going to kill me for not mentioning it here — CSI as well.
A
So
so
those
are
our
three
biggest
priorities,
but
obviously,
if
there
is
features
or
capabilities
anybody's
interested-
and
they
really
want
to
come
in
and
help
us
deliver-
that
we
can
talk
about
that.
So
I
think
that
you
know
obviously
additional
things
like
server
performance
and
and
scale
who've
made
significant
improvements
on
that
through
the
work,
tomorrow's
led
to
the
stats
and
some
other
capabilities.
Well,
we
will
continue
working
on
those
as
we
find
items
that
are
critically
important
to
improving
our
performance.
We'll do that, but at the high level, for new features that are going to take Windows to new areas and new capabilities within the Kubernetes ecosystem, those are the three main priorities. So think about that, and next week we can go over it and start figuring out who might take the lead on some of these areas and how to move forward.
F: For the containerd priority to graduate to beta, which is targeted for 1.19, there are two items that caught my attention: one is GMSA parity with dockershim; the other one is slipping my mind. Marv, do you remember which one it was? I think it's the named pipes — yeah, the named pipes. What I'm wondering is: is there any item in the backlog for these two, specifically for tracking containerd?
A: I was hoping to talk about it in the last two weeks, but unfortunately I was kind of swamped. But Mark and I have been working a little bit on giving it a bit more priority, like attaching labels and stuff. So next week, hopefully, I will be able to show you a different view of the backlog in terms of priority, so it may be a little bit clearer — but next week I'll be talking about that. Alright.
A: Then obviously, as part of taking containerd to beta, I'll probably have to have a deeper look around Hyper-V isolation and see if that's something you can enable. So, yes — I don't think that's a heavy requirement for beta; I think it would be a nice-to-have, since most folks still use Windows Server containers. But it's something that we should think about. I guess what I'm saying is that features go to beta when you have significant amounts of the existing features — so someone that's using containerd versus Docker...
A: All right, well, everybody — thank you all for all the work you did for 1.18. A tremendous, tremendous release; it's gonna go very well with our community. Please stay safe and practice social distancing — we want everybody to be safe and sound, including your families, in this new climate — and we'll see you all next week. Bye, thanks.