From YouTube: Kubernetes SIG Windows 20201208
A
It just keeps you warm, you know. We had a couple of other folks from the Velero team earlier; they're in, like, 10-20 degree temperatures and their fingers are freezing. So let's go to our agenda, folks. One thing I want to call out really quickly: you know, this year has been pretty tough for everybody, with COVID, virtual conferences, and everything that's going on with work and personal lives throughout the world.
C
Yes, so there's a new Windows Server SAC, or semi-annual channel, release that came out, I think, in November, last month. I wouldn't normally have raised that as an announcement here, but it has caused a little bit of confusion, due to a naming update that I wanted to call out and mention to everybody.
C
So previously the SAC releases were named year-year month-month, so you'd have 1809 and 2004. We've decided to rename the SAC updates to be the year plus the half of the year it got released in, so 20H2 for the second half of 2020. This was done largely to avoid some of the confusion around the fact that Windows Server 2019 is older than Windows Server, version 2004, despite the number being newer.
C
So there's a new naming scheme, and that's what it is. The confusion this has raised is that there are some images for Windows containers, published to MCR and other places, that are still using the 2009 kind of nomenclature, and I wanted to make everyone aware that those two tags, 2009 and 20H2, are interchangeable.
C
So if you see some container images that are published with the 2004 tag and some that are published with the 2009 tag, you can chain those together in a build and run them on a Windows Server 20H2 host.
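As a minimal sketch of that kind of chaining (the repository paths are the standard MCR ones; the app paths are illustrative), a multi-stage Dockerfile can mix the tags, and the result runs on a 20H2 host because the 2009 and 20H2 tags refer to the same release:

    # Build stage uses an image published with the 2004 tag...
    FROM mcr.microsoft.com/windows/servercore:2004 AS build
    # ...build steps for the app would go here (illustrative)...

    # ...and the final stage uses an image published with the 2009 tag,
    # which is the same release as 20H2, so this can run on a 20H2 host.
    FROM mcr.microsoft.com/windows/nanoserver:2009
    COPY --from=build C:/app C:/app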
A
Thank you, Mark. And very likely, Mark, probably the next... I don't know; we didn't change 1.20, so we still support 2004 in 1.20, right? We didn't change the official docs, yeah.
C
We'll update... we have some changes in the works to start building all of the container images that are used by Kubernetes and update those in 1.21, yeah. And also, I believe... I'm trying to find some public documentation; I'll drop that announcement in the SIG Windows channel too when I find it, so folks can see it there.
A
Yeah, if you search for 20H2, there's some documentation on Microsoft's site basically saying that these are the two SAC releases for this year: 2004 and 20H2. I like 20H2 better than 2004; it was like "Windows Server, comma, version 2004", yeah. All right, let's go on. Anything else on that, Mark?
A
Excellent. Amber, can you give us an update on privileged containers?
D
Yeah, so this is mostly a follow-up on the last time we brought up privileged containers, to give people an overview from the very beginning. You know, we took a different kind of approach, where we wanted to use job object containers, and that allowed us to get access to a lot of different host services, or networking, and so on and so forth.
D
One of the issues that was brought up later, as we were trying to approach 1.20, was the question about different service mesh scenarios. Originally our proposal was that we were going to require every pod that contains privileged containers to be entirely privileged. Folks brought up different scenarios in which aligning a privileged container with the pod compartment, enabling that privileged container in a pod networking compartment to have net admin access so it can modify the host from within that same pod, is something that was essential for certain service mesh scenarios.
D
After a lot of investigation internally, we've decided to scope that out of this initial push that we're going to go through for privileged containers. There's some assessment going on as to whether this is something we could ever bring in, being able to align with the pod compartment, but as it stands right now it would involve a lot more investigation and work on our side, and it's not in scope for this current approach.
D
So moving forward with 1.21, we've scoped down: for alpha we're going to either go out with runtime classes or have a kind of pod spec update included in alpha for 1.21. All of this work is already included; it's prepared. We had this prepared for 1.20, but we did the assessment of the pod compartment alignment at the time, and then once we go into alpha, at that stage, we are going to...
D
...you know, continue to do investigation on pod compartments, but at least for alpha it's going to be scoped here. The goal is to make sure that we get at least the basic functionality for privileged containers out in the open, for people to be able to use and start being able to work with.
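For readers unfamiliar with the runtime class option Amber mentions above, here is a minimal sketch of what selecting a special Windows runtime via RuntimeClass could look like; the RuntimeClass schema is standard, but the names and the handler are purely hypothetical, since the alpha shape was still being decided:

    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: windows-privileged       # hypothetical name
    handler: runhcs-wcow-privileged  # hypothetical handler configured on the node

    # A pod would then opt in via:
    #   spec:
    #     runtimeClassName: windows-privileged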
A
Very cool. Safe to assume that you're on track for review and alpha in version 1.21, doing all the things you just mentioned?
D
Currently there's nothing that would block that, but, you know, there's still some logistical stuff that we'd have to get through; we're completely prepared, but we're getting to that stage and going through that process now. Luckily, we did a lot of this work, or some of the work, already in 1.20, and there are some updates and refinements that we'd have to do to the KEP and so on and so forth. But otherwise I think we're pretty much on track for that.
A
You'll probably push the KEP for review as soon as possible, yeah? I mean, I know the KEP review deadline will be sometime in January. I don't know if they've published the dates yet; I haven't seen an email yet. But once they publish them, it will probably be sometime in January. We should try to see if we can have the KEP ready to roll even now in December.
D
Yeah, yeah, and that's kind of our goal right now. We went through the process of coming to this decision, and all the work that we're doing for the remainder of whatever time people are actually available in December is going to be spent on getting us prepared for that, and we'll roll into January with, hopefully, all the ducks in a row. So...
E
Perfect, thank you, appreciate it. And we would need, I think, Windows help. Once we have the alpha out, we probably want to test the scenarios that we think work and have validated, or that we are proposing, putting in the proposal. But we want to make sure, because privileged containers have a lot of dependencies.
E
I know, Deep, you're thinking about CSI Proxy; there's a lot of other stuff coming. So we want to make sure what works in alpha, because now we're moving quicker and faster, like, ahead; we can get the feedback earlier as well, yeah. So I...
A
I mean, I consider, like, you know, with every feature, sometimes you have to think about the killer app, right? For Windows, the killer app was Office; that's how Windows and Office kind of banded together and dominated a big part of the industry. For this scenario, for these privileged containers, your killer app is CSI Proxy, to begin with. I don't know if there's anything around the world of CNI that would be necessary here.
A
That's worth exploring, but, you know, we need to find a killer app and kind of push for it, and CSI Proxy seems to be that. So, you know, we need to band together for that, absolutely.
B
I have a quick question, Amber. I'm just trying to make sure that we relay what we need to do to SIG Network. So did you all decide that we're not going to block on any API changes?
D
I mean, so almost all the changes for this particular implementation with the runtime class would only be contained in, like, our hcsshim layer, basically. So it doesn't touch too many things; we can get it out in a form which, I think, wouldn't necessarily require too much assessment from SIG Network or so forth. So we're trying to go out with the lowest-friction form in alpha.
D
I think what we've found, at least in a lot of discussions, is that there's a lot of information: people bring up scenarios, people provide different inputs, but we're all kind of spinning our wheels trying to think about how this thing would work in reality. I think it'll be a lot easier, once we get it out in alpha, for people to approach this with something in hand. So that is the goal, right; we're not trying to put too many...
D
We will see. We have a good guess of what API changes or pod spec changes we have in mind that we can propose for alpha; if people have a huge allergic reaction to whatever we propose, we do have the alternative of runtime classes. Of course, our team is still looking at where the best place within the pod spec to have it for alpha is, but we do have a couple of options here, and it's mostly... our goal is to get this out.
D
...exist in the pod spec, right, and that's when things, you know, get real, I guess. And in that case, we're kind of trying to unblock ourselves, or, you know, stop spinning our wheels, and just trying to get to alpha. We understand that there is something to assess between alpha and beta, and we're developing a checklist for this; we have our in-between checklist, at least in our own minds, and some of that includes, like, okay...
D
...you know, we've made a decision on certain networking scenarios, such as the one regarding the service mesh scenarios, or aligning to a pod compartment: we've decided to rule this out, which is the way that I think SIG Network was leaning anyway, at least at this time, but we're just making sure we have alignment there through the meeting cadences. And, like, you know, we're now in the holiday season, trying to get to January to get us rolling into 1.21.
C
What we're entertaining, and what we're kind of circling around, is having a privileged field on the Windows security context, and not messing with any of the other pod security policies like hostPorts, hostNetwork, and so on, at this point.
F
Yeah, yes, okay, great. So I actually have a little graphic here. For Windows, in kube-proxy we have two different load balancing modes: one is called DSR, and one is called non-DSR, or, well, it really doesn't have a name, because it's the default configuration and the original implementation. So, to give an overview of what the traffic flows look like before we jump into advantages and disadvantages: the non-DSR flow is the default flow that kube-proxy uses for service traffic.
F
The other advantage is that there's better data path performance and, you know, reduced network latency, since you don't have to go through the intermediary hop here and do all these expensive operations. Another advantage is that there's improved transparency of the network traffic flows; the traffic flow is just simpler, as you can see on the right-hand side, and it is also recommended for Kubernetes network policies.
F
So if you're using, you know, Calico or some solution like that, you should probably run kube-proxy in DSR mode. Otherwise, the obscuring of addresses that happens here can cause issues when you're trying to apply network policies and you're trying to reach out through a Kubernetes service and the IP gets obscured. Also, DSR mode is required for advanced network configurations such as client IP preservation or destination IP preservation.
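For reference, DSR is opt-in on Windows nodes via a kube-proxy feature gate plus flag; a sketch (the remaining flags depend on your cluster setup):

    # Enable DSR load balancing in the Windows kube-proxy (kernelspace mode).
    kube-proxy.exe --proxy-mode=kernelspace --feature-gates="WinDSR=true" --enable-dsr=true ...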
F
Any questions on that point? Basically, you have to stop kube-proxy, then you have to remove the HNS policies, and then you can remove the network. Previously, you know, you could sometimes just delete the network, and for non-DSR policies that would be fine: it would clean up the policies itself and everything would be gone. But with the DSR load balancing policies, you have to clean them up first, before deleting the associated network.
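A sketch of that teardown order in PowerShell, assuming the HNS helper cmdlets from the hns.psm1 module in the microsoft/SDN repo are loaded; the service and network names are illustrative:

    # 1. Stop kube-proxy so it stops reconciling HNS policies.
    Stop-Service kubeproxy          # service name depends on your install

    # 2. Remove the (DSR) load balancer policies first...
    Get-HnsPolicyList | Remove-HnsPolicyList

    # 3. ...and only then delete the associated network.
    Get-HnsNetwork | Where-Object Name -EQ "cbr0" | Remove-HnsNetwork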
F
The next thing to consider is, if you're using DSR mode, there's a special case where, if a pod tries to access a service and the service redirects back to the pod itself, you need to add a configuration to your CNI. I actually have a link here; it's called loopback DSR, and you basically have to set this setting, or this field, in your CNI config. Otherwise, the traffic will get dropped for this special case.
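A sketch of where such a field might sit in a CNI config; the exact key name and placement vary by plugin and version, so treat this as illustrative and check your plugin's documentation for the loopback DSR setting:

    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "type": "sdnbridge",
      "optionalFlags": {
        "loopbackDSR": true
      }
    }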
F
So that's one thing that has caused some confusion, as to why traffic is being dropped if it's being redirected back to itself. You know, usually users try to spin up one service, and it's redirecting back to itself, and they try to demo it and say, well, pod access doesn't really work. So yeah, this needs to be added in the DSR...
F
Well, that's how you enable it originally, so yeah, you need to pass this. This is how.
F
So basically what that means is: if the traffic hits a given node, you need to make sure that the pod is running locally on that node, since the traffic will not be forwarded across nodes, because the client IP needs to be preserved.
F
Basically, the node-to-node forwarding is disabled. So usually you might have, you know, some pods here, and the traffic could be directed to either node, right? And what I'm saying is: if traffic gets redirected here, it's not going to forward to this guy, because the client IP needs to be preserved. If we did this, then the client IP would need to change.
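In Kubernetes terms, this is the externalTrafficPolicy: Local behavior: traffic is only delivered to nodes that have a local endpoint, so the client IP can be preserved. A minimal sketch of a Service configured that way (names are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: win-webserver            # illustrative name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local   # preserve client IP; no node-to-node forwarding
      selector:
        app: win-webserver
      ports:
      - port: 80
        targetPort: 80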
A
Hey, David, we're almost running out of time here as well. So a couple of things came up. David, you should submit a session for this at KubeCon, independently, to talk about networking in Windows and its advances. If for some reason it doesn't get approved, then we'll probably dedicate either half or the entire SIG Windows talk at the next KubeCon for you to just do a networking deep dive, like, you know, level-400 networking.
F
Okay, okay, yeah. Okay, then the last thing, just really quickly: you need Kubernetes 1.20 for client IP preservation, 1.19 just for DSR mode, and you need to make sure you're running Windows Server 2019 and, you know, use at least the November Patch Tuesday update. And this is...
E
So if somebody's running 1.20 with the 2019 November patch, or 2004, they should have the client IP preservation and DSR; everything should work for them if they are configuring it right, right?
A
Okay, and we can probably talk about the shortest item, which is the Node Problem Detector. Jeremy, give us a quick update.
G
Sure. So basically, I'm starting to submit the first couple of PRs for it, and then I'll be handing it off to somebody that'll actually be working on it for a portion of their time. But just real quick: if anybody has any feedback or anything they want to add to it, or if they want to contribute...