From YouTube: Kubernetes SIG Windows 20200929
B
So just a quick question: does anyone know if there's a tool to read a dump file? I created a dump file, and how do I read it?
A
What kind of dump file did you create? A kernel dump, from a VM, or from a container?
B
Library
so
yeah
from
I
use
these
two
called
proc,
proc
dump
or.
C
There's
a
tool
suite
called,
I
think
you
can
just
search
online
for
when
dbg
was.
C
Yeah,
there's
a
there's,
a
tool
suite
for
debugging
windows
and
one
of
the
tools
is
when
dbg
and
you
can
load
in
a
dump
file,
either
like
a
mini
dump
or
a
process
dump
or
a
full
dump
and
debug
with
that
and
there's.
It
should
come
with
a
bunch
of
documentation.
B
So WinDbg, how do I install it? Because I only have a command line; can I install it through the command line?
A
There
is
a
command
line
version
of
when
windy,
which
is
a
graphical
tool,
there's
a
command
line
version
as
well.
I
included
a
link
here
link
on
the
chat
where
the
debugger
tools
are.
There
is
a
command
line
tool
as
well.
Kb
is
the
command
line,
tool
or
cbd
as
well
cdb
as
well.
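For reference, here is a minimal sketch of that workflow driven from Go, assuming procdump.exe (Sysinternals) and cdb.exe (from the Debugging Tools for Windows) are on PATH; the PID and dump path are placeholders:

```go
// Capture a process dump with ProcDump, then run cdb's automated
// crash analysis on it. Assumes procdump.exe and cdb.exe are on PATH;
// the PID and dump path below are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pid := "4321"              // placeholder: PID of the process to dump
	dump := `C:\dumps\app.dmp` // placeholder: where to write the dump

	// -ma writes a full memory dump; -accepteula skips the EULA prompt.
	capture := exec.Command("procdump.exe", "-accepteula", "-ma", pid, dump)
	if out, err := capture.CombinedOutput(); err != nil {
		fmt.Printf("procdump failed: %v\n%s", err, out)
		return
	}

	// -z loads the dump; -c runs "!analyze -v" and quits, so the
	// post-mortem analysis is printed to stdout non-interactively.
	analyze := exec.Command("cdb.exe", "-z", dump, "-c", "!analyze -v; q")
	out, err := analyze.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("cdb failed: %v\n", err)
	}
}
```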
A
And I believe Microsoft has a GitHub page for folks working with containers and otherwise to be able to file issues, so Jin, depending on where she gets with that, can probably file a ticket there. It's github.com/microsoft/containers or something, yeah.
A
And I'll probably add them to the notes as well; that's the only way other folks can see them when they're not part of the meeting. All right, let me share my screen here. By the way folks, just FYI, this meeting gets recorded from the moment the first person joins. So if you join before everybody else, keep your video off if you like; don't assume that because nobody else is on the meeting yet that you're safe.
A
All right. So let's take our first agenda item. By the way, like I said, the meeting is always recorded, and we always adhere to the CNCF code of conduct. Kalia, are you online? I can't see the list of everyone that's on. Do you want to talk about privileged containers?
F
Okay, yeah. So basically in the past week or so, we identified another scenario for privileged containers, and so we've been talking with the container platform team at Microsoft. Basically, this scenario is regarding service meshes.
F
So right now the container networking team is working on bringing Windows support for service meshes with Envoy, and that is in its very, very early stages. There's nothing to test or try out yet, but we're just identifying all of the scenarios and working through the design. The one that I've been working on is Open Service Mesh; I'm not totally familiar with how other meshes work, so other people may have context on that.
F
So basically we're having those discussions, and if someone else has any scenario that models that, it would be good to know about it and comment on the KEP, because right now in the KEP, all of the containers have to be completely unprivileged, since there's no Linux capabilities equivalent in Windows.
A
That's
a
good
point
by
the
way
I
have
heard
through
reliable
sources
that
the
envoy
released
in
october
will
have
better
support
for
windows
as
well,
and
I
know
there's
a
team
from
microsoft
as
well
as
some
ex
pivotal
folks
that
have
been
working
on
that
and
they're.
The
key
drivers
of
that
capability.
H
The overall problem is that you can only have certain containers in a pod be privileged, or they cannot be privileged, correct?
I
So I can elaborate on the larger problem in two directions. One is that in our original KEP, we established that if the sandbox container was privileged, then everything would have to be privileged, due to aligning all the IP addresses in the pod, which, at least from our standpoint, is pretty core to some of the Kubernetes pod concepts.
I
What we were trying to investigate: we realized that this posed a couple of challenges, because there are scenarios that do require mixed cases, and so we were looking at whether there were ways for us to enable a non-privileged pod having a privileged container added into it that could be aligned with the network compartment of the pod itself. That would address the Kubernetes paradigms where it gets challenging.
I
With
the
scenario
that
kalia
I'm
kind
of
demonstrated
or
described,
is
that
at
least
in
windows,
it's
hard
for
us
to
reveal
host
networking
if
we
actually
align
it
with
a
network
compartment
of
a
non-privileged
pod,
and
in
that
scenario,
so
we're
kind
of
in
this
trade-off.
Where,
in
the
current
world,
you
could
just
allow
for
like
the
break
in
the
concept
of
a
pod
having
a
consistent
ip
address.
I
So you could have a non-privileged pod and still deploy this init container, and there would just be a very short moment in time where that pod has several IP addresses working within it. Though we wouldn't necessarily advise this for things that are less ephemeral than the init container scenario. But if we start doing things where, by default, we align the privileged container to whatever the pod network compartment is, then that by default would make things like the init container scenario not possible, or we'd have to find some other workaround to be able to expose that host networking for these particular instances. We're investigating those portions right now.
A
All right, so yeah, that makes even more sense; it's all about mixing privileged and non-privileged containers in the same pod. Yeah, keep us updated on this. I think that's a very interesting scenario, and I see the value here and why some of these different tools might require that.
H
I'm just going to add: scanning through some of the CSI plugins, they typically also have just the main plugin container be privileged, and the rest, the sidecars, be non-privileged. So that's a typical model. I'm not sure if making the whole pod privileged would be a huge security problem, but that's just a data point here.
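As a concrete illustration of that mixed model, here is a minimal sketch of a pod spec with a privileged main plugin container and an unprivileged sidecar, built with the Kubernetes Go types; the names and images are placeholders, and the privileged flag shown uses the Linux semantics that, per the discussion above, has no Windows equivalent yet:

```go
// A pod spec mixing one privileged container with an unprivileged
// sidecar, as CSI node plugins commonly do on Linux. Names and images
// are placeholders; Windows has no equivalent of Privileged yet.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "csi-node-plugin"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The main plugin needs host access, so it is privileged.
					Name:            "plugin",
					Image:           "example.com/csi-plugin:latest",
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					// The registrar sidecar stays unprivileged.
					Name:            "node-driver-registrar",
					Image:           "example.com/registrar:latest",
					SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Printf("%s", out)
}
```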
F
Can
you
send
me
links
to
like
the
pod
stacks
deep,
on
slack
sure,
cool
thanks.
G
And this is exactly the kind of feedback we're looking for, if anybody else has worked with any other open service mesh or, you know, with this. Thank you for bringing that up. That would be good to have as a data point as well, because the thing we're trying to avoid is solving it for just one particular scenario and then excluding others, because then we start looking really deeply into these: how to set these network compartments, how to access the host network, and all those kinds of capabilities. But there may be other scenarios you'd probably want to look into as well. So if anybody has other scenarios, please let Amber, myself, Kalia, anybody know, as we're looking into this.
C
Then
trying
to
add,
I
think
we
were
talking
a
little
bit
about
what
this
meant
for
the
the
timelines,
especially
with
enhancement
freeze
coming
next
week,
and
I
think
the
current
plans
are
we're
going
to
try
and
get
the
that
google
doc
it
to
a
markdown
and
merge
that
as
I
kept
and
keep
it
in
the
provisional
state.
Unless
we
can
answer
some
of
these
questions,
just
to
make
it
easier
to
for
people
to
find
and
iterate.
A
But I want to mention one thing: we could iterate on this, right? So maybe the first release of privileged containers on Windows, which would likely be alpha, doesn't have support for mixing and matching privileged and unprivileged containers in a pod. Maybe the first release only has one mode, and then over time we get feedback and iterate on it. So the KEP that you have today, maybe that's good enough for the alpha release, just to start getting feedback.
C
Yeah,
I
think
so.
We've
been
also
talking
with
some
folks
from
signode
and
it's
desirable
to
settle
on
the
cri
changes
that
are
needed
before
any
of
the
implementation
starts.
The
cri
api
is
kind
of
susceptible
or
that's
kind
of
they're,
very
hesitant
to
have
changes
go
in
knowing
that
they're
going
to
be
reworked
because
of
how
it
gets
distributed
from
the
kubernetes
repo
into
a
vendored
into
the
other
repos.
C
So we were considering that, and I think for any of the Kubernetes APIs that's acceptable. We're trying to find a good balance and to not knowingly introduce changes that will have to be changed in the CRI API in the future.
A
Make
sense
kali
anything
else.
D
That being said, I actually heard yesterday of someone reporting an issue or a regression on this, so I am a little bit hesitant in this conversation, and I want to investigate that reported problem in a little more detail. But regardless, if we want to promote DSR to beta, it would be good to have end-to-end tests running for this as well, at least across one networking solution, or at least one configuration.
A
I mean, the test here would really be a stress test, right, both for DSR and endpoint slices, I think both of them together. On the Linux side of the house, they added tests for endpoint slices. Is it possible we can leverage those on Windows and then also add some extra tests for DSR? You know, scaling the number of containers per pod, the number of endpoints per node. I'm sure they have those tests.
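For context, the heart of an endpoint slices check is listing the EndpointSlices behind a service and counting the ready endpoint addresses. A minimal sketch with client-go follows; the namespace, service name, and kubeconfig path are placeholders, and the v1beta1 discovery API matches the era of this meeting:

```go
// List the EndpointSlices behind a service and count ready endpoint
// addresses, the kind of assertion the endpoint slices e2e tests make.
// Namespace, service name, and kubeconfig path are placeholders.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// EndpointSlices carry a label naming the service that owns them.
	slices, err := client.DiscoveryV1beta1().EndpointSlices("default").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=my-service"},
	)
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			if ep.Conditions.Ready != nil && *ep.Conditions.Ready {
				ready += len(ep.Addresses)
			}
		}
	}
	fmt.Printf("%d slices, %d ready endpoint addresses\n", len(slices.Items), ready)
}
```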
D
Right
I
mean
for
the
dsr
case,
I
think
in
just
in
general
testing
that
none
of
the
current
scenarios
and
none
of
the
current
tests
would
result
in
failing
as
a
result
of
the
change,
would
also
be
useful,
but
yeah
enabling
the
endpoint
slices
would
also
be
useful
if
you
want
to
promote
that
feature
to
beta
as
well
so
yeah,
that's
something
we
have
to
look
into
before
before
we
can
promote
it.
J
I think I would need to get more familiar with the endpoint slices tests.
D
Yeah
last
time
I
checked
we're
actually
passing
the
endpoint
slices
tests.
A
So
david,
I
don't
know,
maybe
it's
worthwhile
david
for
you
to
kind
of
maybe
spend
a
little
bit
of
time
and
seeing
what
those
tests
do
for
endpoint
slices
either.
We
can
suggest
some
modifications
and
add
a
few
more
extra
tests
that
are
very
specific
to
dsr
or
if
the
end
point
slices
tests
are
good
enough
and
they're
passing
on
windows,
then
maybe
that's
a
good
signal,
but
you
know
obviously
you're
you're
the
expert
here
that
can
tell
us
that.
C
Yeah, we could just set up another test cluster config and have a periodic job just run all the tests on some hosts with DSR enabled.
C
I'll file an issue in the Windows testing repo and we can figure out who gets to it first. But yeah, either Adeline or myself, or Ernest on our team, who has been doing a lot of the test cluster configs, can take care of that.
A
That's it, thank you. All right, Jeremy: node problem detector issue here.
K
Hello. So yeah, I mentioned this a couple of weeks back, and you also said to just write a proposal for it, so pretty much, here it is. For all who don't know, node problem detector is basically a Kubernetes add-on that runs alongside the kubelet, or on a Windows node in the case we're proposing, and it gives signals about the more in-depth state of the node itself.
K
You can have it basically scan the node for various problems. On Linux, it looks like it's scanning journald for kernel faults and things like that, and then those get raised to the control plane to respond to. So things like, not necessarily decommissioning a node, but kind of jailing the node so that pods don't get scheduled onto it, and maybe evicting pods.
K
That
may
be
there
if
there's
there's
hardware
failures
and
so
on,
so
that
that
that
part
is
actually
called
a
remedy
system.
But
this
proposal
is
about
just
surfacing
those
signals
to
coopmaster,
so
that
kubernetes
can
basically
say.
Oh
this
problem,
this
node
has
problems
in
it,
so
maybe
not
schedule
stuff
there.
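To make "surfacing those signals" concrete: node problem detector reports permanent problems as NodeConditions on the Node object, which the control plane can then react to. Below is a minimal sketch of that reporting pattern with client-go, not NPD's actual code; the node name and the PendingReboot condition type are hypothetical:

```go
// Surface a node problem by merging a custom NodeCondition into the
// node's status, the pattern node problem detector uses for permanent
// problems. Node name and condition type here are hypothetical.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	now := metav1.NewTime(time.Now())
	patch, _ := json.Marshal(map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": []corev1.NodeCondition{{
				Type:               "PendingReboot", // hypothetical problem type
				Status:             corev1.ConditionTrue,
				Reason:             "WindowsUpdateRebootRequired",
				Message:            "Windows Update has a reboot pending",
				LastHeartbeatTime:  now,
				LastTransitionTime: now,
			}},
		},
	})

	// Strategic merge patch: node conditions merge by Type, so this adds
	// or updates only the PendingReboot condition.
	if _, err := client.CoreV1().Nodes().PatchStatus(context.TODO(), "win-node-1", patch); err != nil {
		panic(err)
	}
	fmt.Println("condition reported")
}
```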
K
So
there's
there's
a
summary,
but
the
the
meat
part
of
it
is
actually
the
proposed
signals
to
raise.
K
There's
some
of
them
that
are
probably
pretty
questionable
there,
like
I
just
kind
of
throw
out
like
everything
I
can
think
of,
but
you
know
the
comments
are
welcome
on
that
part,
especially
for
for
things
like
hey,
like
some
of
the
things
that
are
like
proposing,
there
is
like
hey
what,
if
the
note
doesn't
have
windows
activated,
what
if
windows
update,
is
running
on
it
and
there's
an
eminent
eminent
reboot,
because
of
that?
K
What
if
there
are
windows
defender
finds
stuff
in
it
on
the
node,
like
you
know,
is
the
node
compromised
in
some
way?
Is
that
appropriate
to
you
know
evict
pods,
because
the
node
may
be
unsafe
to
run
new
workloads
or
the
existing
workloads
on
it?
You
know
things
like
that.
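As an example of what one of those Windows-specific checks might look like, here is a sketch that detects a pending Windows Update reboot by probing the well-known RebootRequired registry key; this is just one common indicator, and a real detector would combine several:

```go
//go:build windows

// Report whether Windows Update has a reboot pending by checking for
// the RebootRequired registry key. One indicator among several that a
// real detector would combine.
package main

import (
	"fmt"

	"golang.org/x/sys/windows/registry"
)

func rebootPending() bool {
	key, err := registry.OpenKey(
		registry.LOCAL_MACHINE,
		`SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired`,
		registry.QUERY_VALUE,
	)
	if err != nil {
		return false // key absent: no Windows Update reboot pending
	}
	key.Close()
	return true
}

func main() {
	fmt.Println("reboot pending:", rebootPending())
}
```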
K
So
it
goes
into
details
on
that.
In
fact,
there
there
is
a
clarification
somewhere
in
this.
Basically
node
problem
detector
runs
in
two
modes.
It
runs
either
at
least
on
linux,
as
a
regular,
I
believe,
looks
like
systemd
service
or
some
some
demon
on
on
the
when
on
the
linux
node
or
it
could
run
as
a
daemon
set.
There
is
some
clarification
here,
I'm
like
oh
right
now.
K
So I'm mostly doing investigation right now myself, but hopefully we can get somebody on our side to actually work on this.
K
That's aligned with things we want to do in the future.
K
Yeah, and mostly just comment on the things that should actually be in it. That's probably the biggest area that's not really well understood: the problems to actually find.
A
Yeah, I guess a good point here: there's not much on the networking side. So maybe, David, if you can take a look, maybe there are some gotchas on the networking side that someone might want to look at, starting from something as basic as DNS to other things like making sure that the routes are clear.
A
Next Tuesday is the deadline. So remember: we have containerd possibly going to GA, we have CSI also potentially going to GA, we have privileged containers going to alpha, and we have Cluster API going to alpha. Anything else major? I think those are the four or five major ones, but if you have anything that you want us to review, let's get it out there quickly. Cool, all right. Everybody, see you all next week.