From YouTube: Kubernetes SIG Windows 20220920
A: Hello everybody, and welcome to the September 20th, 2022 iteration of the Kubernetes SIG Windows community meeting. As always, these meetings are recorded and uploaded to YouTube, so be sure to adhere to the CNCF Code of Conduct. Let's jump right into announcements. The first announcement is still the KubeCon North America Contributor Summit. If anybody hasn't signed up and is planning on attending, please sign up, and I think the call for proposals for talks for that is open until the end of this month.
A: I'm not sure of the exact date, but I think there's another week and a half, so if anybody's interested in speaking there, please go ahead and submit a proposal.
A: The next announcement is that we've released another version of the Windows debug image, which I think James has put together, wired up with the krew plugin, and has demoed here before too. This new image includes some more tools for doing diagnostics and debugging for network-related things, and it also includes the WPRP file that lists all of the HCS events to collect, if people want to try to debug any containerd or HCS issues. Does anybody else have any announcements or questions?
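For context on how a debug image like the one announced above can be run directly on a Windows node, here is a minimal, hypothetical HostProcess pod sketch; the image name is a placeholder, not the actual SIG Windows image, and the HostProcess fields shown require a cluster and runtime that support Windows HostProcess containers.

```yaml
# Sketch: running a Windows debug image as a HostProcess pod, which gets
# host-level access to the node. Image name is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: win-debug
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true                 # required for HostProcess pods
  securityContext:
    windowsOptions:
      hostProcess: true             # run directly on the host
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
  - name: debug
    image: <windows-debug-image>    # placeholder: substitute the debug image
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 86400"]
```

Once the pod is running, `kubectl exec` gives a shell on the node without needing RDP or SSH access.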
A: I'll skip that part and go right into the agenda. I added this real quick; this is more of an FYI for everybody. This PR here, let me open it, this PR that's being worked on is removing the user-space portions of kube-proxy for Windows.
C: Hello, I'm doing the support for the in-place vertical scaling, and I'm working on the e2e tests for it right now on the PR. In-place pod vertical scaling is just for Linux so far, and I've started working on the Windows support for it.
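As background for the discussion, a pod that opts into in-place resize looks roughly like the following sketch, based on the in-place pod vertical scaling KEP; this requires the `InPlacePodVerticalScaling` feature gate, and field names may change while the feature is in alpha.

```yaml
# Sketch of a pod using the alpha in-place resize fields from the
# in-place pod vertical scaling KEP; field names may change while
# the feature is in alpha.
apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:                   # per-resource restart behavior
    - resourceName: cpu
      restartPolicy: NotRequired    # resize CPU without restarting
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      requests:
        cpu: "500m"
        memory: 128Mi
      limits:
        cpu: "1"
        memory: 256Mi
```

With the feature enabled, updating the `resources` values on the running pod triggers a resize that the runtime applies according to each container's `resizePolicy`.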
D: There are a couple of things to note which might be interesting: CPU scaling doesn't work for 2019, only for 2022. Or, I think it was for 2004 onward, which we do not support anymore, so 2022; but the other one should still work for 2019.
D: That's a good question; it's been a while since I looked into that. I think it was erroring out, but I'm not sure. I think we do have integration tests for resizing the containers in containerd, but it's been a while since I saw those, so I can take a look for next time if needed.
A: Yeah, so SIG Node has been trying to get this merged for a while, I think since maybe 1.22. The latest is at the end of 1.25.
A: They merged all of the CRI changes. I think they were having a lot of trouble trying to juggle landing the kubelet changes, the CRI changes, and then the container runtime changes, and especially trying to get any sort of test coverage on all of that.
A: I think they were pretty close at the end of 1.25 with CRI-O support, and then there were some private containerd packages that supported it for Linux. But in the end, everybody in SIG Node just decided to merge the CRI API changes, because they'd been reviewed and almost merged for a couple of releases. The hope is that it will go to alpha in 1.26, and then I have no idea if it's going to stay in alpha for a while.
D: You mean the integration tests that I mentioned? Those are in containerd. Basically, what we did in containerd was just the plumbing, which passes through to hcsshim all the right bits that were necessary for it; and what Fabian did was basically the plumbing from the Kubernetes API to containerd, again just passing things through.
D: So I think that holds pretty much regardless of where the KEP for in-place pod vertical scaling actually goes.
D: Sorry, can you repeat the question?
A: Yeah, so this big PR that hasn't merged yet is, I think, in charge of handling and deciding when and what resources to resize, making sure that the container hashes are either updated or not updated properly (I know that was a big issue), and also making sure the API objects are updated correctly. It seems like this PR just handles what comes after that's all been done.
A: Anything else about that, Claudio, Fabian?
A: Okay, sounds good; keep us updated. David, did you want to talk about this?
B: Sure, I have a short demo prepared.
A: Okay, let me stop my screen share. Are you able to share, or do you need somebody to give you permission?
B: Okay, yeah, I have a few background slides as well. Okay, thank you. So I wanted to show a project today that our summer intern, Nico Hobart, worked on to help improve our troubleshooting tooling. We wanted to share this today, since we thought it might help others in the community as well. So, just to give a little bit of background: to date we have a few scripts, mostly PowerShell, that we use to debug container networking.
B: All these PowerShell scripts require direct access to the nodes, like over SSH or RDP, and actually many of our users and customers struggle with obtaining this direct node access, at least for production clusters.
B: The scripts can also only be run on one node at a time; you have to SSH or open up some exec window into every single node and run them there. However, Kubernetes is a distributed system, so we want observability into multiple moving pieces. The other problem is that these scripts are meant to log kind of everything, so we don't miss anything, and there's obviously a lot of activity in Kubernetes, so there's no granular or selective observability.
B: And lastly, these scripts don't really have the context of Kubernetes. You have to work out what the IP address is for a given pod at a given point in time, etc. All of these things are moving pieces, so that makes things more complex. Just to demonstrate that further, let's take a look at two simple examples.
B: These are real-world examples as well. The first one: imagine you have some user with a service exposed through a load balancer, so very basic, but imagine the pods of the service are spread across 16 nodes. This user reported that when an external client tried to connect to the load balancer, five percent of the requests were timing out or having issues. So now the question is: which node has the problem, since there are 16 of them?
B
Obviously
the
partial
scripting
approach
doesn't
work
very
well
so
need
better
tooling.
At
the
cluster
level,
number
example,
number
two
would
be
you
know,
even
just
on
one
given
node
there's
a
lot
of
activity,
and
you
know
what
what
if
we
want
to
take
a
look
at
a
particular
pod.
Look
at
it
in
detail
so
running.
That
is
not
very
simple
or
it's
possible,
but
there's
no
good
users.
B: Another example: HNS and many other components run on each of these nodes independently, but any one of them could crash or misbehave on a particular node and impact the overall cluster health. So we need tooling to check the consistency of the state across the nodes as well.
B: So the tool that we came up with is called wcnspect. It's a first attempt at improving this experience. It allows you to gather the network state from different nodes, it allows you to do a packet capture at a node or a pod level, and it also allows you to retrieve packet counters, either from actively running packet captures or from the VFP port counters of pods.
B: So how does it work, very quickly? We don't have much time left, but it's basically deployed as a DaemonSet: a daemon pod gets created on each of the Windows nodes. It's implemented as a gRPC service, to allow for more efficient communication, and it exposes server-streaming RPC methods which, when called, will collect any requested information from the nodes and stream it back to the client.
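The DaemonSet deployment described above can be sketched roughly as follows; every name, the image, and the port here are hypothetical placeholders, not the tool's actual manifest.

```yaml
# Minimal sketch of deploying a per-node inspection agent on Windows
# nodes: one privileged HostProcess pod per node, serving gRPC.
# All names, the image, and the port are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: wcnspect-server
spec:
  selector:
    matchLabels:
      app: wcnspect
  template:
    metadata:
      labels:
        app: wcnspect
    spec:
      nodeSelector:
        kubernetes.io/os: windows    # run only on Windows nodes
      hostNetwork: true
      securityContext:
        windowsOptions:
          hostProcess: true          # host access for HNS/VFP state
          runAsUserName: "NT AUTHORITY\\SYSTEM"
      containers:
      - name: server
        image: <wcnspect-server-image>   # placeholder
        ports:
        - containerPort: 50051           # gRPC port (hypothetical)
```

A CLI client can then fan requests out to each daemon pod and aggregate the streamed results, which is what gives the tool its cluster-level view.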
B: So we have a CLI as the client, which processes the output from all of these nodes and displays it to the user. So, without further delay, let's jump into the demo. I have a demo environment, basically just a basic AKS cluster, where I already applied the wcnspect DaemonSet. I'm going to jump into my RDP session here.
B
This
is
a
jump
box
and
the
same
network
as
the
sdiks
cluster.
B: Can you see my RDP window? Yeah, okay, good. So here on the right-hand side I have the wcnspect CLI, and here on the left-hand side you can see some of the resources. I have a sample-19 service with a sample-19 pod, and I have another win-webserver pod.
B: Then we have capture functionality; that's one of the more important features. So let me, for example, run a curl against this external IP. I'll set that up: I have a load generator here, so it'll run a curl once each second, and up here I will define a capture. Let me show that we can define filters, either on IP addresses or on the protocol type, so I'll be using those filters.
B: You can see the five-tuple being received here, so you can see the IP address from the client, I don't know if you can see that, and then this was the external IP, so the load balancer.
B: Another example: I could show you the east-west traffic as well. The capture command can also run on pods, so let me open up, I'll exec into the win-webserver pod.
B: So we can see here that we get only the packets coming from the pod that match the filter. This IP address here is the response, for example, of the sample-19 service right here.
B: Lastly, the final thing that I want to show you: we also have port counters. Let me stop this. You can see the VFP counters of the pod directly as well. There are a lot of network-level stats that belong to the pod and live on the VFP port.
B: It can also indicate network health to you. For example, I can see some TCP stats about the pod as well, in both the in and the out direction. So yeah, that's all I had to show today. Any questions?
A: No questions, but that's very cool. That's a lot to go over. How's the README on this repository? It seems like there's a lot of functionality in this tool.
B: Yeah, we tried to give some basic examples of all of these commands, so we have some docs, as well as how to build it. It's not very detailed, but the CLI has a lot of help commands on each of the settings and such, so reach out to me if you have any questions about this. I mean, this is just a pre-release tool, and I should also say, this is important to note:
B: This is not a silver bullet to troubleshoot all networking issues; it's more of a tool that's meant to enable you to identify where possible problems could be, for example which node or which pod is misbehaving, at which point you can use deeper-diving tooling, like the debug logger that James has worked on, or Periscope, or other approaches, to inspect things in more depth. So this is more at a cluster level, to help identify problem areas.
E: I think this is really cool, especially because we've been having trouble running packet captures with Windows nodes. I think the biggest use I see for this tool is the fact that it takes away the complexity of needing to know Windows; you don't need to get on the node to do anything.
E: Everything is either Kubernetes-specific, and then you have the CLI that you can run from a Linux instance as well; as far as your README goes, it says the client can run on Linux. So yeah, this is really cool. I think this will really help with network debugging.
A: I was just going to ask if you'd be willing to post in sig-windows announcing this, but maybe we can wait until the release artifacts are there. I think we should definitely announce this in sig-windows once it's ready; I think a lot of people will be very interested.
E: We can link to this video if you don't want to record another one for the demo. And the reason I wanted to bring this up is that we've been working on this one for a while; we've been back and forth on the approach that we're going to take, but I think we finally landed on it. I've updated the KEP. I just wanted to have SIG Windows, or anybody here, take a look at it and make sure everyone's in agreement; in particular, David.
E: Actually, you might be interested: we're going to add Windows-specific fields to CRI, in particular the network interface usage. There are some fields we weren't filling out that I've added in, so maybe take a quick look and make sure those make sense. This allows us to eventually add Windows-specific stats that we may be able to use to make various eviction decisions and things like that. But right now we're just trying to add support for the CRI-only KEP, which is in alpha.
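For context, the Windows-specific network usage fields being discussed look roughly like the following protobuf sketch. This is an illustration modeled on the CRI stats work, not the authoritative definition; field names are approximate, so consult the kubernetes/cri-api repository for the real ones.

```protobuf
// Illustrative sketch of Windows-specific network usage in the CRI API.
// Windows exposes dropped-packet counts rather than the rx/tx error
// counters used in the Linux stats messages.
message WindowsNetworkUsage {
    // Timestamp of when the stats were collected, in nanoseconds.
    int64 timestamp = 1;
    // Stats for the default network interface.
    WindowsNetworkInterfaceUsage default_interface = 2;
    // Stats for all network interfaces.
    repeated WindowsNetworkInterfaceUsage interfaces = 3;
}

message WindowsNetworkInterfaceUsage {
    // The name of the network interface.
    string name = 1;
    // Cumulative bytes received.
    UInt64Value rx_bytes = 2;
    // Cumulative inbound packets dropped.
    UInt64Value rx_packets_dropped = 3;
    // Cumulative bytes transmitted.
    UInt64Value tx_bytes = 4;
    // Cumulative outbound packets dropped.
    UInt64Value tx_packets_dropped = 5;
}
```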
B: Yeah, this looks good. I mean, this seems to map pretty nicely to the endpoint stats that we have in the HNS API, at a cursory glance.
A: Yeah, I'll also just comment; I think we've commented on this before too, but the fields that are in the CRI API are really designed for, and I think we want to keep them scoped to, things that the kubelet would actually be able to act on for eviction or just resource management.
A: I know that there are potentially a lot of other types of stats that we would want to collect, and at least for now SIG Node has decided those are mostly out of scope for what we put into these resource fields in the CRI API. So just keep that in mind if anybody's commenting on this. And yeah, thank you, James, for bringing this up.