From YouTube: Kubernetes SIG-Windows 20210921
A: Let's get into it then. All right, I don't really have any announcements today. If anybody has any announcements, feel free to raise your hand, add it to the chat, or just speak up in the meeting. We are in the middle of the coding milestone, so I think everybody's kind of busy with that, and most of the planning has already been done.
A: I have one agenda item today. I believe Ibrahim wanted to discuss this. Let me open it up. Are you on the call today? I thought I saw you.
B: Yeah, I'm here.
A: Okay, let's see. All right, do you want to give a brief overview of the issue you're seeing, and then we can discuss?
B: Yeah, we saw this earlier with Docker. If you try to create multiple pods on the same node at a high rate, like in this example where it creates one pod per second, then after creating more pods you start seeing that it's taking longer, and then the kubelet itself will start timing out. It will say that the PLEG is unhealthy and that it's unable to check on the status of the created sandboxes, and sometimes...
B: That was the original case in this issue, and we did some testing recently with containerd as well, where I saw similar behavior. The node doesn't go completely into an unhealthy state, but I still see the same problem when I create pods at a high rate, even at one pod per five seconds or so. When it reaches 20 pods, it will start throwing errors; what was it, the context timeout or something like that?
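For context, the reproduction being described is essentially a loop that creates pods on a Windows node at a fixed interval and then watches the kubelet for PLEG and sandbox-status errors. A minimal sketch using the Kubernetes Python client is below; the image, node selector, pod command, and pacing are assumptions for illustration, not the exact script that was used.

```python
from time import sleep
from kubernetes import client, config

def create_pods(count: int, interval_seconds: float, namespace: str = "default") -> None:
    """Create `count` pods on a Windows node, one every `interval_seconds`."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for i in range(count):
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name=f"scale-test-{i}"),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/os": "windows"},
                containers=[
                    client.V1Container(
                        name="app",
                        # Assumed image; pre-pulling it on the node keeps image
                        # pull time out of the measurement.
                        image="mcr.microsoft.com/windows/servercore:ltsc2019",
                        command=["cmd", "/c", "ping", "-t", "localhost"],
                    )
                ],
            ),
        )
        v1.create_namespaced_pod(namespace=namespace, body=pod)
        sleep(interval_seconds)

if __name__ == "__main__":
    create_pods(count=40, interval_seconds=1)  # one pod per second, as reported
```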
B: It was eight cores. I don't recall the memory, but I can check it. Looking at the monitoring dashboard, the CPU was under 50%, so I guess it's more about...
A: If you're looking at the monitoring dashboard, are you just using the stats endpoint?
A: Something that could be happening, which we've seen a lot, is that when the CPU load gets too high, the stats endpoints start timing out, so anything you see in kubectl top nodes or in the monitoring is inaccurate.
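One way to check whether the stats endpoint itself is what's timing out is to hit the kubelet summary endpoint through the API server proxy and time the call. A rough sketch, assuming kubectl access; the node name is an assumption:

```python
import subprocess
import time

# Time the kubelet summary stats endpoint via the API server proxy to see
# whether it is slow or timing out under load.
NODE = "windows-node-1"

start = time.monotonic()
try:
    result = subprocess.run(
        ["kubectl", "get", "--raw", f"/api/v1/nodes/{NODE}/proxy/stats/summary"],
        capture_output=True, text=True, timeout=60, check=True,
    )
    print(f"stats/summary: {len(result.stdout)} bytes in {time.monotonic() - start:.1f}s")
except subprocess.TimeoutExpired:
    print("stats/summary did not respond within 60s")
```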
C: To be honest, this is consistent with what Ryan has seen with containerd. I mean, I'm testing just containerd under load and it's kind of consistent. But I believe Mark asked about the size of the VM; I didn't catch if...
A: We added a flag to the kubelet to run at a higher thread priority, and I'm looking for that. I also opened a PR, which is still open, to run the kubelet in a job object on Windows, and that may help. On Windows there are different thread priorities, but even if you're running at normal or above-normal thread priority, when you spawn another thread, your threads will stay at normal thread priority.
A: What happens is... so I think we want to make these changes in both the kubelet and in containerd, because containerd spawns an hcsshim process and we want to make sure that has a higher thread priority than a lot of the other processes running on the Windows system. It may be worth trying that. I'll try to link some of the PRs for all of those; those are still a work in progress.
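For anyone unfamiliar with the Windows priority classes being referenced, the sketch below shows roughly what bumping a process's priority class looks like, using psutil purely to illustrate the mechanism. It is not the kubelet or containerd change from the PRs, and the target process name is an assumption.

```python
import psutil

# Find a process by name and raise it to the ABOVE_NORMAL priority class.
# ABOVE_NORMAL_PRIORITY_CLASS is a Windows-only psutil constant, and changing
# another process's priority usually requires an elevated shell.
TARGET = "containerd.exe"  # assumed target, for illustration only

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        proc.nice(psutil.ABOVE_NORMAL_PRIORITY_CLASS)
        print(f"pid {proc.pid}: priority class set to ABOVE_NORMAL")
```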
A: James has been doing a lot of research and investigation around the timeouts in the stats endpoint. Have you seen anything like this, James, where the node goes unhealthy, especially with PID pressure and insufficient memory while the kubelet is unresponsive?
A: I haven't...
F: The stuff inside the metrics endpoint was specifically around CPU load and stress.
A: So here's the initial PR. I don't know that these are going to help, but it might be worth setting this flag.
B: One more open question, just in general, I guess, about how we support these cases. We even see the problems when we create one pod every 10 seconds or five seconds. So I believe if we go lower and lower... maybe on the Linux side you are trying to create one pod per second, or 10 pods per second.
A: I also think there's a lot of research we could do on Windows to better define these. Let me... I know that SIG Scalability, I'm sharing this now, is working on a couple of different SLOs, SLIs, and these scalability thresholds.
A: With the SLOs, I don't know if there's an SLO specific to the number of pods per second going onto a node (I'll add these to the notes in a minute), but that seems similar to all of these types of questions. They're tracking things like latency for API calls, which over the past five minutes must stay above a certain pass percentage for mutating calls and so on. There were some things specific to pods coming in, or at least to pod state changes.
A: Okay, this is defining kind of overall maximums or minimums.
A: I unfortunately don't think we have a great answer for this, except, I guess: is anybody else trying to do very rapid, like one pod per second, testing on Windows?
B: So it is excluded here. This test, the one on containerd at the bottom, uses a GCE provider startup script, and we do disable Defender for containerd in that case.
F: Yeah, on Defender: about a month ago I did some research into some of the Defender performance issues, and we actually resolved quite a few of them; it was scanning various things inside the container, so a bunch of those should be resolved.
F: Now, you are still going to see Defender spikes and things. And at least as far as the SLOs go for Linux versus Windows, you have to keep in mind they're going to be different, because a Windows container has 8 to 10 processes that run inside the container and need to spin up, unlike on Linux. So there is going to be a slower startup time and more pressure on the system to start these containers up, as well as the pause container.
F: It has 10 processes that run in it, and all it does is, you know, set up the network. So there will be a difference here, and we are the ones creating those processes, so I think there's definitely room to improve some of this stuff, but there will be a difference between Windows and Linux.
A: Also, I know you mentioned that you didn't see the CPU go above 50%. Do you happen to have any disk metrics? At least on Windows, starting containers is extremely disk intensive, and if you're starting containers that fast, I wonder if that could be a bottleneck. Do you know what kind of IOPS your disk was getting?
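A rough way to answer the IOPS question while the test runs is to sample the node's disk counters once a second; a sketch with psutil, where the one-minute window is an assumption:

```python
import time
import psutil

# Sample disk I/O once per second during the pod-creation test to estimate
# read/write IOPS; run this on the Windows node while pods are being created.
prev = psutil.disk_io_counters()
for _ in range(60):  # one minute of samples
    time.sleep(1)
    cur = psutil.disk_io_counters()
    reads = cur.read_count - prev.read_count
    writes = cur.write_count - prev.write_count
    print(f"read IOPS: {reads:5d}  write IOPS: {writes:5d}")
    prev = cur
```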
B: Not from that test right now, but yeah, I can easily replicate it again and check on the disk as well. As I said, I ran this one against the open source setup, so it's very easy to replicate: just creating the node pool, the Windows node pool, and then executing the script to create the pods. I can try it out and see if this is the case, but I remember that the machine was a very beefy machine.
B: I believe our recommendation, for example, is two cores, so this was a really advanced machine compared to what our customers usually use. As for how the problem started: it was one of the customers actually doing some testing and creating more than a hundred pods on a single node with just a one-second delay, so this is a...
B: No, it's only Server Core, but I actually have it pulled first on the node, so the image exists. What I noticed as well is that if the image doesn't exist and I start the creation of the 40 or 50 pods or so, it will be even worse, because many of them will start trying to pull the image at the same time. But in this case the image already existed on the node.
F: I don't know the exact number, but it's pretty similar across both containers, because there's just a bunch of system processes that need to run for everything to start up correctly.
E: You could still go with Jamie's suggestion. Can you try running the same test with the nanoserver images instead? I have seen some differences between the servercore image and the nanoserver one in the past, and I wonder if that's the case here as well, so that might be interesting to observe. And secondly, can you also provide kubelet logs, so we can take a look at what's taking a lot of time in terms of the operations the kubelet is doing?
K: I might also be curious to see, from the Linux side, what the load is, because this amount of pod churn, regardless of whether it's Windows or Linux, can topple over a smaller kube-apiserver and kube-scheduler, and you just get resource contention. The kube-apiserver basically spikes up, and then all of a sudden everything else running on there just starts to slow down, if not stop responding.
K: Absolutely, yeah. If there's not enough IOPS for etcd and you have a crazy amount of pod churn between the kube-apiserver and etcd, and it doesn't have the resources in terms of I/O, CPU, and memory, then yes, you'll absolutely see that slowdown, and then... sorry, sorry, go ahead.
C: I want to say that it would be good to have logs from containerd as well. I'm working on scale testing for containerd right now; granted, I'm not using the servercore image, so I should. And again, with high load on my nodes I see the exact same performance degradation: from two seconds to start a pod's container, I get to 10 or 15 seconds, or it's non-responsive. So it would be interesting to see the containerd logs as well.
B: Oh sorry, no, I just wanted to clarify that the node going unhealthy didn't happen with containerd; that was what happened with Docker earlier. With containerd it keeps failing for a long time. In one case it kept going in a loop of container run errors or create errors for up to 60 minutes, but everything came up at the end, so it stays.
A: Instead of running it as a service, run containerd manually and then it will log to standard out and standard error, and then you can change the verbosity with, I believe, -v, or there should be arguments for setting where to log to, I think. But if you just follow the setup like containerd --register-service and sc.exe start containerd, you won't get logs.
E: Yeah, if you use NSSM, you can specify where to output the standard error and standard out of containerd.
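Roughly what running containerd in the foreground looks like, so its output is kept instead of being lost by the registered-service setup; a sketch that assumes containerd.exe is on PATH, with the debug log level and output path as assumptions:

```python
import subprocess

# Run containerd in the foreground instead of as a registered Windows service,
# and tee its stdout/stderr into a file for later inspection.
with open("containerd-debug.log", "wb") as log:
    subprocess.run(
        ["containerd.exe", "--log-level", "debug"],
        stdout=log,
        stderr=subprocess.STDOUT,
    )
```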
B: If Mark can scroll back to... I think I pasted the part in one of those sections. The one below; there's one there where I pasted the error from the kubelet.
A: Yeah, so I think there are a couple of things we need to decide on. One is: is this something that we think we can actually fix, or are we going to say that one pod per second is too quick for Windows nodes? And if that's the case, how can we mitigate or slow that down? And two...
L: Brian, one thing I would suggest, and we have suggested this in the past to our customers as well: have you tried the staggered approach, putting in some delay? You still scale up on the same nodes, but instead of one pod per second, as Mark was saying, go to one pod per two seconds or three seconds and see at what point it stops happening.
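That staggered approach can be scripted as a sweep over the creation delay, reusing the hypothetical create_pods helper from the earlier sketch and checking the node after each round; the delay values, node name, and the simple Ready check below are all assumptions:

```python
import subprocess
from time import sleep

def node_is_ready(node: str) -> bool:
    """Return True if kubectl reports the node's STATUS column as Ready."""
    out = subprocess.run(
        ["kubectl", "get", "node", node, "--no-headers"],
        capture_output=True, text=True,
    )
    columns = out.stdout.split()
    return len(columns) > 1 and columns[1] == "Ready"

# create_pods() is the assumed helper from the earlier reproduction sketch.
# Pods from the previous round should be deleted between iterations (omitted).
for delay in (1, 2, 3, 5, 10, 15):
    create_pods(count=40, interval_seconds=delay)
    sleep(60)  # let the kubelet settle before judging the node
    healthy = node_is_ready("windows-node-1")
    print(f"{delay}s between pods -> node Ready: {healthy}")
    if healthy:
        break
```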
B: Yeah, so I did that already. In this test, when I create 40 pods on an eight-core machine, starting from one pod every 10 or 15 seconds, I don't see the problem anymore. I think I saw that here. The minimum I was able to get was 10 seconds one time, but then another time that didn't work. At 15 seconds it's always fine and I'm able to get the 40 pods started on the first try.
B: So it will definitely go well if you increase the number, but the problem, I guess, as Mark mentioned as well, is that it's not easy to come up with one number that works on all VMs, everywhere, and for all images. So it's kind of tricky: what exactly should we tell or communicate to customers?
A: So yeah, one of these documents from SIG Scalability talked about that a little bit. This wasn't the exact document, but they do recommend that if you are publishing scalability limits or anything like that, you do the math based on whatever the limiting factor is, whether that's CPU, memory, or disk.
B: One thing we had earlier, and I believe if Peter is on the call he can correct me, was roughly one pod every 30 seconds, or 20 seconds, as a general suggestion.
B: We could play it safe and come up with a big number that's good for customers, but on the other hand, it's not easy for customers to enforce this, because today there's no way for it to be handled automatically. We don't have something hooked into the scheduler that will prevent this from happening, so customers need to make this adjustment manually, which is probably not the best experience, in my opinion.
F: So yeah, you can't rely on...
F: But the underlying implementation does a lot of stuff with cgroups, and so it would need to be refactored to work for Windows.
L: I do feel, Mark, that as Ibrahim is saying, this is becoming important for customers. When customers are trying to run the next level of workloads on Windows, thousands of pods and such, this is going to become increasingly important. So I don't know what the next steps are, but should SIG Windows spend some time here trying to create some baseline?
A: I like Jay's suggestion of maybe starting with some e2e tests, and seeing, one, whether we see similar numbers across different environments, across GCE, AKS, GKE, or TKE, all of those; how consistent it is, or whether it varies based on node configuration; and then also at least starting to tune in what that recommended value is, and then trying to fix it. But yeah, I agree.
J: What I'd want to know is which base images produce different results and why, and just narrow it down. The general recommendation right now is to use certain images; if you're a .NET Core developer, they tell you to use nanoserver. But what if we find that a different image actually produces a little bit better results for a particular type of application? Then the recommendation wouldn't be a blanket "run your .NET Core on nanoserver", right?
J
It
would
be
if
you
got
this
workload
use
this,
and
I
think
that
would
be
cool
to
give
some
guidance,
because
we
we
just
did
a
master
class
and
that
came
up
a
lot
was
what
types
of
apps.
What
do
you
recommend?
What
are
best
practices?
And
you
know
it-
there's
there's
nowhere
to
tell
people
where
to
go
to
a
point.
It's
a
lot
of.
J: A lot of it is just based on what you need, API exposure or app-wise, not just performance, because the IIS connections limit, for instance, is based on the image SKU, air quotes.
J: So on the website, when they describe the different base images, they say Windows Server has no limit on IIS connections, but they don't tell you what the limit is on nano, on server core, or on windows, which is 10.
G: I just think that's a good place for it. We kind of don't want to certify that it definitely works, but we want to have some kind of operational conformance, something we know is what we expect our baselines to be as a community, and that allows us to point people in that direction.
A: Yeah, kind of going back, I think having an e2e test or something else that different people can plug different values into and just try running locally would be a big help. Is that something you could work on, Ibrahim, publishing what you have as an e2e test or something else?
B: Yeah, I can pick this up and try to get it in as an e2e test, definitely.
B: Yeah, I'd love that for sure, and thanks for all the pointers. Meanwhile, in parallel, while I'm looking into the end-to-end test, I'll also try looking into different images to see if that helps. My guess is it won't be different; I believe with the full images it may even be worse, since full images are bigger. But I will test one run with full and one with nano to see if this would help.
D: Yeah. As far as how you're producing these, I see you're using ClusterLoader2. Does anybody know how much... I'm looking through the code in ClusterLoader and I see some Windows stuff, but it seems kind of scattered around; I can't tell if it's GCE-specific or what.
C: It's not GCE-specific; the code there is very confusing, but we used it some time ago, maybe a year or more, probably more, to run tests against Windows, and it's usable with some caveats.
B: I guess this was used for the Docker one, the initial issue. The containerd testing, which I did, I just did manually with a batch script that just...
H: Yeah, this was a couple of years ago. We just wanted to find out: can we run 100 pods on a Windows node? And there was already this framework that we knew some other teams were running, sort of a density test for Linux, and so we were able to use it for Windows also.
D: So I remember ClusterLoader; it was a long time ago. Tim St. Clair worked on this over at Red Hat like five years ago, and then I think Jeremy Eder's team took it over. I kind of worked on it a little bit with them too in the early days, and then I didn't know people still used it, and I definitely didn't know it supported Windows.
D: So I'm just wondering: are we contributing to this as a SIG, or are we looking at it, or is it just something that's sort of maintained outside of us?
A: There are some Windows-specific tests that are tagged with SIG Windows, and then there are plenty of other tests that were originally authored for Linux that just work with Linux or with Windows.
K: I mean, I guess my main point would be that if we're going to, as a SIG or as a community, make a choice to use a performance testing tool or anything like that, it would be wise to actually have that discussion before someone just chooses it. We might pick a random tool no one has experience with, or we might find a tool that a bunch of people say, "Oh, I've worked with that." It should also be easily accessible for the general public.
K: It shouldn't be ridiculously complicated, because as a customer, if I want to do load testing, I would say, oh, I'm going to go upstream to Kubernetes. So just consider that. Agreed on all of those points.
F: I'd just stress that we need somebody to lead this effort. So anybody here, or anybody else you know within your companies, who wants to take this on and drive it, please step up, take it by the horns, and go, because this has come up several times and it's just a matter of getting some bandwidth to complete it. Obviously it would have a huge impact.
D: Yeah, maybe James. Or maybe Ross, maybe this is it, Ross; you've found your calling.
A: It's just, if you have availability now, putting together some plans or recommendations for how to move forward that other people can follow; maybe even people with less Windows experience would be able to help pick that up too if there were a kind of road map to follow. Okay, yeah, absolutely.
F: We could maybe create a subgroup; we've discussed various subgroups that we could create. So maybe we could create one of those, start to formulate some plans, and even have spin-off meetings for that topic specifically, like we've done in the past for GMSA and the CSI plug-in.
D: The tricky thing here is that I just don't know where to start. There are tools, there's stuff that's already there. I think the most concrete place to start is to see what this thing does, because clearly there's code, and clearly people are using it to performance test.
D: Maybe the easiest bounded thing, especially for folks who are time-limited, is to figure out what this thing does and whether it's useful. If we could just start with knowing that, it might give us some intuition about what road to go down; you know, do we need to build stuff or not?
A: We could go to the SIG Scalability community meeting one week and say: hey, we have a group of people who are interested in doing all of this benchmarking, what are your thoughts? We can help, but we're looking for guidance. Do you know if there are any issues with these tools running on Windows?
C: ClusterLoader, for example, is actually quite nice. It's different, not difficult; it's complicated for Windows because it was never designed with that in mind in the first place, but it's actually quite nice. You run the command and pass it a config file where you design your tests: I want to create X amount of pods in Y amount of batches and stuff like that, and get some metrics. It's quite nice.
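For reference, invoking ClusterLoader2 (from the kubernetes/perf-tests repo) against an existing cluster looks roughly like the sketch below. The flag names are from memory of that repo's README, and the config path, provider, and kubeconfig location are assumptions, so verify them upstream before relying on this:

```python
import subprocess

# Rough sketch of driving ClusterLoader2 against an existing cluster; the test
# config file is where pod counts, batches, and tuning sets are defined.
subprocess.run(
    [
        "./clusterloader",
        "--testconfig=testing/density/config.yaml",  # assumed path to a test definition
        "--provider=gce",
        "--kubeconfig=/home/user/.kube/config",
    ],
    check=True,
)
```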
D: The reason we did the ClusterLoader thing originally was to break etcd, and that's why I'm dubious, because I know what it was built for originally. So I'm wondering, Adelina, when you use it, what is the thing it does for you that's super useful? Is it the metrics, or what is the Windows-specific thing that you get out of it?
C: I mean, it's not a Windows-specific thing, but I needed a tool to just hammer a node to see when it breaks and how it breaks. So for me back then, when we used it, I just wanted something that's already made, so I didn't have to write scripts.
C: Create a hundred... 100 pods. So again, as a tool for this, for seeing where the outer limits of the node are, it can work.
F: Cool, yeah. About six months ago I played with it as well and got it working for Windows, and it gave you pod startup times kind of out of the box. Then I tweaked it to grab metrics of the top system processes, so I was able to get HNS, HCS, and the kubelet, and the CPU loads of those, and display them in a graph over time. It gave you the 90th and 95th percentiles across those things, and for what I was trying to do, that was fairly helpful.
F: So I think there's some room to tweak that. I had to change the code a little bit to do it, but it wasn't significant; it was pretty minor.
D: All right, so here's what I'll do: I'm going to look into this ClusterLoader thing, and if other folks want to go find all the hundreds of other performance tools for Windows that exist and compare and contrast them, I'm all for it. It just looks like we've got something that works, so I'm going to see if it works.
F: I dropped a link in here. When I was looking into some of this, I did an analysis on various tools as well and put it into a HackMD, so it could be a starting point. I looked at various ones... oh, maybe I put the wrong link, but I'll make sure I drop it in Slack here. But the one I...
F: ...the difference between those two. But I have another link, and the one I ended up choosing was ClusterLoader, because it was being contributed to and it had support, so it had all the different components. Anyway, I'll drop the link in Slack and link it to...
K: And we can obviously add things as we go. I mean, we can add a module for HNS, we can add modules specifically for Windows. There are a lot of options for expanding it as we need to.
D: Yeah, it seems like what we need to do is just build a community around this, document it, and make it an obvious thing for people to use, if it works. But I don't know for myself that it works; it seems like everyone else does. And then, yeah, does anybody want to follow up with SIG Scalability?
D: Add me as well, okay.
I: Are you "abraham hamid", right? No, so it's "ib abou"...
A
Was
craig
vinicius?
Is
there
anybody
from
the
microsoft
that
might
they
would
be
interested
in
this
too?
They
could
help.
D: I know some of the folks on the perf and scale team at Red Hat. I can ask them as well, because that was originally where all of this came out of.
D: Yeah, cool. So we're going to sort of organically swarm around this, but anybody in the meeting here, or anybody you know who'd like to lead this initiative for SIG Windows, just get in there; we'd love it. I think organically we're all kind of going to move toward this goal anyway, but yeah, this would be a great working group or subproject or something.