From YouTube: Kubernetes WG Batch Weekly Meeting 20220804
A: Hello everyone, my name is Abdullah and I'll be moderating this meeting today. Please note that this meeting is recorded and will be uploaded to YouTube, and please adhere to the Kubernetes code of conduct for public meetings. So, today is August 4th, and we have a presentation about KubeFlux by Claudia. Just before giving the mic to Claudia, I would like to announce that we have a Batch Day at KubeCon North America in November, so I'm just going to post the call for papers in the chat. The deadline is August 8th.
A: Last time was the first time it was held in Europe, and I think it was a very successful one; we had a lot of good discussions. So I encourage everybody to submit a proposal. Hopefully this one will be even bigger, because typically the event in North America is bigger than the one in Europe.
B: I hope I'm showing the right screen. Yes? Is that good? Okay, thank you so much. Hello, hi everyone, I'm Claudia Misale, from IBM Research in Yorktown Heights, New York. I will be presenting today some work that we are doing around scheduling mainly, but in general around running HPC workloads on Kubernetes.
B: I was told the format of these meetings for the batch SIG, and so I tried to stick to that as much as I could. It's not a lot of slides, but hopefully we'll get to the point. The first slides are about motivation and the Kubernetes gaps that we are facing while we try to run those HPC workloads. This applies also to AI workloads, I would say, but for now we've been just running traditional HPC workloads.
B: These are the benchmarks that needed to be run for supercomputer acceptance, for the CORAL systems. We've been building those into containers and running them on both AWS EKS and OpenShift on IBM Cloud. They are MPI workloads; the ones we've used were a mix of network-intensive and compute-intensive workloads.
B: To enable the MPI workloads, we use the MPI Operator from Kubeflow, which we changed a little bit, but this is not the place to talk about that. Those workloads basically require an MPI-enabled Kubernetes cluster. As for the communication patterns, it's again all old-school MPI communicators, so we can have a mix of reduce, all-reduce, all-to-all communication, stencil computation, nearest-neighbor, gather, all-gather: any possible combination of communication that can happen in an MPI application, which is what you would usually get in traditional HPC workload requirements.
B: Another feature that is preferable is topology awareness when you place the workloads in the cluster. Of course, that also depends on the kind of application that you need to run; different applications have different requirements.
B: So if you have, for instance, an application that is more network-intensive, maybe you want to put the pods closer to each other, so that the network doesn't cause big slowdowns in performance; and knowing the topology of the cluster where those applications should run can help the scheduler.
B: You may also want a network interface to be used exclusively by a certain workload, so you want to be able to share or not share network interfaces. And there is another thing that, of course, you get in traditional HPC.
B: When you run traditional workloads on on-prem HPC clusters with some job scheduler (Slurm or LSF, to name just a couple of famous names), those give you consistency of placement: if you have a certain set of requirements then, unless the cluster is being allocated to other users and so on, you get repeatable and consistent results when you execute those workloads. That's a feature that would be preferable in Kubernetes clusters as well. That's the main topic!
B: So, for instance, if there is an application that is more compute-intensive, then maybe you would rather spread the pods across the nodes, so that they can make better use of the resources and do not interfere with each other. So having a way of saying "pack" or "spread", just to name something that is familiar to everyone, is something that is not immediately available in Kubernetes. But you know that there are ways to improve scheduling by adding more schedulers with the scheduler framework. Another thing that is in the requirements, and that is sort of a gap, though there are tools in Kubernetes that help you fill it, is topology awareness in scheduling.
B
There
are
set
of
labels
that
we
can
use
to
to
define
more
topology
where,
when
we
want
to
allocate
the
the
workloads
in
the
cluster,
so,
for
instance,
we
have
the
region
and
zone
labels
and
those
can
be
used
to
to
make
a
more
topology
aware.
Scheduler.
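For reference, these are the standard well-known topology labels; a minimal sketch of how they appear on a Node object (the values here are illustrative, and cloud providers typically populate them automatically):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-0
  labels:
    kubernetes.io/hostname: worker-0
    topology.kubernetes.io/region: us-east-1   # region label
    topology.kubernetes.io/zone: us-east-1a    # zone label
```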
B: Of course, the topology awareness can be more or less detailed. You can go from region, zone, and node, which is a three-level topology that you can define, but you can also go further into detail: you can have a socket-aware or NUMA-aware scheduler, like the NUMA-aware scheduling plugin. So there are different levels of topology awareness that you can specify; some are already there, and some still have to be done by whoever is implementing more features.
B
Who
is
implementing
some
more
features?
One
thing
that
we
have
that
we
have
phase
when
trying
to
run
those
hpc
workloads
was
to
so
we
with
a
little
good
contact.
So
we
tried
to
use
the
default
scheduler
as
as
best
as
possible
to
the
best
of
our
abilities
and
use
all
the
possible
features
that
are
available
in
kubernetes
so
like
we
did
really
extensive
use
of
affinity,
especially
at
the
zone
label
level,
and
but
this
works
up
to
a
certain
point.
So
if
we
want
to
get.
B
An
allocation
on
certain
nodes
that
have
that
belong
to
a
specific
zone.
If
you
have
multiple
a
cluster
that
spans
over
multiple
zones,
and
you
want
to
make
either
I
mean
I
wrote
their
exclusive
use,
but
can
be
also
shared
that
doesn't,
that
does
apply
to
both
scenarios.
B
B
B
Pack,
placement,
or
on
just
two
zones
out
of
three
in
that
case
with
the
with
kubernetes,
how
you
would
do
is
when
the
pod
or
deployment
yaml
file,
you
would
specify
that
the
those
parts
need
to
land
on
nodes
that
have
availabilities
on
one
and
two,
so
the
workload
will
land
on
those
these
two
set
of
of
nodes.
This
works,
of
course,
but
the
thing
that
doesn't
there
wasn't
enough
for
to
make
those
workloads
perform.
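A minimal sketch of the kind of YAML being described here, using required node affinity to restrict a pod to two zones (the zone values and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mpi-worker-0
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a   # "zone one"
            - us-east-1b   # "zone two"
  containers:
  - name: worker
    image: example/mpi-app:latest   # illustrative image
```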
B: For instance, say we want a pack placement of the workloads into those two zones. Once you select the two zones, the default scheduler will place pods on any node in either of those two zones randomly (if the nodes are empty, of course) until the resources are all full. The point of having a plugin, or a scheduler placement that does packing, is that you would fill one node in zone one first, then fill up the entire zone one, and only then go to zone two.
B
So
that's
a
gap
that
we,
where
there's
something
we
were
not
able
to
to
do
with
with
the
default
scheduler
and
the
in
the.
B
Affinity
labels
that
we
were
given
and
that,
of
course,
then
is
like
a
side
effect
that
grows
in
complexity
when
scaling
when
having
large
clusters,
because
if
you
can
rely
in
a
small
scale
with
affinity
or
tension
toleration,
but
you
have
to
basically
do
that
by
hand
in
the
yaml
files
at
the
larger
scale.
Did
this
becomes
more
and
more
of
a
burden
on
on
the
user?
B: So if the scheduler could do whatever it does and solve that for you, that would be easier from the user's perspective, and the application would perform well too.
B: Also, the last point, which is kind of mixed with the one I've just been talking about: having a predictable and consistent allocation of the pods when we have these certain types of workloads. This doesn't apply to all the workloads, of course, but in this specific HPC area, having the same consistent placement, especially if you're running workflows (not just one application but multiple HPC applications that build a workflow), also guarantees that the applications have consistent performance. And... oh sorry, there's a typo on the slide.
B: We also saw, when deploying the workflows, that given an empty cluster, and even when trying to take advantage of affinity, the kube-scheduler, when the resources all look the same to it, will randomly pick a node among the set of nodes, and this may also impact the startup time of the entire application and of application workflows.
B: And that, of course, then impacts the overall performance. So we've been experiencing these gaps and, since we are in the business of scheduling, we have implemented a scheduler plugin called KubeFlux, which is based on Flux, the cluster management and job scheduler system that is implemented and maintained by the Livermore lab. For any further questions, my colleague can definitely help out here, since he's the Flux expert, but I will try my best. Flux is a big project and has several components, since it doesn't only do scheduling but also does cluster management. For this particular work we are using Fluxion, which is the scheduler component in the Flux framework.
B
We
built
that
as
a
shared
library
that
we
use
from
the
scheduler
plug-in
through
a
golang
binding
that
we
implemented.
So
it's
quite
straightforward.
B
One
thing
that
it's
it's
very
useful
in
in
flux
is
that
the
the
representation
of
the
resources
of
the
cluster
is
graph
based.
B
So
the
the
the
good
aspect
of
this
is
that
the
any
placement
that
flux
does
with
any
of
the
placement,
algorithms
that
are
available
in
the
library
are
automatically
topology
aware
and
the
topology
of
the
cluster
can
be
more
or
less
detailed
accord,
depending
on
the
labels
that
are
available
in
the
node
objects.
For
instance.
If
if
the
cluster
is
like
a
cluster
running
on
a
cloud
deployment
and
like
on
vpc
that
spans
over
multiple
availability
zones,
that
label
is
present
is
in
the
node
object.
B
So
we
can
take
the
information
and
build
the
cluster.
That
also
has
a
the
level
of
topology
awareness.
At
its
own
level,
you
can
go
in
more
fine-grained
details
according
to
what
what
is
available
and
that
we
can
implement
so
that
we
are
kind
of
kubernetes
compliant
in
the
sense
that
we
are
not
trying
to
inject
artificial
or
our
own
labels
into
the
node
objects,
because
we
want
this
thing
at
least
to
work
on
any
kubernetes
cluster.
B
So
we
use
whatever
kubernetes
gives
us
and
we
can
go
even
more
details
using
the
no
node
feature,
discovery
operator
that
is
implemented
maintained
by
by
reddit,
and
that
also
gives
more
information
about
the
hardware
on
the
nodes
gpus.
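For illustration, Node Feature Discovery advertises hardware features as node labels along these lines (the exact keys depend on the NFD version, so treat these as representative rather than exact):

```yaml
# Labels of the kind NFD attaches to a Node (illustrative)
feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
feature.node.kubernetes.io/kernel-version.major: "5"
```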
B
So
we
also,
if
that's
deployed
in
the
cluster,
we
also
collect
the
labels
that
are
created
by
the
not
feature
discovery
operator
and
attached
to
the
node
objects,
and
we
build
the
cluster.
Did
the
graph
based
representation
of
the
cluster
also
using
that
information,
so
it
can
be
more
or
less
detailed,
depending
on
on
what
we
can
find
the
side
effect.
That
is
a
problem
that
we
are
living
unsolved
on
purpose.
B
That
means
that
the
coop
flux
scheduler
has
an
internal
state
that
is
maintained,
and
this
is
because
it
will
make
placement
decisions
based
on
what's
available
in
that
graph
of
the
resources.
So
it's
like
a
graph
matching
algorithm
that
is
executed
every
time
we
need
an
allocation
of
pods
on
the
cluster,
so
is
not
bidirectional
in
the
sense
that
burnett
is
always
knows
all
the
pods
that
are
created
and
are
executed
and
allocated
by
coop
flux,
but
is
not
true.
B
We
decided
not
to
go
through
the
let's
say:
pain
of
implementing
state
consistency,
so
the
because
it's
a
complex
problem
to
solve
so
the
to
have
the
best
use
of
of
the
cluster
along
with
coop
flux.
It's
needed
to
have
to
let
flux
manage
a
certain
set
of
nodes.
B: So, how is this implemented? We are using the scheduler framework, and we implement two of the extension points, PreFilter and Filter. PreFilter is the point where we issue the allocation request to the Fluxion component, which runs as a sidecar container in the KubeFlux pod.
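A sketch of how a plugin gets wired into those two extension points through a scheduler profile; the profile and plugin names below are assumptions for illustration, not taken from the KubeFlux repository:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: kube-flux        # assumed name; see the project's deployment YAML
  plugins:
    preFilter:
      enabled:
      - name: KubeFlux            # assumed plugin name
    filter:
      enabled:
      - name: KubeFlux
```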
B: To deploy the scheduler, we basically follow what's provided in the scheduler framework: whatever works for any other scheduler plugin applies to KubeFlux as well. The only difference is that we have the sidecar container in the deployment, but that doesn't compromise the deployment process.
B
So,
of
course
it
runs.
No
privileges
are
needed.
We
don't
really
need
to
install
anything
other
than
the
pod
group
custom
resource
definition
that
we
use
for
group
scheduling.
B
We
didn't
want
to
reinvent
the
wheel,
so
we
use
what
the
community
provides
us
and
that
that's
just
been
working
perfectly
great
for
us
and
so
to
enable
group
scheduling.
You
would
need
to
use
to
do
that
as
you
would
do
in
the
co-scheduling
plug-in
if
you're
familiar
with
that.
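As a sketch, the PodGroup CRD from the kubernetes-sigs scheduler-plugins project (the same mechanism the co-scheduling plugin uses) looks roughly like this; `minMember` is the number of pods that must be schedulable together:

```yaml
apiVersion: scheduling.x-k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: mpi-job
spec:
  minMember: 4   # gang size: schedule only when 4 pods can run together
```

Pods then opt into the group via a label defined by the plugin (in the co-scheduling plugin this is, if memory serves, a `pod-group.scheduling.sigs.k8s.io` label naming the PodGroup).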
B
So
yeah,
as
I
mentioned
in
the
previous
slides
we
have,
there
is
no
state
consistency
guarantee.
So
we
nodes
should
be
dedicated
to
cook
flats
as
any
other
scheduler
plugin.
B
You
would
just
name
the
scheduler
name
in
the
in
the
emails
file
of
the
workloads
that
are
to
be
deployed
and
the
installation
is
just
a
set
of
yellow
files.
It's
just
our
back
service
account
config
network
for
the
scheduler
and
a
deployment
file
for
for
the
the
container.
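A minimal sketch of routing a workload to the custom scheduler by name; the exact `schedulerName` value depends on what the KubeFlux deployment registers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mpi-launcher
spec:
  schedulerName: kube-flux   # assumed name; check the installed deployment
  containers:
  - name: app
    image: example/mpi-app:latest   # illustrative image
```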
A
Can
I
just
interrupt
quickly
here
library.
B: Yeah, yeah, just one last bit: it's still not part of a Helm install, but we will be working on that soon. And that was the last thing.
A
Thank
you
so
much.
We
have,
I
think,
about
10
minutes,
for
is
it
two
minutes
yeah
about
10
minutes
left
for
for
questions?
Anybody
has
any
questions
to
to
claudia.
I
have
a
few,
but
I
want
to
give
it
to
the
community
first
a
chance
to
ask.
C
So
I
do
you
run
like
two
different
scheduler
one
for
optimize
and
working
with
q,
flux
and
other
default
scheduler
for
normal
workloads,
or
you
just
run
one
default
scheduler
in
your
cluster.
I.
B
Don't
know
that
that
can
that
when,
when
we
run
our
experiments,
we
were
not
removing
the
other
overwriting,
the
other,
the
default
scheduler,
so
that
would
be
just
there
even
because
we
use
it
to
allocate
the
cool
flux
spot
as
well
and
yeah.
So
it's
it's
not
supposed.
B
I
mean,
that's
then,
on
the
user
preference,
but
it's
not
supposed
to
be
a
replacement
for
coupe
scheduler.
C: But do you have your own priority model, or some model by which you control, "hey, I want this job to run before others"?
B
No,
no
we're
not
managing
any
of
that.
If
I
got
your
question
correctly,
are
you
talking
like
about
queueing
or
doing
reaction?
B
Okay?
So
now
we're
not
handling
that.
So
we
have
two
options
that
we
are
looking
into.
One
is
to
integrate
the
scheduler
with
mcat
the
multi-cloud
app
dispatcher
or
with
q,
so
that
we
would
like
to
separate
the
queueing
mechanism.
You
would
have
in
a
batch
scheduler
system
with
the
scheduler,
so
we
want
to
decouple
those
two
and
integrate
them.
B
Another
option
is
to
use
to
integrate
the
qa
mechanism
in
the
flux
framework
and
put
that
into
one
like
an
extension
of
maybe
mcad
or
something,
but
we
don't
want
to
yeah
to
to
have.
We
want
to
have
the
separation
here.
C
Yeah,
I
had
a
question.
Thank
you,
claudia
for
sharing
the
implementation
of
kuflux.
You
talked
a
little
bit
about
the
gaps
and
the
expressiveness
of
the
of
the
job
with
regards
to
you
know:
allocation
restrictions
or
like
across
zones.
C
How
are
you
are
you
able
to
express
that
in
your
jobs
and
is
it?
How
are
you
going
about
expressing
that,
so
that
you
can,
it
can
be
mapped
properly
in
the
in
the
graph?
That's
included.
B
Right
so
that
that
that's
also
managed
by
so
let
me
take
back
what
I
say
so
we
can.
That
can
be
done
in
two
ways.
One
way
is
just
letting
the
scheduler
do
the
the
the
placement
based
on
the
placement
algorithm,
that's
using
so
what,
for
instance,
we
create
the
class,
the
graph
of
the
resources
and
which
has
three
availability
zones.
B
So
if
you,
if
you
select,
then
to
have
a
pack
placement
algorithm
flux
will
figure
out
by
itself
that
the
sub
graph
within
the
first
zone-
that's
where
it
has
to
go
first,
then
it
will
go
to
the
other
one.
So
at
the
user
level,
you
don't
really
need
to
do
anything
other
than
selecting
the
the
placement
algorithm.
B
That
would
fit
better
how
you
would
like
to
the
the
the
pods
to
be
placed
another
way
that
we
that
we
have
enabled,
but
that
can
that's
still
pretty
quite
simple.
We
we
had
as
from
the
yaml
file
in
the
the
job
that
you
want
to
run.
B
You
can
specify
that
you
want
like
exclusive
views
of
a
specific
label
like
it
can
be
zone
or
node,
so
that,
if
you
specify
that
you
want
exclusive
use
of
a
zone,
then
that
workload
would
again
put
into
the
exclusive
exclusively
into
a
specific
subset
of
the
graph
that
belongs
to
that
to
a
specific
zone.
For
instance.
So
you
can
do
that
in
two
ways,
but
the
easiest
and
is
to
just
let
the
scheduler
figure
that
out.
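The exact syntax for requesting exclusive use is not shown in the talk; purely as a hypothetical illustration of the idea, it might look like an annotation on the job (the annotation key here is invented for illustration only):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hpc-job
  annotations:
    kubeflux.example/exclusive: "zone"   # hypothetical key, illustration only
spec:
  template:
    spec:
      schedulerName: kube-flux           # assumed scheduler name, as above
      restartPolicy: Never
      containers:
      - name: app
        image: example/mpi-app:latest    # illustrative image
```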
A
So
a
big
one,
like
you
mentioned,
that
the
cube
scheduler
doesn't
support
like
packing
and
spreading
or
like
there's
one
of
the
gaps.
But
you
could
do
like
you
know
resource
based
like
we
have.
We
have
a
plug-in
that
is
basically
tries
to
escorting
plug-in
that
tries
to
either
spread
the
pods
based
on
node
utilization
or
pack
them
it's
kind
of
similar
to
what
you
just
described,
which
is
basically
a
global
policy
right,
like
whether
everything
is
going
to
be
it
packed
or.
C
A
We're
going
to
spread
everything,
so
I
you
might
want
to
take
a
look
at
that
and
see
like
maybe
tuning.
It
might
be
helpful.
Actually,
some
of
these
some
of
these
semantics,
but
yeah.
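The plugin being referred to here is most likely the in-tree NodeResourcesFit scoring strategy, which can approximate a global pack-or-spread policy; a sketch:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated    # packs onto busy nodes; LeastAllocated spreads
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
```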
A: The other question that I have is: at what scale do you run? If, for every pod that you're trying to schedule, you're making a call out to Flux (I'm assuming that's what's happening, right? the plugin is basically making an RPC call or something to Flux), is that a concern in terms of overhead and latency? How many pods do you expect to schedule per second, and how big is your cluster? Can you give us a sense of the scale your experiments were at?
B
Yeah,
so
for
the
latency
added
when
doing
the
rpc
grpc
calls
so
so
far
we
really
didn't
have
any
didn't
experience
any
down
in
that,
and
that's
always
in
the
order
of
milliseconds.
B
If
I
remember
that
right,
that's
you
know
yeah.
I
hope
that's.
What
I
remember
is
correct,
so
even
having
very
like
more
fine-grained
location,
rather
than
having
large
asking
for
a
large
set
of
resources
so
having
a
burst
of
allocation
requests.
That
was
always
in
the
order
of
milliseconds,
so
that
that,
for
the
test
that
we've
been
doing
some
time
ago,
that
wasn't
a
an
issue,
the
size
that
the
cluster
size
that
we've
been
running
lately
is
in
the
order
of
thousands
of
virtual
cpus.
B
C: It was 32 nodes and 3008 usable CPUs, but we've scaled much larger than that; that was simply for the MPI application testing. We did hello-world testing at greater than 16,000 ranks, and, as kind of a side note, that's a comment on the KubeFlux scalability.
B: Right, thanks for adding that. So yeah, for our tests we went up to 3,000 MPI ranks in total on AWS and about 1,800 on the OpenShift cluster on IBM Cloud; that was just a smaller cluster. And it depends: we had different scales of resource requests from the applications, smaller MPI workers or larger MPI workers, so it really depended on the workloads. But whether the resource requests were fine-grained or coarser-grained, there wasn't an issue there. Of course, whatever time it takes Flux to get an allocation has to be added to the entire scheduling machinery behind the scheduler framework, so there's also that overhead, but that's what you get anyway; on top of that baseline we are not adding much. So that's been working well for us.
B: Okay, yeah. So, for the preferred versus required affinity preferences: it depends on the workloads. But since we were running a workflow, so different applications, we had to force the use of subsets of the availability zones to make affinity work well, because with "preferred" it would span over the other zones anyway.
B
So
the
preferred
doesn't
really
work
most
of
the
time,
even
because,
if,
if
you
do,
they
prefer,
then
it's
kind
of
like
you're,
just
at
the
end
using
the
entire
cluster,
so
you're
not
really
doing
affinity,
as
you
would
like
that
spreading
over
all
the
zones
that
would
work
for
amd,
amg
workflow,
for
instance,
that's
more
compute
intensive,
so
it
doesn't
really
care
about
latency.
B
So
the
preferred
in
that
case
would
work
but
doesn't
work
for
other
applications
and
either
way
also
having
preferred
placement
preferred
affinity,
then
the
allocation
of
the
pods
would
be
not
like
the
pack
or
spread
that
could
not
really
always
work.
Fine
spread,
yes,
pack,
no,
because
it
would
anyway,
do
like
a
spread
on
on
all
the
all
the
zones.
So
it's
really
a
matter
of
know
your
application
and
if
you
know
the
application,
then
you
know
how
to
what's
needed
to
make
it
work
and
use
the
tools
that
you
have.
A
Thank
you
so
much
claudia
we're
five
minutes.
A: Thanks. So this meeting will be uploaded, and please, if you want to share the slides as well, we can post them in the notes.
A
Live
I
just
if
you
can
convert
them
into
google
slides
and
then
you
can
just
share
link
that
that
would
be
okay.