From YouTube: Kubernetes WG Batch Weekly Meeting for 20220331
A: I guess that's started. Hi everyone, today is 31st March 2022 and this is the Kubernetes Batch Working Group. We have two items on the agenda today; the first one is from Hwang. Please take it away.
B: All right, hi everyone. I'm Wei, and today I'm going to co-present the PodGroup API proposal to the working group. It's not a brand new proposal; if you're familiar with the upstream, PodGroup is a proposal we presented about two years ago, but at that time the scheduling framework wasn't ready, and that proposal was based on a sub-project.
B: Okay, and the motivation is pretty straightforward: Kubernetes lacks an API to guarantee the semantics of running a group of pods together, all or nothing. Without this guarantee, sometimes your group of pods can be scheduled, but sometimes the pods may get partially scheduled, or even worse, different groups of pods can run into a deadlock among themselves, so that every group falls into a pending state.
B: The goal is to introduce this PodGroup API as a scheduling primitive, so that it acts as a building block along with the other scheduling-directive primitives like pod affinity and topology spread. We have seen a lot of use cases that depend on the PodGroup semantics, and also some complex scenarios: some TensorFlow jobs, for example, essentially use the PodGroup semantics together with other existing primitives like pod affinity.
B: One case is TensorFlow. Because of the lack of this PodGroup API, they schedule the PS (parameter server) pods first, and then schedule the trainer or worker pods with pod affinity to the PS pods. But you can see that this is a workaround, because in theory they want the PS and the workers running all together.
B: So in theory, if we can provide the fundamental PodGroup API, they can just use it as a building block and stack whatever other scheduling primitives they need to fit their workflow. This is why we want to propose it in upstream Kubernetes, instead of as a CRD in some other repo.
B: So basically we want to ensure this is a very fundamental scheduling primitive. In terms of implementation, we can support different kinds of implementations: upstream can have its own, and others like Volcano and YuniKorn can use the same API but have different implementations.
B: We also want to ensure that we have efficient scheduling. In terms of preemption, we should be aware of the pod group: the preemption strategy should take the pod group as a whole unit when considering its preemption decisions. And the last one is that we have to have a clear lifecycle for how to manage the pod group. So these are basically the goals. Any questions so far, or in the comments?
B: Okay, if not, I will take a very quick look at the existing definitions of pod groups. The first one is the scheduler-plugins repo, where we leverage the scheduling framework to host a bunch of plugins contributed by different companies, and there is one called coscheduling. So we already have a PodGroup definition there. And Volcano, as I mentioned, has its own.
B: MinMember means that we can schedule this bunch of pods together. MinResources is a field that acts like a pre-check, a fail-fast pass: you can define the minimum resources, and if the minimum resources are not satisfied, we fail the scheduling of the whole pod group. So it's more of a quick check to fail fast; in theory, passing the minResources check doesn't mean your whole bunch of pods can all be scheduled together.
B: The last one is that we define the scheduling timeout, to set a threshold for aborting the scheduling attempt for the whole pod group. So these are the common settings; you can see that some settings are common across the different scheduler vendors. Sorry, is the question what minResources is for? Yes, it's just a very preliminary resource check, just resources. Yes.
E: But how does that work with autoscaled environments, where you don't know how much resources you could get in the future?
B: Yeah, it doesn't. You'll see later that I propose we don't introduce this field, because it's not a mandatory field, and passing this check doesn't mean your whole bunch of pods can be scheduled. But let's keep it noted for now, because some other vendors also use this field; in my opinion it doesn't matter too much whether we have this field or not.
B: So if we look at the definition in Volcano, a lot of fields are almost the same, like minMember and minResources; basically the same. Additionally, they have the priorityClassName. In my opinion, in most cases it's guaranteed that the pods associated with the pod group should have the same priority.
B: I think queue is a good field to have, because right now, although the in-tree scheduling framework doesn't support multiple queues, we can still have this field. On one side, we can enable different scheduler vendors to support it internally; on the other hand, in the future, if the scheduling framework becomes able to support multiple queues, we can just natively support that for better scheduling efficiency. So this field I think we can add. The other one is a bit more interesting.
B: In Volcano, the role is mapped by a string key, and each role can have its own specific minMember setting. For example, a Spark job has two kinds of roles, the driver and the executor, and usually they have different minMember requirements, so you can define something like driver: 1, executor: 3. But in my opinion, a plain string is a little weak as a link to a particular role.
B: So in my later proposal I will make it more standardized and add some other fields for this.

E: Can I make a suggestion?
B: Sure, yeah; let me just finish the comparison first. So this is the definition for Volcano, and this is for YuniKorn. YuniKorn's is not a CRD, but the core fields there are also minMember and minResources. It has some other fields too; it's like they are mixing some pod attributes into the task group, which I don't think is appropriate.
B: Okay, let's go to the definition I want to propose. The one I want to propose takes all the current scheduler definitions into account. Let's define the PodGroup API: the PodGroup will have a spec and a status. For the status, I don't think we have enough time to discuss it today; we can leave it to next time or just discuss it offline. I just want to come to a consensus on the spec definition.
B: For the queue field, I think we can very likely have it, although it can just default to an empty string, which means nothing to the in-tree scheduling framework implementation. Okay, the next field is subsets. Each subset, and we could also name it sub-pod-group or subgroup, represents a group of pods.
B: Pods can link to the PodGroup object, but these pods belong to different roles, so the pods need a way to tell the scheduler which pods belong to this role and which pods belong to another role. That's why we introduce the label selector: to distinguish the different kinds of pods within the same pod group.
B: In Spark terms, we can define one role as the driver; in its label selector you can have a unique set of labels to identify the driver pods, and the same for the executor pods. So that's the subset: we have a bunch of subsets, and the schedulability rule is as follows.
B: We need all the pods in all the subsets to be co-schedulable, and then we schedule the entire pod group; otherwise we don't schedule any of them. And scheduleTimeoutSeconds is a setting, like a hard limit threshold, to abort the whole attempt to schedule a pod group. So this is the basic core of the new PodGroup spec I want to define. Any questions?
F: Sorry, I have a question; can you explain again what this subgroup is?
B: What the subgroup is? Oh, sure.
B: Yeah, this is in comparison with the current scheduler-plugins definition, where one pod group is supposed to be one role. But in a real case, usually one pod group maps to a job, and the job has different roles. If you think about a real case, a Spark job may have one driver and a couple of executors. So that kind of single-role definition is too simplified; it doesn't represent the real shape of jobs.
B: That's why we want to introduce the field called subsets, or subgroups. In each subset or subgroup you can define what kind of role it is, it has its own specific minMember requirement, and the label selector is there for the scheduler to locate the pods of that particular role.
B: In theory, most of the training jobs have these few subsets, but it's also possible that you backfill, running workers alone; you already have one kind of role running there. That's also possible, I would say. Also, Alex and I have seen a scenario where minMember equals zero.
E: It might be easier to describe what this is needed for. So the label selector identifies the group of pods that belong to this subset, but you don't know whether these pods have been created or not; that is, whether all the pods that should be created have been created.
E: The minMember basically tells whoever is working on this structure, the scheduler or any controller, to wait for at least this number of pods to be created with this label selector. That leads me to my question, because if you think about it, this is kind of backwards.
E: Basically, you're creating the pods and then you're trying to group them together: define the grouping of these pods and then schedule them together. And to me, and I have argued about this multiple times before, it would have been much better if we didn't even create the pods in the first place until the resources are available.
E: Yeah, I agree. I'm just saying that this approach in general, I know it is compatible, but even on its own it could be problematic.
B: If we can leverage some external controller, like Kueue, to admit a bunch of pods at the same time, then life will be easier for the scheduler to schedule this kind of bunch of pods. Yes.
G: Actually, I have a question about the APIs as such. For example, Abdullah and others have defined the Kueue Workload API, right? I'm sure you might have thought about using the same API; did you think about it? Because I looked at some of the definitions, and I think they are more or less similar to the definitions proposed there, but obviously we need something else.
B: Yeah, that's a good question. I thought of that as well, and in the beginning I also suggested to Abdullah that we could have a neutral name for the Kueue Workload, so that it doesn't necessarily imply controller behavior; instead it could cover both controller behavior and scheduling behavior. But it ended up differently.
B
Mod
represents
that
particular
semantics
that
runs
outside
is
associated
with,
like
caller
with
automation,
so
I
want
api
to
fit
two
purposes
that
can
be
possible,
but
in
in
practical
I
don't
think
this
involves
additional
efforts
to
maintain
this
to
to
fit
to
purpose
this
thing,
so
that
the
additional
and
the
redundant
fields
had
to
be
set
in
one
api
and,
on
the
other
hand,
is
that
I
do
want
to
propose
this
in
kubernetes,
electric
kubernetes
and
for
cure
the
web
cloud.
G: So, to be clear, what you are saying is that you would like to propose this API in k/k?

B: Yes.

G: As a top-level API?

B: Yes, yes.
H: This seems to be targeting only the scheduler part; it's only there to tell the scheduler how it should be treating the pods coming from here. And if so, it's very decoupled from the pod template, or from pods in general. So, for example, if I'm creating a Job or any other kind of workload resource, I'm required to create the PodGroup separately. Have you considered embedding the information about the PodGroup inside the pod template?
B: The PodGroup is actually composed, in this case, of different roles of pods. It doesn't define things like the images for the different kinds of roles; it virtually organizes the different kinds of role pods. If we define this in the pod template, then the driver pods and the executor pods don't have a way to be virtually organized as a unit that can be easily scheduled by the scheduler. So that's the main concern about why we don't define it as something embedded in the pod spec, but as a top-level API.
B: There are also some top-level settings that cannot easily be defined at the pod level: if we have the queue name, you would have to define it in duplicate, and the scheduleTimeoutSeconds belongs at the top level, not at the subset level. So that's also a problem.
B: Sorry, can you say that again? So basically, yes, the label selector can help the scheduler locate the pods, but there are some top-level settings, at the job or group level, like the queue name and the scheduleTimeoutSeconds. So yeah, we could certainly duplicate these settings across the different pod templates.
B: For core Kubernetes, what we want is a very basic building block, a fundamental ability to express this kind of semantics. I wouldn't call it an additional nice-to-have feature; it's more something for you to build more complex workloads on top of. Right now, because of this limitation, a lot of workloads are built in a very weird way, with no guaranteed way to achieve the whole batch workflow. So that is basically why I have been keen on proposing this in core Kubernetes. Okay.
H: So currently, as it stands, I'm leaning towards what Abdullah is saying: it'll be very hard to get this accepted in the core. It would be easier if it were something embedded inside the existing pod spec or pod template, whatever, where you can easily bubble it up and reuse it within the current controllers, and you have a little bit more control over how the pods are being created for your application, no matter whether that's a DaemonSet, Deployment, Job, or whatever else.
H: The way you are presenting it, this currently looks like an entirely separate thing, and I'm pretty convinced that the pushback will be: if this is a separate thing that just exists, and the scheduler would be the only one using it, that could easily be a CRD, with the scheduler extension being a plugin that you just have running in your environment.
H: If we would push, or if you would want to have this in the core, I think you would need a tighter coupling with the existing APIs to be able to get it accepted. As it stands currently, I'm pretty convinced that any API reviewer will reject this.
D: I would like to give some context. I think, as you said, it would be ideal if we could embed these fields into existing APIs. Naturally, the API that fits is probably the Job, because it already groups a lot of pods, and you could set these configurations there, for example. But the reality is that the Job cannot satisfy all the requirements that different types of batch workloads have.
D: So we need something in between, to be able to represent arbitrary workloads that ultimately need to be scheduled together.

H: What I'm asking is, if we just take a step back: is this something that would only benefit Jobs and nothing else, or would something similar, or something along those lines, be used in Deployments, or DaemonSets, or ReplicaSets, or any of the other workloads?
B: I think it's universal. I'll give an example with non-batch workloads: internally we have a team that wants to use pod affinity, but due to limited resources they usually fall into a situation where the first couple of pods get scheduled to the same node, while a later pod that comes in gets stuck pending there, because of the limited resources on that node.
B
So
in
terms
of
their
requirement,
they
want
to
use
part
affinity
and
they
do
want
all
the
paths
using
the
power
affinity
to
be
co-scheduled
to
the
node,
but
it
doesn't
mean
they're
necessary.
They
are
using
a
batch
workflows,
they
are
maybe
using
the
standard
deployment.
So
that
is
so.
That's
why
I
say
this
can
be
a
universal
requirement
and
I
think
it's
a
fundamental
scheduling.
The
clustering.
E: Right, like the example that we give: imagine, if you're familiar with Istio, for example, you have a proxy associated with each service pod, and for security reasons you don't want that proxy to be a container inside the same pod.
E: The proxy is a trusted component, but the main worker is not, so what people usually want to do is split out the proxy; but the proxy has to be running on the same node, and they would communicate locally. So that's the other case here, beyond jobs.
E: Some batch workloads actually use Deployments, in a sense. Imagine the workers, for example in Spark: you could deploy them as a Deployment that the driver communicates with to give them workloads, to give them tasks, basically, to chew on. So I would agree that this is not limited to the Job API in general; it could be something more abstract.
E: I have another proposal here that I'll share with the community, along the same lines in general. The idea at a higher level is: right now the pod represents two things. It represents the application itself, what you want to run, and it also represents the provisioning requirements, what exact resources you want.
E: Imagine we have a resource called Reservation; it's basically a pod that doesn't start, that the scheduler is aware of, the cluster autoscaler is aware of, and the kubelet is aware of. So you could create a Reservation that the scheduler schedules; the kubelet also takes it into account when it does its admission logic, and the cluster autoscaler takes it into account when it's calculating whether or not to provision new resources. You create this Reservation resource before you create your application pod, and then, when you create the pod, imagine the pod has an affinity to that reservation. So the scheduler's job here would be:
E: If a pod has an affinity to a reservation, the scheduler would just basically schedule the pod in the same place where that reservation is; if the reservation doesn't exist, then the pod just continues to be pending. And if you have that decoupling of resource provisioning from the application, you could do even more powerful stuff.
E: You could have a controller that creates a ReservationSet, you know, and the scheduler would still be working not at the group level, but at the individual reservation level, assuming that a reservation is a single pod, for example. But I don't want to confuse people with wishy-washy, hand-waving talk; I'll share something along the same lines as well.
E
I
think
it's
really
interesting,
like
you
know,
area
in
general,
like
how
do
we?
How
do
we
solve
this
like
all
or
nothing
is,
is
a
is
a
really
big
problem
that
we
need
to
solve.
I
I
completely
agree
with
way
here,
and
this
is
a
really
good
start,
like
the
pod
group
idea,
like
resurfacing
it
trying
to
discuss
it.
Maybe
we
can
continue
discussion
on
the
document
that
you
have
way
there
and
and
see
where
we
can
go
from
there.
E
Maybe
we
can
fine-tune
it,
make
it
more
potentially
acceptable
to
the
api
reverb.
Maybe
we
can
get
some
of
the
api
viewers
to
take
a
first
pass
at
it
as
well
yeah
exactly
what
is
that
yeah?
What
things
that
they
could
spot.
B
I
I
I
like
I
want
to
see
I
like
to
add
something
I
learned
from
the
users
I
find
about
eight
to
eighty
percent
of
the
users
who
choose
to
use
volcano
or
another
custom
schedulers
just
because
they
just
want
to
use
the
gun,
scheduling
and
it's
already
be
a
basic
feature
for
the
batch
group
or
especially
in
distributed
training
of
the
deep
learning.
I
I
think
if
we
can
support
it
in
the
kiki,
and
it
would
be
a
good
opportunity
to
have
to
bring
the
users
back
to
the
native
kubernetes
system
yeah,
I
think
no
one
really
likes
to
run
through
scheduler
in
the
same
cluster
yeah
it
all.
It
also
can
lead
a
lot
of
problems,
so
I
think
it.
It's
also
the
the
goal
of
our
work:
growth,
yeah.
B
Yeah
well,
another
thing
is
that
with
this
api,
if
we
can
make
it
into
business
like
kubernetes,
it's
also
beneficial
to
other
scheduled
community,
because
they
don't
want
to
the
user,
doesn't
want
need
to
diverge
into
a
different
kind
of
version.
Sorry,
api
definition:
it's
just
what
kind
of
implications
they
choose.
They
can
choose
the
default
scheduler,
they
can
choose
the
volcano
so
but
the
workplace
is
consistent
across
different
implementations.
B: Okay, lastly, there's still one piece I want to discuss: the pod group's lifecycle, which has also been discussed previously. How is this PodGroup API object associated with the pod spec? I also discussed this with Alex offline. There are a couple of ways, but we think the best way is that the pod spec has a field, called something like podGroupName or podGroup, and that field associates the pod with a top-level API object called the PodGroup.
A: So I think we have another item as well, and we're probably not going to get to it. So I apologize to Abdullah; I'll probably move that item to next week and we'll discuss it then. Thank you so much for coming and presenting; this was an interesting discussion, and we can continue the conversation on Slack, or continue commenting on the document that you have here, if you want to share it or put a link in the agenda. Yeah.