From YouTube: Kubernetes WG Batch Weekly Meeting for 20220310
A: All right, yeah, hi everyone. So, as you know, this meeting is being recorded and will be uploaded to YouTube. I was thinking that we could go through the initial roadmap that we had. Let me try to present.
A: All right, sounds good. So I tried to sketch a roadmap, and maybe roadmap is a strong word at this point; it's mostly, you know, headlines and high-level ideas of what we want to pursue. We want to fine-tune those and, as I discussed last time, we have three. I was calling them pillars or work streams.
A: If you can check here, the outline: we've got the Job API, job management, and specialized hardware. And one thing that we discussed last time was actually trying to determine the state of the art. Where are we right now?
A: So that's one of the things that we could do as well. And specifically with regards to job management, the suggestion was to start by inviting, you know, people who already built frameworks on top of Kubernetes to manage jobs, like Volcano, YuniKorn, and the batch plugins we have as well in the scheduling framework, to present their frameworks and info. The outcome here is probably to have some sort of collection:
A: What is Kubernetes missing? What exactly are they trying to do? What are the gaps we're trying to bridge, and what things do we believe should be pushed upstream?
A: So I don't know if this thing, discussing the state of the art, applies to all work streams. I think in the third work stream it does as well; Marlo from Intel, I think, is already working on a doc to document the state of the art related to what the kubelet currently supports.
A: I think this could apply to the Job API too: we could try to discuss the state of some common custom operators, like MPIJob, the TF operator, the Spark operator.
C: The different solutions that you showed in the second section would actually influence what's in here as well. So, you know, as we learn about these different solutions, I think they will highlight some additional features that weren't there, and why these systems were built, and how we can add to this new API, or, enhance the API is probably a better way to see it.
A: Maintainers to present their frameworks and discuss missing features.
B: Basically, I don't want this to turn into a show where everybody is trying to sell their product. We want to answer the specific questions, right? So in the template we could have things such as: why is X not enough, why did you introduce this, or maybe, what's your target audience?
A: Right, I would say, like, more on the operations side.
A: So I think I'm going to assign this to Aldo: propose a template.
A: Okay, that sounds good. I think this covers it for this initial step towards what we want to do with this work stream, but this is a first step that would inform the next ones. Do we want to discuss more here beyond this, at this point?
D: I think another thing to include in the templates is how they support the current job frameworks or job operators. As far as I know, Volcano did a lot of integration work to support existing stuff, so in my evaluation we might want to evaluate how that kind of integration behaves.
D: So I think our design goal is that we don't want to do the integration for every single job controller, right? We want to make the integration much more smooth and extensible, because otherwise we have to maintain a large pool of operators we can support, and that is not extensible.
A: Right, I think this is a good high-level idea, which is whether they, for example, support a job API that they propose, where anybody that wants to use it has to somehow model their workloads using that API, or whether there is some sort of, again, like Volcano, an annotation that defines the group of pods, yeah.
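The annotation-based grouping mentioned here can be sketched roughly as follows. This is an illustrative fragment only: the API group, the `minMember` field, and the `scheduling.k8s.io/group-name` annotation key follow the kube-batch/Volcano lineage as I understand it, and should be checked against the Volcano documentation rather than taken from this transcript.

```yaml
# Illustrative sketch: a PodGroup-style object declaring the gang
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: my-training-job
spec:
  minMember: 4          # schedule only when 4 pods can start together
---
# Pods opt in to the group via an annotation
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
  annotations:
    scheduling.k8s.io/group-name: my-training-job
spec:
  schedulerName: volcano
  containers:
  - name: worker
    image: my-training-image   # placeholder image name
```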
B: Yeah, if Diana wants to start this, great, because code freeze is coming and I need to work on my feature, but okay.
A: Any comments, thoughts? Would you like to discuss more here? I can go through these; there are a few things that I had in mind, but if you have something else, we can add it now, I think.
D: I think another angle to look at this is whether we can have a standardized metric, or something else, to evaluate how well this kind of integration works for each third-party operator. Like, for the Spark operator: given this size of cluster, and given this kind of workload claims, how good is it? What's the measurement methodology we can use, and what kind of tooling are we missing to measure this?
A: It's more about trying to find common ground that can be pushed upstream without influencing too much what the different frameworks are trying to do. We're trying not to impose a specific model, because if we define a metric that says, okay, this is good and this is bad, it may as well influence a specific design, and that metric may not apply to every single type of workload that these frameworks are targeting. So I'm not sure.
D: What do we gain by... yeah, I got your point, I got your point. I say that just because, in terms of technical decisions, for example, when you evaluate different tools, one question coming from your upper management is: well, give me some sort of results, why did you choose this over that? If there can be a metric proving that, that would be beneficial. But if it's not in scope, then each team evaluating these tools will have to do the work themselves to come up with one.
A: I mean, again, we can't expand the working group into too many different directions.
C: I do think it's a good idea to think about evaluation, in the sense that eventually, when we come up with something we can upstream, we'll have to evaluate it through the normal Kubernetes performance evaluation, right? So it's good to think about how we evaluate the design, so that we can prepare for, you know, benchmarking as we move things upstream.
A: So, I'm just trying to make it a little bit more concrete here: what is it that we're trying to do with this work stream?
A: I guess the group that joined today is more interested in this work stream. So here I'm saying: identifying patterns or proposing enhancements to job-level queueing, scheduling, provisioning and autoscaling. Do we...
A: And I guess we can go through these points, and maybe that will make it a little bit more concrete. Yeah, sure. The first is converging on a pattern that can potentially be supported natively by upstream Kubernetes. Do you feel that we can get to a point where we will have queuing support upstream in core Kubernetes, or do you feel this is a losing battle and the focus should be elsewhere?
C: I think it's possible, and I'm very encouraged about that. As you can see, there are already people needing it; the demand is there, right? So I think, you know, if we do a good job, we'll be able to alleviate a lot of that demand.
C: Right, that's what I'm hoping: that we can design something like that.
C: So I can share my experience. Just for clarification, I'm one of the lead developers on MCAD, which has some queueing capabilities, and, you know, we've been working with various people within the company, on-prem, and there's definitely a demand for queuing solutions.
C: And the big motivation is that many times you'll have clusters focused on one kind of workload, and many of the teams now want to collocate, you know, mixed workloads, and solve the problem of capacity issues, with or without cluster autoscaling.
C: So I've worked with many teams that are trying to solve this problem, where you can mix these different types of workloads, including different types of batch workloads, and have them coexist, setting policies around it, meaning priorities and quotas, right, and even using multiple clusters. So there's a demand; that's just from my own experience working with this. And I'm very excited to see this, because some of the features that we were using...
C: I already see them coming up as we are talking about this, so it would be nice to have this upstream. If that helps; I'm just sharing that.
A: Multi-cluster, single cluster, autoscaled or not, it doesn't matter, because the way that I guess we want to model capacity is not by looking at existing nodes, but by saying: okay, here is an object that defines how many resources you have, and here is a workload API that you can queue and use that capacity.
A: Now, how do we implement those in the background? That would depend; it could be custom; you could have multiple implementations for it, I guess.
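To make the "capacity object plus queued workload" idea concrete, here is a purely hypothetical sketch. None of these kinds or fields existed at the time of this discussion; the names `Capacity`, `QueuedWorkload`, `capacityRef`, and `podSets` are invented here just to illustrate the shape being described.

```yaml
# Hypothetical: an object declaring available capacity, decoupled from nodes
kind: Capacity
metadata:
  name: team-a
spec:
  quota:
    cpu: "1000"
    memory: 4Ti
---
# Hypothetical: a workload queued against that capacity; an implementation
# would admit it only when the requested resources fit within the quota
kind: QueuedWorkload
metadata:
  name: training-run-1
spec:
  capacityRef: team-a
  podSets:
  - count: 8
    requests:
      cpu: "4"
      memory: 16Gi
```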
C: Yeah, exactly. I kind of think of it, and I haven't thought through it all, but kind of like in the scheduler now, where you have some plug-in models, right, where you can add and implement multiple policies or multiple solutions.
E: So, one of the points, I'm curious, in terms of choosing whether to implement something in-tree or out-of-tree: some of the concepts basically correspond one-to-one to the in-tree concepts and APIs, like quota, like the batch/v1 Job, so you're basically functionally equivalent, but you're hoping...
E: This is something that's more suitable for batch workloads. Then does it make sense, instead of introducing something brand new, to extend whatever the current concept is that's more geared towards, you know, serving applications, and just put that as its batch counterpart? For example, I'm thinking of some other conversations where people were asking: hey, does it make sense to have, instead of StatefulSets, something like stateful jobs?
E: Does it make sense, just because they're very much aligned with the abstraction level of existing Kubernetes native objects, right?
A: So we had these discussions. There are two things that you mentioned here, two really good examples: the Job API and the quota system. Let me start with the Job API. For a stateful-set-like job, the sort of equivalent we have now is the Indexed Job; it behaves exactly like a StatefulSet, but the pods can terminate, so it matches the batch workload behavior better. Now, even if we are trying to enhance the Job, and that is the purpose of the first work stream, it felt that at some point it will not be possible to force every single type of workload to be modeled using only a single batch/v1 Job. There will be cases that are special, like Spark: Spark is going to be extremely hard to model as a batch/v1 Job, I don't know, it's...
A: It has some custom setup, it needs its own operator and lifecycle management, and so we need to accommodate those. Those are not a small group of workloads.
A: I'm not sure if I'm being clear here, but it's not going to be one type of Job API that will fit everything. With regards to quota:
A: The current quota system does not match at all the batch requirements of queueing, quota management, and fair sharing, and also requirements related to, you know, fungibility and autoscaling. That is a big set of requirements to try to convince the community to push into the current quota system. But I'm optimistic that we could have something designed for jobs that we could push upstream.
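For reference, the existing core ResourceQuota object being contrasted here caps aggregate usage per namespace and admits or rejects objects at creation time; it has no notion of queueing, ordering, or fair sharing across jobs:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    count/jobs.batch: "50"   # object-count cap on Jobs in the namespace
```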
E: Yeah, I agree, right. And what I was alluding to is not necessarily to extend the current Job API, the Job type, or the current quota. But rather, if these are things that are basically one-to-one equivalent to their, you know, non-batch counterparts, then you can think of them as core, and if they're core, it kind of makes sense to push them directly into core. So I guess I've been using that as a criterion.
C: I mean, I thought so, because I was planning on it. I had to join another call, but yeah.
A: No, I think we should always end on time. I can extend this meeting to 45 minutes, but it seems that this one is less popular than the other meeting. Last week we got, as I mentioned, like 17 people on the call, so we'll see how this goes forward.
A: Okay, so I guess, please take a look at the roadmap again, make suggestions and edits on how we can move forward with this. Perhaps next week, in the other, longer time slot, we can go again through the Job API and the specialized hardware.