From YouTube: Kubernetes WG Batch Weekly Meeting for 20220428
B: So this particular document came out of, basically, a different one that was done through SIG Scheduling and SIG Node, where we were trying to find all the uncovered CPU use cases — I'll put a link to that doc here if you want to read up more. But when you go through and start digging through all the things not covered by kubelet, there's a whole bunch of features that are not there. So you can't mix pinned with shared cores, and you can't choose which NUMA zone.
B: So if you want to always schedule to NUMA zone zero, you can't do that. You can't spread containers across NUMA zones. Currently, different types of cores are not covered, so if you have a CPU with two different types of cores, you can't choose. Also, cores are allocated by container, not by pod, so I can't have a pod choosing which container gets what, with sub-containers inside — which is sometimes used in HPC compute.
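(For context, a minimal sketch — in Go, against the core/v1 API, and assuming the kubelet's static CPU Manager policy is enabled — of the one pinning case that is covered today: a Guaranteed-QoS container with an integer CPU request gets exclusive cores. The pod name and image here are hypothetical; the comments note what the gaps above mean in practice.)

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pinnedPod builds a Guaranteed-QoS pod: with the kubelet's static CPU
// Manager policy, an integer CPU request equal to the limit gives the
// container exclusive (pinned) cores. Note what is NOT expressible here:
// no field mixes pinned with shared cores in one container, selects a
// NUMA zone, or picks a core type; allocation is per container, not per pod.
func pinnedPod() *corev1.Pod {
	cpu := resource.MustParse("4")  // must be a whole number of CPUs for pinning
	mem := resource.MustParse("8Gi")
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceCPU: cpu, corev1.ResourceMemory: mem},
		Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu, corev1.ResourceMemory: mem},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hpc-task"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "worker",
				Image:     "registry.example/worker:latest", // hypothetical image
				Resources: res,
			}},
		},
	}
}
```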
B: And when you start looking at getting all of these features in, it just snowballs, right? Because it's beyond CPU management. Once you fix the CPU part, now you have to go fix the memory part, and what are the consequences, etc.
B: We need a better way to handle this. So when I was speaking with Derek Carr, who's one of the SIG Node people, he suggested that we split kubelet into control and data planes. So that's where that piece comes from.
B: And basically, for the control plane, we end up doing a plug-in type model. So we roll the current Topology Manager, CPU Manager, Memory Manager, and Device Manager components up into a plug-in — following, kind of, how we do other plugins now — and then continue to expose resources through the data plane to the scheduler.
B
So
we're
looking
for
requ
for
comments-
and
I
think
this
is
incredibly
valid
for
batch
case
scheduling,
because
you
are
looking
for
high
performance
compute.
So
these
things
do
change
what
your,
what
your
performance
metrics
are
and
making
this
so
that
we
can
have
various
plugins
for
particular
use
cases
for
cpu
management.
I
think,
is
helpful
instead
of
trying
to
configure
kubelet
all
the
way
across,
and
you
know
kind
of
hard
setting.
C: I have a few comments related to the proposal that you have, Marlo. One of the things that I think would be extremely beneficial in this case would be if we captured, like, more design details on what —
C: How exactly do we want to achieve this? At what level — is it at the container-manager level that we are going to have a component exposed that handles some of these plug-ins and maybe registers them, similar to how the device plugin works? And the other important thing would be: once these plugins are introduced in the cluster, how do the existing components coexist?
C: Some users might be completely okay having Topology Manager, Memory Manager, and all these components the way they are — so how do they coexist, and what is the plan for transition? I think those are the key questions here.
B: The joke is "bug-for-bug compatibility," but that's essentially what we would be focusing on, and that is listed in the non-goals. Very specifically, in the non-goals section: we will not break any existing use cases for Topology Manager, Memory Manager, or Device Manager. But additionally, before we start doing design, I want to finish getting in comments on what the requirements are, and then we can start looking at a more specific design. And the point is that there are pieces in the non-goals that do restrict us.
C: I think what would be useful, for example — I saw just the statement that plugins should not write to the API themselves, but that the plugins would have to communicate with the kubelet APIs. If we captured some of those components and how that interaction happens, that would be beneficial. The other thing that would be beneficial here would be, from a user perspective and from a SIG's perspective:
C: How do we see these plugins being deployed in the cluster, and how does their workflow change, or look, with the introduction of this mechanism? I think at this point in time it would be beneficial — even if we don't capture the design — to have just an end-to-end flow of how we foresee this implementation happening.
D: Oh, is it already clear whether the plugins are, I don't know, like, extra DaemonSets, or whether they are in-tree, or whether people are expected to author plugins and compile their own kubelet? What is that?
B: Yeah, so that's somewhat open. I envisioned us working through some sort of gRPC interface for these, because it needs to work with two different things. Some people will want to deploy it as a DaemonSet, and some people will want to use, say, NRI — the NRI folks still don't really have a good way to get their resources through, so they're also looking to leverage this.
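(To make the shape of that idea concrete, a hypothetical sketch of what such a gRPC-registered resource-management plugin contract might look like, loosely modeled on the device-plugin pattern of registering over a socket and then serving allocation callbacks. None of these types exist in Kubernetes; every name here is invented for illustration.)

```go
package resourceplugin

// Hypothetical sketch only: a gRPC-style contract a CPU/memory/topology
// plugin might implement under the proposed control-plane/data-plane split.
// A plugin could run as a DaemonSet or behind NRI; either way it would
// register with the kubelet and then answer allocation requests.

import "context"

// AllocateRequest describes one container's resource ask, as the kubelet
// control plane would hand it to the plugin.
type AllocateRequest struct {
	PodUID        string
	ContainerName string
	CPURequest    int64 // millicores
	MemoryRequest int64 // bytes
}

// AllocateResponse is the plugin's placement decision, which the kubelet
// data plane would apply (cgroups/cpusets) and expose to the scheduler.
type AllocateResponse struct {
	PinnedCPUs []int // exclusive core IDs; empty means "use the shared pool"
	NUMANodes  []int // memory/NUMA affinity chosen by the plugin
}

// ResourceManager is what a plugin would serve over gRPC after
// registering with the kubelet, replacing the in-tree CPU/Memory/Topology
// Manager logic for pods it claims.
type ResourceManager interface {
	Allocate(ctx context.Context, req *AllocateRequest) (*AllocateResponse, error)
	Deallocate(ctx context.Context, podUID string) error
}
```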
A: Just to follow up on that: the current managers, they are linked in, right? Like, they're compiled into the kubelet?
B: They're compiled in, and currently, the way the other out-of-tree managers work is that they turn off all the capabilities in the kubelet and then run outside, managing the resources outside. We're looking at a more native way of doing this, and this also enables, you know, groups like KubeVirt to release plug-ins that are specific to particular use cases.
D: I have another one — going back to Patrick's proposal. I don't know how this relates to it, and the reason I ask is that it's already an approved KEP: even if the implementation was not approved, the KEP itself was approved. So is there any relationship, and is this being reconciled with that?
D: It was not? I misremember, then — I'll follow up on it.
A: Yeah, so that is basically my first question: we have Patrick's proposal on dynamic resource allocation, and you already linked that from your doc, I think.
B: Plugins, yeah.
A: But still, for us — I mean, people who are not deeply involved in this whole thing — it is not, at least for me, quite clear how they all play together: what you're proposing, Patrick's proposal, and the topology-aware scheduling set of controllers and efforts that are going on. So it would be nice if, as part of that RFC, if possible, it clarified what each of these things —
A
What
do
each
like
of
these?
You
know
efforts,
try
to
address
and
whether
they
are
complementary
or
gonna.
At
least
that
would
help
me
understand
better
whether
we
should,
for
example,
continue
to
yeah
like
whether
there's
any
dependencies,
whether
they
can
where
they
can
make
progress
like
you,
know,
independently,
etc.
A
Okay,
yeah-
and
it's
still
like-
because
you
also
mention
here
things
related
to
this.
Like
the
affinity,
I
mean
the
topology.
We're
scheduling
is
kind
of
related
to
that
right,
like
it's,
not
necessarily
addressing
this
specific
point,
but
it
is
trying
to
expose
node
topology
to
the
scheduler
right.
Maybe
I'm
misunderstanding
something.
A: Right, but you need an interface between the two things, right? For example — I don't know whether the affinities are going to be at the container level or the pod level — but once we define them, you need the scheduler to act based on them, like the things that we're doing right now. And then, once it gets to the kubelet, you want to communicate that decision to the kubelet and say: okay, it was, for example, scheduled on that specific NUMA node.
A
So
so
I
understand
that
they
are
complementary.
Like
one
is
the
cubelet
one
is
the
scheduler,
but
there
is
some
interfacing
that
needs
to
be
clarified
a
little
bit
here.
So
that's,
I
guess
what
I'm
asking.
A
Okay,
yeah,
I
mean
it
would
be
good
like
this
is
related
to
the
future
design
of
this.
It's
taking
those
into
account,
because
if
you
know,
if
you
need
specific
apis
or
specific
plumbing,
to
be
done
to
relay
this
type
of
information
down
to
the
cubelet,
we
should
probably
look
at
this
like
from
the
from
the
get-go
from
the
beginning,
yeah.
B: And I'm not even sure it's right to do more than what the kubelet's currently doing at that level, because you do have Prometheus, and you do have other ways to gather information and then make scheduling decisions, outside of having to route everything through kubelet. So there's more than one route, if, say, Francesco has a problem.
A: Okay — because you mentioned "cannot choose which NUMA zone": is that a user choice? Or do you want that to be a user choice, or...?
B: I don't think it should be a user choice, necessarily. But if, you know, for instance, given the specifics of your system, all your zone zeros are closer to, or have a better network, or — it's contrived, right, the NUMA-zone example, but it's really any affinity below the node level. So if I want something next to particular switches, right, because I know they're faster, because I have some sort of mixed switching, I should be able to choose according to affinity of resources, instead of just choosing affinity of the node.
E: Hey, hello. From the topology-aware scheduling perspective, this is partially related to this topic. Since the conversation went toward this possibility, I just want to give some bit of context from the perspective of what we are doing in the topology-aware scheduling initiative we're pursuing. We are actually, intentionally, avoiding having the scheduler drive the choice of the NUMA zone — this is intentional, and there is not a clear cut.
E: Actually, this was a quite hot topic. Back in time, when the decision was made, the consensus in the community was that exposing this level of detail, or having the scheduler driving this decision, would go against, basically, the look and feel of Kubernetes — so it was something we were avoiding. We are exploring options — we have a door open to exploring options — but that felt too imperative, in a way.
A: So, just to clarify: currently, the thing that you're doing with NUMA-aware scheduling is to make sure that there are enough zones — yes — on this node that can handle this, without the — yes, yes.
E: Yes, yes, yes. Again, this was something that felt too — I don't have the right word, so bear with me — it felt, you know, really too imperative. And it's still open; this is just to give you the full context. We are not, you know, we are not doing that just yet, for this reason. It's open: if we want to have another discussion and say, hey, there is a way to do that — well, we are open to that. But today we don't do that, for this reason. Thank you.
C: Yeah — and the other thing related to this is: if, for example, the scheduler decides on a particular NUMA node, the kubelet is the one that is actually responsible for allocating those resources, so there would have to be a way to relay that information to the kubelet.
C: First of all, the whole idea of doing this at all, as Francesco explained, is kind of controversial. And then, from an implementation point of view, even if the scheduler makes a decision, the kubelet could make another decision, and then we are kind of back to square one. That is another consideration here.
A: My second question — a high-level question — is a follow-up to exactly this. I've heard from the Slurm community — and there are probably people on this call way more experienced with that scheduler —
A: The Slurm scheduler is able to select a specific NUMA zone within a node; that's part of their bread and butter — they are able to make these decisions, it's part of their scheduling logic. And on one of the calls that I had with the Slurm community in the past, they said they are struggling with exactly this on Kubernetes. They want to deploy on Kubernetes — whether it's below, or under, or in parallel — but one thing they're struggling with is exactly this.
A: I guess another thing that I would advise, Marlo, is to perhaps reach out to the Slurm community, because they could give some feedback on what they expect to control. That would open the door for Slurm to be deployed on Kubernetes in a more native way, which would be a win for the HPC community in general.
A: All right, thank you, Marlo. Second agenda item — okay, it's me, so I just want to mention this quickly. Aldo and I were trying to write an issue to explain this use case, but I didn't get the time to — it was just yesterday that this came up. We had a customer that was trying to use Indexed Job to run, basically, many independent tasks. For the job itself as a whole, they don't care whether the job as a whole succeeds or fails.
A
They
just
want
every
index
to
actually
execute
and
at
the
end
you
they
would
just
want
to
see.
Okay,
for
example,
you
execute
one
that
you
say
you
say
1000
completions
that
you
want
to
to
execute,
and
so
you
will
have
indices
from
0
to
999.
A
They
want
every
index
to
get
a
chance
to
get
executed
and
then,
at
the
end
you
get
this
like,
maybe
in
the
job
status,
for
example,
which
ones
have
failed,
which
one
succeeded.
This,
like
the
related
issue,
that
aldo
linked
to
here
is
is
in
the
same
spirit.
One
solution
could
be
to,
for
example,
have
back
off
limits,
pay
an
index
rather
than
at
the
job
level,
but
those
backup
limits
don't
decide
whether
the
whole
job
fails
or
not.
A: — but rather how many times to retry a specific index, while always trying every index. I don't know if this use case was discussed before. Especially Maciej — I don't know if this was discussed in the past in SIG Apps, when the Job API was being developed.
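(For reference, a minimal sketch — in Go, against the batch/v1 API available around the time of this meeting — of the Indexed Job shape being discussed; the job name and image are hypothetical. The point it illustrates is that backoffLimit exists only at the job level, which is exactly the per-index gap raised above.)

```go
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// indexedJob builds an Indexed Job with 1000 completions, i.e. indices
// 0..999, each pod receiving its index via the JOB_COMPLETION_INDEX env
// var / completion-index annotation. Note: BackoffLimit is a single
// job-level retry budget; there is no per-index backoff, so one flaky
// index can consume retries meant for the others.
func indexedJob() *batchv1.Job {
	completions := int32(1000)
	parallelism := int32(50)
	backoff := int32(6)
	mode := batchv1.IndexedCompletion
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "many-tasks"},
		Spec: batchv1.JobSpec{
			CompletionMode: &mode,
			Completions:    &completions,
			Parallelism:    &parallelism,
			BackoffLimit:   &backoff, // job-level only — the gap discussed here
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "task",
						Image: "registry.example/task:latest", // hypothetical image
					}},
				},
			},
		},
	}
}
```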
F: No, I don't recall any discussion as such. Something of a similar nature we already have, but in a much more simplified version, where you can —
F: If I remember correctly, you don't define completions, and you can define parallelism, and it will just kick off as many as parallelism says, and the first one that completes makes the entire job completed. I can't remember if that was with completions not set, or one of them, but what you're describing would be a reasonable approach.
A: Yeah. So there is a precedent to this — you see my screen, right? — in AWS Batch. In AWS Batch there are array jobs, and they have this mode where there is a job, and then there are what they call child jobs, and it says you can cancel or terminate individual child jobs without affecting the other child jobs. So I guess that's the mode they want to operate in, yeah.
A
I
think
it's
reasonable
to
achieve
some
priority
with,
with,
in
general,
like
how
index
jobs
work
in
other
schedulers.
F: One question: the description that you mentioned says that if a child job fails, the parent job also fails. I'm —
F: Oh, I see. So you basically want to run everything, but not continue running the indexes that failed — just leave them where they are — and the overall status will reflect it: if at least one failed, that's still a failed job. Even if it's, let's say, 99 successes out of 100 and one failure, we still want to see the overall status as failed; we just won't retry that single index.
A: We could have a definition for "failed" as well — for example, if a specific percentage fails — but maybe this is too much, yeah.
F: So I would probably double-check what we currently do, and it definitely sounds like a reasonable approach: being able to specify, in an Indexed Job, "just make sure that it all succeeds, and do whatever it takes" — even if that means re-running a single index 100 times or more to get it executed — or "just let it fail and reflect that in the status." That's definitely something I would be supportive of.
D: Yeah, I think the request can be simplified as a failure policy. The current policy is that when we declare a job failed, we remove any running pods.
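(As a thought experiment only — not an existing Kubernetes API — a sketch of how that failure-policy idea could be expressed on the Job spec; every field name here is invented for illustration.)

```go
package jobapi

// Hypothetical sketch, not an existing API: one way the failure-policy
// simplification above could look if added to the Job spec.

// JobFailurePolicy separates "when is the job failed" from "what do we
// do about running pods", instead of today's single behavior of
// deleting all running pods once the job is declared failed.
type JobFailurePolicy struct {
	// MaxFailedIndexes fails the whole job once this many indexes have
	// exhausted their retries; nil could mean "never fail the job as a
	// whole — just record the failed indexes in status".
	MaxFailedIndexes *int32

	// BackoffLimitPerIndex retries each index independently instead of
	// counting all failures against one job-level backoffLimit.
	BackoffLimitPerIndex *int32

	// LeaveRunningOnFailure, if true, lets the remaining indexes keep
	// running (and still start) after the job is marked failed.
	LeaveRunningOnFailure bool
}
```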
D: The other thing I was thinking of is actually having a backoff limit per index, but that's very hard to implement, because we need to keep a count for every index.
D: Yes, and that wouldn't be visible in the status unless we limit the number of completions very, very much — like, it could be one million or something.
A: I guess, yeah — I mean, at the higher level, I feel that the more generic requirement is that I want each index to execute, and to set a different failure policy — basically, not like the job-level one; it basically says: if X number of indices fail, then the whole job fails, for example.
A: And so you could basically say — you can easily, if you want every index to actually execute — that the whole job will only fail if, if...
A: Anyway, I feel that this is something useful, and I guess we will just create an issue, link it to the existing ones, try to discuss a potential approach to this feature request, and then bring it to SIG Apps.
A: All right, we've got six minutes — yeah, ten minutes — for KubeCon. We've got our first —
D: Talk, yes. So, well, first of all, the next meeting, I guess, will be cancelled, because it's KubeCon. We got a slot to present our working group — we sent the request back when this was all starting up. So at this point, of course, I want to present a little bit of the history, our charter, and our pillars — sorry, work streams.
D: You know: Job API, job queueing, and specialized hardware support. Within each of those, if you have anything you would like to highlight — I can certainly mention the different proposals that have been sent, but I don't know if there is anything more concrete that can be presented. I think the most concrete things we have are in the Job API — I mean, the KEPs are merged and whatnot — and in terms of job queueing, we have —
D: We have the Kueue project within SIG Scheduling, and, yeah, we have topology-aware scheduling in specialized hardware support.
D: But if you have anything else that you would like included, please let me know — maybe just, like, a couple of sentences for the thing you want to highlight, yeah. Ultimately, this is a presentation from all of us, not just me. So please do let me know.
A: So I guess the goal is to, I guess, mention all things batch in this presentation: everything related to batch going on under the Kubernetes community and all the ongoing SIGs.
A: So yeah, please, as Aldo mentioned: if there are efforts related to batch, please mention them, so that they're presented in this maintainer-track presentation.
G: Basically, just one use case that we have in our company. Let's say you can queue stuff and so on — is it possible to also have some interface to do reservations, a reservation system? For example: I want to reserve four nodes to do, yeah, some batch job execution in a later time frame, so I reserve it, yeah.
G: And some level of control over the reservations. So we want, for example, to ensure fairness of reservations, and to limit reservations to a certain time — let's say three days, or a week in a month — and longer reservations get killed automatically after that, and stuff like that.
H: Either one is perfectly fine. Thank —
D: I'm currently working on it. Okay — I'm not sure if I want to share it with the entire group, because it might be a little bit too much.
D: Yeah, but if you want, let me know if it's something you want to include, and maybe you can share just — yeah.
A: I think it's fine to, like — if you have a draft, I mean, we could —
D: What I will include, for sure, is how people can get involved — because, as we said earlier, we want to invite people to present here, right — there are different frameworks — and to bring their feature requests. So that will be there too.
A: We have four minutes. I have one quick, high-level subject. I don't know what your thoughts are on how the working group has been going. I know that there's no specific focus, you know, or specific feature; it's more like a discussion forum for things batch. I find that useful.
A: I have been finding it useful so far. Perhaps, moving forward, once some of these ongoing efforts become more concrete, I don't know how we want to handle that — like, for example, dedicate time to every topic, or just leave it the way we're operating right now, on a week-by-week basis, where we just look at the agenda and encourage people to add items to the agenda for discussion.
K: Yeah — sorry, sorry, I didn't manage to follow the meeting completely, so apologies if you mentioned this, but one thing — I think I mentioned it to you offline as well — there are these updates for the different SIGs in Kubernetes for KubeCon: in the keynote, there will be a coordinated Kubernetes project update. I was wondering if you want to add a small line regarding this working group, to also advertise it there.
K: So this is a single presentation during the keynote, for like 15 minutes, where we give updates on the different projects under Kubernetes — the different SIGs, for example. Okay, and then, like, SIG Storage says what's new since the last KubeCon. So, if you want to give one for SIG Scheduling — I was thinking maybe it's another place where we can kind of advertise.
A: Yeah, I mean, I think that's good. How do we do that? Like, I just — there —
H: I'll do this again — again, just as feedback on the working group: we had a list of projects at the beginning, in one of the subgroups or something, and we haven't really seen many of those presentations. I'm wondering: are we waiting for folks to come to us, are we reaching out to those folks, or how is that working?
H: Because I would like to see more of those — and I still need to do one myself for our project — but I'm almost wondering about that list of projects that overlap the batch area in many ways; I'm not seeing many of those presentations in this forum. So I just kind of wanted to find out how that's being driven.
A: So there hasn't been — there was no outreach. So I guess that's a good point: to start basically asking the projects we listed to come to the working group and present. So far, the agenda, every other week, has been, like — you know —
A
There
was
already
items
in
the
agenda.
We
were
waiting
for
a
couple
of
presentations
from
nvidia
that
they
got
like
you
know,
pushed
a
couple
times,
but
I
guess
I
guess
like
we
should.
We
should
start
doing
that
and
maybe.
H
Happy
to
do
that
so
I'll
reach
out
to
you
to
see
how
to
move
okay.
A: Yeah, let's coordinate offline, and if you want to take up the task of reaching out to these various frameworks, that would be great. Okay, sure.
A: And after KubeCon, we will also have a presentation, maybe for Kueue, that being one of the frameworks — we already presented it in SIG Scheduling and SIG Apps, but I think we should do it here as well.