From YouTube: Kubernetes WG Batch Weekly Meeting 20220609
A: This meeting is being recorded. Hello everyone, this is today's Batch Working Group meeting. Today is the 9th of June, and we have a few items on the agenda today, from Diana as well as Aldo, so I'll give the stage to Diana, who is going to talk to us about MCAD.
B: There, okay, so hopefully you can see my screen. Thank you for allowing me to present this open source project that we've been working on. I'm very excited to be part of this working group, because I think there's a great overlap in some of the goals and challenges we're trying to address, so I'm really happy to be able to share some of the thoughts and ideas we had when we developed this open source project, and also to contribute some of the concepts and help drive some of them into the Kubernetes core system. So thank you very much.
B: The motivation for the development was to provide queuing for batch jobs and to build a framework where we can start applying various policies across the whole lifecycle of a batch job: not just the queuing, but also the dispatching, the preemption, and so on. We had some initial requirements calling for some very basic policies.
B: One of the very first policies, which was fairly obvious, is simply the ability to determine whether all of the compute resources this job would take are actually available, and to make sure there is enough capacity before you even start to create any of the objects of the job. We first started out doing this on a single cluster.
B: As we worked with the folks who were going to be using this, we also added the ability to provide policies for aging jobs. The idea is that you have some level of SLO you're trying to meet, and not just for high-priority jobs: there can also be low-priority jobs that you want to age as they move through the queue.
B: There are also some policies with regard to quota, specifically soft constraints, so that when there is low utilization on these clusters there can be borrowing of quota, but as the system starts to reach high utilization we keep every identified quota within its scoped definition. Another key thing, which was a really interesting challenge, was that we were working with some existing solutions and services, and those existing solutions and services were already using Kubernetes for their batch jobs.
B
Other
objects
other
than
the
the
actual
batch
v1
job
object
in
kubernetes
specif,
some
specific
things
they
were
doing
was:
they
were
using
deployments,
taking
advantage
of
some
of
the
benefits
there
and
also
stateful
sets,
which
was
an
interesting
challenge
there.
So
what
we
were
looking
at
and
when
we
were
evolving
this
is
how
can
we
support
these
solutions
that
are
already
using
higher
level
objects
that
don't
necessarily
have
run
to
completion?
B: Okay, and please interrupt if you have questions; I'm happy to take any. As we developed this, one of the things we were also considering is that if we're going to evaluate these objects that represent a whole batch job, it's not just the pod-creating objects: there are also custom resources that you may want to dispatch as part of the job, and there can be other non-compute resources.
B: So let's go ahead and treat all of those as one entity. The reason we wanted to do that is that we determine whether the whole job is runnable, and I keep highlighting the term "runnable" because essentially it turns into policies that are evaluated when a job reaches the top of the queue.
B: Once it's determined that the job passes all the policies, then and only then do we want to actually create the objects that are wrapped in this wrapper. And sorry, I forgot to mention the custom resource that we built for MCAD.
B: The definition is called an AppWrapper. We have all the objects listed inside the AppWrapper, and we only want to create them once the job is determined to be runnable. And I was really happy to see that a similar concept now exists today in the batch/v1 Job; that was pretty neat, because it's basically the same idea of suspending the desired state.
B: We hold off until we mark the object as something we actually want to run, and only then run the components of that job. So hopefully that makes sense.
So we have all the objects in the AppWrapper, and none of them get created until the wrapper is determined to be runnable. As I mentioned before, once we determine that it's runnable it gets dispatched, which means we simply unwrap all the objects, Deployments, StatefulSets, Services, and call into the API to create them.
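As a rough illustration of the flow described above, here is a minimal sketch of submitting an AppWrapper from the Python Kubernetes client. The group/version, the item-list field names, and the wrapped Deployment are assumptions for illustration only and may not match the actual MCAD schema; the point is that the wrapped object is recorded but not created until the controller dispatches the wrapper.

```python
from kubernetes import client, config

config.load_kube_config()  # connect using the local kubeconfig

# A Deployment wrapped inside the AppWrapper; it is NOT created directly here.
wrapped_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "batch-workers", "namespace": "default"},
    "spec": {
        "replicas": 4,
        "selector": {"matchLabels": {"app": "batch-workers"}},
        "template": {
            "metadata": {"labels": {"app": "batch-workers"}},
            "spec": {"containers": [{"name": "worker", "image": "busybox",
                                     "command": ["sleep", "3600"]}]},
        },
    },
}

# Hypothetical AppWrapper manifest: the controller creates the listed objects
# only after the wrapper is deemed runnable and is dispatched.
app_wrapper = {
    "apiVersion": "mcad.ibm.com/v1beta1",  # assumed group/version
    "kind": "AppWrapper",
    "metadata": {"name": "batch-workers-wrapper", "namespace": "default"},
    "spec": {"resources": {"GenericItems": [{"generictemplate": wrapped_deployment}]}},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="mcad.ibm.com", version="v1beta1", namespace="default",
    plural="appwrappers", body=app_wrapper,
)
```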
B: Let's see. As I mentioned earlier, the whole goal was to begin to create this framework so we can enable additional plugins and new policies, really more of a lifecycle management component for all kinds of policies you might want to apply. One of those we're considering right now is cluster scaling, similar to what I think Kueue has as well.
B: What I wanted to make clear is that we're not doing the same job as the scheduler as far as binding pods to nodes. We didn't want to be in that business, because lots of features already exist there and we just want to take advantage of them, so that has been entirely out of scope for this project from the beginning.
B: Okay, just for a quick view of how things work: we have the custom resource definition for the AppWrapper. You put all of your Kubernetes objects in the AppWrapper as item lists, and the controller we have running operates on those AppWrappers.
B: We also have a queue, a priority queue, so you can set priorities on the AppWrappers and they will be dispatched accordingly. Essentially the first policy I mentioned was available compute capacity: the default behavior is that we don't dispatch the job until we determine there are enough compute resources to run all of the pods expressed inside the job. And then, as I mentioned before, we eventually moved beyond running this just within a single cluster.
B: Now we can configure the MCAD controller to run in two different modes. One of them is what we call the dispatcher cluster, or the dispatcher controller. What I'm trying to show here is that these are multiple different Kubernetes clusters, including the ones we dispatch workloads to.
B: We call those the agent clusters. All three run the MCAD controller, but some of them run in agent mode and only one of them runs in dispatcher mode; the agents feed the available state, the resources that are available, back to the dispatcher MCAD controller.
B: So, as I mentioned before, we have this AppWrapper, which is a CR that wraps all the objects representing the job. The controller picks it up, determines whether it's runnable based on the policies that are configured, and performs that inspection. Then, once it's determined to be runnable, it basically just unwraps all of the objects listed in there and calls the Kubernetes API server to actually create them.
B: Okay, this is just a slightly more detailed example, in a single cluster. It's actually showing a custom resource; I'm not sure folks are familiar with the Ray project, but this is essentially a custom resource that you list in the AppWrapper.
B: It eventually creates pods, worker pods and a head pod, but what you define is essentially just the custom resource, the object you're trying to create. The controller will pick it up, determine whether there are enough resources, and if there are, it will actually create the high-level object you defined. That's what I wanted to make clear: we don't create the pods. Whatever object you want to get dispatched, we allow those objects to do the pod creation.
B: Okay, here's another, more complex example. One of our services was a Spark job, and it actually had quite a few objects; this is just a subset of them, but there were quite a few objects being created as one complete job: a couple of Services, a Namespace, a NetworkPolicy.
B: So you can pick up any scheduler that's already in use, because those pods should already be annotated, and that should be part of the definition you put in the AppWrapper.
B: Obviously we're creating a lot of objects and evaluating them, so your RBAC configuration needs to allow MCAD to perform those operations on those objects. And currently, as far as dispatching goes, the project has no microservices or admission controllers.
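Since the controller creates and deletes the wrapped objects on the user's behalf, its service account needs RBAC permissions for every kind it may find in an AppWrapper. A hedged sketch of such a grant is below; the service account name, namespace, and resource list are illustrative assumptions to be adapted to whatever kinds you actually wrap.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# ClusterRole covering the kinds the dispatcher may need to create (assumed list).
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
    "metadata": {"name": "mcad-dispatch"},
    "rules": [{
        "apiGroups": ["", "apps", "batch"],
        "resources": ["pods", "services", "deployments", "statefulsets", "jobs"],
        "verbs": ["get", "list", "watch", "create", "update", "delete"],
    }],
}

# Bind the role to the controller's (assumed) service account.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {"name": "mcad-dispatch"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole", "name": "mcad-dispatch"},
    "subjects": [{"kind": "ServiceAccount", "name": "mcad-controller",
                  "namespace": "kube-system"}],
}

rbac.create_cluster_role(body=role)
rbac.create_cluster_role_binding(body=binding)
```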
B: And then my final slide here is just to share some of the things I already see happening, and some additional things I'm hopeful about. I don't know how reasonable it is, but it was exciting to see suspend for Job. I think as people use some of these other objects, even though those objects don't run to completion, they use them in their definition of a job, so it would be nice to see some of the other objects get the same treatment.
B: That is, support suspend in objects such as the Deployment and StatefulSet, where we could pause the desired-state management until we're actually ready to create all the objects.
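For reference, the batch/v1 Job suspend field mentioned here already works this way: a Job created with suspend set to true is recorded, but the job controller creates no pods until the field is flipped back. A minimal sketch with the Python client, using a hypothetical job name:

```python
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

# A suspended Job: the object exists, but no pods are created while
# .spec.suspend is true.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "suspended-example", "namespace": "default"},
    "spec": {
        "suspend": True,
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{"name": "main", "image": "busybox",
                                "command": ["sh", "-c", "echo done"]}],
            }
        },
    },
}
batch.create_namespaced_job(namespace="default", body=job)

# A queueing controller can later "dispatch" the Job by unsuspending it.
batch.patch_namespaced_job(name="suspended-example", namespace="default",
                           body={"spec": {"suspend": False}})
```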
B: As you saw earlier with the Ray custom resource, there is one pod group, which is the head node, and then another pod group, which is the worker nodes, and together those make up one job. It would be nice to see multiple pod groups supported either as part of the batch/v1 Job, or maybe even to consider another object that manages those multiple pod groups. And that's my second, or rather third, comment: maybe in addition to, or maybe instead of, having multiple pod groups in a job.
B: Maybe we consider having a new object instead. I don't know; hopefully we can think through those things in this group if we want to do that. And also, regarding some of the features that we built: if folks find it useful to be able to create, as part of the job, non-compute resources, any Kubernetes object that doesn't create pods, we may want to include those as well.
D: While he's trying to figure out his audio, I have a question: can you explain a little bit more about non-compute resources? What's an example of one?
B: Good question. These are resources that don't consume compute, so things like Services. A great example is the one use case where we had all these objects getting created, and only two of them create a compute-consuming resource; by compute-consuming I really just mean pods.
E
D
B: So one of the aspects is to consider all the other objects, the ones that are not pods, and to pause the creation of those as well until the job is ready to run. So that's one of them. Sorry, does that make sense?
D
B: Oh, that's a good question. I've been thinking of them as only the pod objects, but I don't want to say for sure; I think it's about the actual creation, I want to say. In our model we don't actually create the Service until the AppWrapper is dispatched by the controller.
B: So I don't know exactly how it would manifest, compared to the way we've been able to wrap things, but it's a conceptual idea: basically, don't create any of these objects until the job is ready to run. And in the context of the Job, I believe that with suspend, when you submit a job, none of the pods get created.
B
C: Okay, I don't know what happened; I didn't change anything. I wanted to comment on the extension that you mentioned. Out of the core resources that we currently have, Deployments have the ability to be paused.
C: We picked a different nomenclature for the Job because it runs a little bit differently. Deployments can be paused, but I double-checked the other apps controllers, such as DaemonSets, ReplicaSets, and StatefulSets, and none of them has that capability. I think adding it, to bring them on par with the remaining apps controllers, seems like a reasonable improvement we could make, especially since over the past couple of releases we've been trying to align the controllers with each other.
C: So adding the pause capability to the remaining apps controllers seems perfectly fine, and it would most likely be called pause rather than suspend, just a tiny difference. Deployments can already be paused, which basically means... I can't remember exactly what it does, but it stops managing the rollout; I'm not sure whether it kills the currently running pods. I didn't check, I don't remember.
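For comparison, pausing a Deployment today stops the controller from rolling out further spec changes, but it does not remove the pods that are already running, which is one way it differs from Job suspend. A small sketch, equivalent to `kubectl rollout pause`, using a hypothetical deployment name:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Pause: rollout of template changes stops, but existing pods keep running.
apps.patch_namespaced_deployment(
    name="batch-workers", namespace="default",
    body={"spec": {"paused": True}},
)

# Resume rollouts later (equivalent to `kubectl rollout resume`).
apps.patch_namespaced_deployment(
    name="batch-workers", namespace="default",
    body={"spec": {"paused": False}},
)
```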
C: For your last point, about a Job or something else creating additional resources whenever it is created: that falls into some kind of templating mechanism, and the community has stayed away from that, because the templating mechanisms and possibilities that already exist are probably broad enough that we didn't want to invent our own. Especially since in OpenShift we have a templating mechanism, and it has a very limited set of substitutions and of what you can do with it.
C: When we were initially discussing having similar functionality in Kubernetes, the discussion ended up being about how broad a set of templating functionality you would want: whether you want just a simplistic "replace these values with those other values", or you want to go further, "I want to be able to conditionally add these or those elements", or maybe even one step further by allowing some kind of functions within the templates.
C: And, I've totally forgotten it, but there is a Python templating engine whose name I can't recall that has very rich templating functionality, including invoking functions, conditionals, replacing values, and so forth.
C: So that was one of the reasons why we didn't want to have anything like that in Kubernetes, and probably never will. And the multiple pod groups, if I remember correctly we were discussing this, someone was showing this on one of our previous calls.
C: I have one other question, unless someone else wants to go first, just to clarify: throughout your presentation, when you were talking about a job, that was a job in the sense of the multi-cluster dispatcher, not the Kubernetes Job?
A: Cool, thanks, Diana. We'll move on to the next agenda item. We have Aldo, who wants to talk about retriable and non-retriable, I don't know whether I'm pronouncing that correctly, pod failures for Jobs. So yeah, the stage is yours. Do you want me to give you co-host?
D: I hope we have more time later to continue talking about Diana's presentation, we'll see. No... oh yeah.
D
A
A
D: I had the same issue like two days ago; it was very weird. Okay, I can just talk over it, and I guess the KEP is visible for everyone. So, sorry, I was saying that we're working on this proposal for enhancing how jobs are retried or stopped. Currently, as you may know, a job only finishes running when either it completes successfully or it reaches the back-off limit.
D: The problem with the back-off limit is that all failures are treated the same. It could be an infrastructure error, where the kubelet dies, or the kubelet evicts the pod, or the kube-scheduler preempts the pod, things like that; they all fall into the same bucket as failures coming from the user's binary itself.
D: So we want to introduce this distinction, so that users have more control over when a job should finish and when we should just keep retrying indefinitely.
D: There are basically two use cases from a user's point of view. If you are the one who wrote the binary, you might know exactly which exit codes you return: some of them, you know, are not retriable, and some of them are retriable, so you can express that. But there is also another use case where you are an infrastructure provider, or the administrator of the cluster, things like that.
D: You don't know what people are running, and you still want to limit the retries for user errors, but you want to make sure that if the infrastructure fails, they have enough retries, or those failures are simply ignored. So we're introducing this API in the Job for you to define exactly which policy you want. If you can scroll down to user story number two, maybe, I think it's a little bit lower.
A
D: So I think the most problematic API, maybe, is this one, so I want to highlight it.
D: The problem here is that when there are exit codes it's easy to manage, the kubelet records the exit codes, but infrastructure errors are not necessarily easy to identify. Currently the kubelet introduces certain status reasons for when a pod fails, so we're hoping to add some rules based on those statuses.
D: The API suggested now is kind of a reduced version of label selectors, or label matchers, so that's the proposal over there. One important thing to mention is the action: as you can see in line 383, there is the action, which could be Ignore or it could be terminate.
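As a rough illustration of the rules being described, here is a sketch of a Job with one exit-code rule and one rule keyed off a pod condition. The field names follow the shape this proposal later took in batch/v1 as podFailurePolicy (actions FailJob, Ignore, Count), so treat it as illustrative rather than the exact API text under discussion at the time; it also requires restartPolicy Never and a cluster version with the feature available.

```python
from kubernetes import client, config

config.load_kube_config()

job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "failure-policy-example", "namespace": "default"},
    "spec": {
        "backoffLimit": 6,
        "podFailurePolicy": {
            "rules": [
                {   # Non-retriable user error: exit code 42 fails the whole Job.
                    "action": "FailJob",
                    "onExitCodes": {"containerName": "main",
                                    "operator": "In", "values": [42]},
                },
                {   # Infrastructure disruptions do not count against backoffLimit.
                    "action": "Ignore",
                    "onPodConditions": [{"type": "DisruptionTarget",
                                         "status": "True"}],
                },
            ]
        },
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{"name": "main", "image": "busybox",
                                "command": ["sh", "-c", "exit 0"]}],
            }
        },
    },
}
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```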
D: So that's the mechanics of the API. Now, as I was saying, back to status reasons: the kubelet introduces status reasons, which is very useful, but things like the kube-scheduler don't introduce status reasons today. Then we have, for example, the pod garbage collector, which also removes pods; it also doesn't introduce a status reason. So the second change we're suggesting is to introduce a delete option in the delete API.
A
D
C
D: That way the pod garbage collector can record a reason, things like that, and then in the Job you can use those reasons. So that's pretty much the entire proposal. There are a few risks; in particular, this reason field is not very well documented.
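To see why leaning on status reasons is delicate, you can list what failed pods actually report; the reason strings (for example "Evicted") are free-form values set by whichever component failed the pod, not an enumerated API. A small sketch that just prints them:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Print the free-form status.reason recorded on failed pods in a namespace.
# These strings are set by the kubelet or other components and are not a
# formally enumerated, versioned list -- which is the risk discussed here.
failed = core.list_namespaced_pod(namespace="default",
                                  field_selector="status.phase=Failed")
for pod in failed.items:
    print(pod.metadata.name, pod.status.reason, pod.status.message)
```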
D: The reasons could change; we don't have a proper list of all the reasons that Kubernetes introduces, and it's not really reviewed whether these values change and so on. So maybe the risk is there, and we'll have to improve the reliability or the backwards compatibility of those fields.
D
And
maybe
another
question
is
what
happens
with
index
jobs
so
in
index
job
you
could
have
a
failed
index,
but
you
still,
I
mean
it
fails
on
the
user
error
on
a
given
index,
but
it
might
still
succeed
in
the
rest
of
the
indexes.
We
are
aware
of
that
problem,
but
we
we
want
to
start
with
something
so
we'll
deal
with.
We
want
to
deal
with
that
in
the
next
release.
As
a
separate
cap.
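For context on the indexed case: an Indexed Job gives each pod a completion index via the JOB_COMPLETION_INDEX environment variable, and every index has to succeed for the Job to complete, which is why a single bad index is awkward to handle with a whole-Job policy. A minimal Indexed Job for reference:

```python
from kubernetes import client, config

config.load_kube_config()

# Indexed Job: pods receive indexes 0..completions-1 and each index must succeed.
indexed_job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "indexed-example", "namespace": "default"},
    "spec": {
        "completionMode": "Indexed",
        "completions": 5,
        "parallelism": 2,
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{"name": "main", "image": "busybox",
                                "command": ["sh", "-c",
                                            "echo processing index $JOB_COMPLETION_INDEX"]}],
            }
        },
    },
}
client.BatchV1Api().create_namespaced_job(namespace="default", body=indexed_job)
```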
C
D: I think that's all the caveats I can think of. Oh, there is one more: currently the pod garbage collector could delete a pod before we take a decision based on the status reason or the phase, but since we already have finalizers for tracking, the job controller can account for the pod before it is removed.
D: I would very much like everyone here to provide feedback on the KEP, on the API, and I will bring this up to SIG Apps next Monday. I also need to discuss it with the API reviewers, because this reasons API might be a little bit tricky to support long term.
D: But I hope that can all be solved before the enhancements freeze. Mikhail is on the call as well; he's leading the development of this feature, so he's also here for any questions you might have.
F
D: No, I don't think so, because in the case of, for example, a Deployment where you have a Service, you always want the Service to be running. There is no end state of failure, at least currently, so it doesn't apply; you usually just want to keep retrying forever for those other objects. Job is just different because it has to finish.
D
F: There are things like, I think you remember the discussion we had in the SIG Scheduling call a couple of weeks ago, where there are some cases in which the scheduler is actually putting a pod on a node, the kubelet keeps rejecting it, and the pods will continuously try to come up. Can we do something in the controller which actually says: I cannot retry it because of this particular reason, and this is the reason that the kubelet has set in the state as well?
D: I see. So the proposal here is to put these fields in the Job API; the Pod API is not changing. The status reason already exists on the Pod and we are using that, and we're also changing the delete options API.
F
A: I think, in the context of Ravi's question, what happens in scenarios like this, where the kubelet itself is rejecting pods, is that the scheduler doesn't have visibility into what caused that, and it just repeatedly keeps placing pods, or if it's part of a Deployment it keeps placing the pods on the nodes, and there's no kind of loop back from the kubelet to the scheduler. This could be applicable to topology-aware scheduling, which is one of the areas where we're trying to solve this, or it could be that volume creation did not succeed.
A
D: I'm happy to discuss this in another meeting. I was talking about something very similar with Alexander, whose last name I don't remember.
D: Yes, we were talking about the possibility of descheduling a pod, or the kubelet doing so, because if you just fail the pod, then there is a new pod and we have lost all the information.
A
D: Any concerns with the API? Do you think it's reasonable to rely on the reasons for this kind of behavior for infrastructure errors?
D: I guess silence means that it's all good. I'll probably have to...
A: Thank you. Any questions for Aldo, or maybe for Diana, before we wrap up today's call?
A
D: Diana, do you think... so Kueue, I think, came along later than your project. Do you think there is a possibility of alignment, so that... oh.
B
B
A: Cool, thanks everyone, thanks for your time, thanks for attending. Hope to see you in two weeks. For people who have agenda items in the future backlog, if they can maybe start putting them up as agenda items and preparing for them for subsequent meetings, that would be great. Hope to see you all in two weeks. Bye.