From YouTube: Kubernetes WG Batch Weekly Meeting for 20221110
A
Good morning, good evening, good afternoon, depending on where you are. Today is November 10th, and this is another of our batch working group regular meetings. My name is Mate and I'll be your host. Today we have a pretty packed agenda, so let me pass it over right away to Diana to talk about the phase two recommendations. Let me stop sharing; you should be able to share yours.
B
Great. So, just a recap: a meeting or two ago we started to ask what the next steps for the batch working group are, and I think the next phase is really to start thinking about some recommendations coming out of the working group. I've started to collect some thoughts and ideas to prepare a document for us to iterate on. I don't have anything polished yet, but our next meeting is not until after Thanksgiving in the U.S., so I at least wanted to share my collection of information and gather some quick feedback from the group, to start adding context to some of these items. So this is really just a discussion, and I'll probably just be adding some notes. I've also started a document; it's not worth publishing yet, but as soon as I gather feedback from today I'll need to make it public, and I'll need some help with that.

So the goal for today is really just to capture a lot of the notes that I had, put some organization on it in the next couple of weeks before we meet again, and then start to publish it so folks can iterate on it. That's kind of the goal today. Any questions before I get started?
C
Yeah, I just wanted to give more context. If folks remember, the exit criteria for the batch working group, or what we characterized as the early exit criteria, is coming up with at least recommendations for how we can improve core Kubernetes for batch workloads, and this is basically what I guess Diana is referring to as the working group batch recommendations.
C
So that's the context, and that's why the title looks like what it is. In the charter we discussed that potentially we could, you know, start owning some code, etc., but that's, I guess, down the road. The first step is to come up with recommendations and see who can execute on them. If existing SIGs execute on those recommendations, then the working group has delivered on its charter.

If not, if we came up with, for example, a new controller that needs ownership and doesn't, you know, fit in the existing SIGs, then there is the potential for the working group to evolve into a SIG. But it's too early to discuss that now; I just wanted to highlight what we mean by recommendations here and where that is coming from.
B
Yes, that's great, thank you for that context. I'm hoping there will be a lot of interaction today, to talk about, you know, things that we might be missing, or things that we want to add or reshuffle. So I did want to share that. For folks that haven't visited the charter lately, there was this initial definition of deliverables about where recommendations might land for the different SIGs, so this was just something I wanted to capture that's already been noted in the charter.

Related to SIG Apps, some items under there: updating the Job API, or improving the performance of the job controller so that we can scale to lots of jobs. Again, these are just things that we are thinking through, where we might make a recommendation; we still need to define what those are, but these are some possibilities.

And then SIG Scheduling, right: the set of APIs for queuing, for job queuing; autoscaling capabilities; job-level provisioning, which I think is also for the cluster autoscaler; and of course the support for specialized hardware. Then the batch job specification, like multiple pod instances (I've referred to that down below as well), as well as non-pod resources. One example of the kind of Job API work in this bucket is sketched below.
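For concreteness, here is a minimal sketch of one Job API enhancement in this bucket: an Indexed Job, where each pod receives a stable completion index (GA since Kubernetes 1.24). The name, image, and counts below are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-example        # hypothetical name
spec:
  completions: 8               # eight tasks in total
  parallelism: 4               # at most four pods run concurrently
  completionMode: Indexed      # each pod gets a unique completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        # The index is exposed to the container as JOB_COMPLETION_INDEX.
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```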
B
One thing I wanted to take from here was: as we've thought through and talked about things over the last couple of months, are there any other SIGs that might have some impact on the batch working group, or that the batch working group may have an impact on? Have we thought of any other SIGs that might have some recommendations coming from us? I wanted to start out with that, to see if there are any others.
D
I guess my one question is whether you've gathered any requirements from existing batch projects. We talked about multi-cluster, and I know that's kind of a whole other thing, but if we're talking about trying to extend batch to multi-cluster, it might make sense to expand on that. I don't know what the reason was for not having that at first.
B
So, for folks that are more integrated in the ecosystem: is there a SIG that addresses multi-cluster?
A
All of it, and all the necessary channels and all that data, you'll find in their readme.
D
Are these recommendations about work that's needed, like what they need, or about what code the batch working group will also touch or be involved in? Because I've seen at least some SIG Apps and SIG Storage issues come up, and I don't know if we need to involve them in this part of it. I guess that's what I'm curious about, because it does seem like batch touches everything.
B
That is a good point. I am not as familiar with the requirements for SIG Storage; sorry, for storage. I do know that's part of, you know, the batch provisioning of jobs. I'm assuming there's a SIG Storage aspect to this as well, so yeah.
D
You don't have to update the doc; I was just curious. I don't know if we have any recommendations for SIG Storage, but I'll talk later about some issues I've found with SIG Storage and running batch jobs with volumes, and I don't know if those are recommendations we want to give to SIG Storage as well, or if it's unrelated to this. I don't know, so it's more of a question.
C
I don't think it's unrelated; my concern is with, like, stretching ourselves thin. But at the end of the day we want to collect all these requirements, so we definitely want to have that tracked, so that down the road we can get to these things at some point. Or, if someone is taking the lead from the batch working group, then that is definitely an option as well, right, I guess.
C
My hope here, from the past (I don't know how many) months we've been running this working group, is actually exactly this: we had this high-level idea of what the missing things are, and the hope is to make those things more concrete, right, like these requirements, and to start, you know, not necessarily always finding solutions for them so much as highlighting these gaps, continuing to track them, and then potentially evolving them into concrete recommendations or even solutions.
C
It's not a strict yes or no, I guess, but we do need to track them, definitely. I just don't know if we're going to attack all these things at once. So that's it, I guess.
B
Right
and
we
can
lay
them
out
like
a
road
map
for
the
batch
watching
route
as
well
so
and
which
ones
we
think
we
want
to
focus
on
first
or
if
there's
almost
some
items
that
may
not
be
on
the
first
focus
and
somebody's
willing
to
take
it
on.
B
Then that's something we can add as well. Okay, so I kind of tried to go through and capture the big ones. As you were saying, it's a big umbrella of things to consider, so I was trying to put all of these in one spot. I don't have a very good way of organizing this yet, in my head at least, so I tried to do some organization; I guess we can swizzle it around, and your feedback is fine.
B
This is really just for us to capture the breadth of the set of work, or requirements, or, you know, designs of things, so that we can put some organization to it. So I started out with kind of highlighting some features or enhancements (some of these are already covered in the previous list), and I just started to jot down as many of these as I could discover from our past meetings.
B
And then there are some other requirements that I think I've seen, such as being able to have associated dependencies, maybe between jobs or between tasks within a job. So that's been expressed as an enhancement or a feature. Again, there's throughput, and that's kind of captured up above, right: we want to be able to handle lots of jobs and expand what we can do to help improve the throughput of that.
B
So I added that as well. Topology-aware: I think we've talked about that, with the different types of resources that are available on the nodes. Cluster autoscaling: I think we're very familiar with that, with regards to being able to meet demand, the ability to scale up a cluster. And then the storage use cases that we just brought up. Any other high-level feature descriptions that folks have been thinking through, or have solved in their projects, that are missing from this list?
F
I think the NUMA requirements are missing, but I don't know if that falls under topology-aware, yeah.
C
Is it only NUMA? When we talk about topology here, if it's NUMA, then we should just call it NUMA-aware, yes.
B
I
was
thinking
I'm
aware,
but
I
didn't
want
to
be
exclusive,
so
I'll
keep
that
and
if
it
becomes
a
bigger
scope,
we
can
adjust
those.
C
So
those
are
like
high-level,
extremely
umbrellas
right
like
when
we
talk
about
cluster
or
scaling.
That's
just
basically
it's
a
massive
thing.
C
What
do
we
want
to
call
out
specific
things
underneath
each
one
of
them.
C
Yeah, exactly: bulk provisioning APIs, let's call them, and then within them they could be all-or-nothing, as one bulk operation.
C
I was thinking about, like, reservations, end to end, which is both scheduling and provisioning; it involves both.
B
Yeah, so those are really valuable, you know, additional policies that we can add for the queuing support, so I kind of stuck those in a different section. But let me capture what you're suggesting, which is, you know, preemption within a class (and I haven't gone through this section yet, so I apologize): preemption support within a queue, right.
F
I want to add one more general thing: we have the NUMA implementation, which currently is kind of broken because there is no scheduling support, and then we have this KEP for dynamic resource allocation, which, if I'm not mistaken, is going into alpha in 1.26.
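For orientation, the dynamic resource allocation KEP adds a new resource.k8s.io API group in which a claim is allocated by a vendor driver and referenced from the pod spec. A rough sketch following the 1.26 alpha shape; field names may change while the feature is alpha, and the class and object names here are hypothetical:

```yaml
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: gpu-claim                    # hypothetical claim
spec:
  resourceClassName: example-gpu     # class served by a DRA driver (hypothetical)
  allocationMode: WaitForFirstConsumer
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-consumer
spec:
  restartPolicy: Never
  resourceClaims:                    # pod-level list of claims the pod uses
  - name: gpu
    source:
      resourceClaimName: gpu-claim
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo using the claimed device"]
    resources:
      claims:                        # the container opts into the claim by name
      - name: gpu
```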
F
And
then
we
have
this.
This
doc
from
I
think
it
was
Marlo
about
how
to
do
how
to
do
device
management,
so
I
think
one
one
good
outcome
for
the
working
group
would
be
to
unify
all
those
initiatives
so
that
you
know
we
don't
end
up
in
adding
three
different
implementations
for
device
management
into
cubelete
and
the
rest
of
the
system,
so
just
yeah
kind
of
unified.
All
these
proposals
that
are
in
a
way
kind
of
opposite
or
they
are
pulling
in
different
directions.
F
So
what
is
the
existing
pneuma
I
think
called
the
topology
manager,
the
other
one
is.
C
So the doc that Marlo wrote is an enhancement to the Topology Manager; it's not a replacement for dynamic resource allocation.
E
I
think
the
dog
that
she's
talking
about
how
do
we
move
some
of
the
components
and
the
resource
managers
that
exist
within
cubelet
outside
of
cubelet,
so
people
have
more
flexibility.
Dynamic
resource
allocation
is
again
maybe
an
alternative
to
device
management.
We
have
device
plugin
API
and
we
have
this
other
mechanism
to
expose
Hardware
within
the
cluster
that
have
different
requirements.
You
know
if
you
need
to
maybe
separate
the
life
cycle
of
the
Pod
from
device
instantiation
for
example,
then
you
could
use
that
these
are
again
all
very
separate
areas.
E
Topology
manager
is
related
it.
It
makes
sure
it
aligns
a
CPU
manager,
device
manager,
memory
manager,
but
all
this
happens
within
cubelet,
so
these
are
all
very
related,
but
at
the
same
time
it's
very
important
to
identify.
How
do
we
move
ahead
in
the
community
and
you
know
identify
the
right
path
there,
a
bit
conflicting,
but
at
the
same
time
they
are
different
as
well
in
their
own
way.
Thank
you.
So.
E
You
can
mention
the
discussion
about
moving
resource
managers
outside
of
Kubler.
C
I
think
it's
called
resource,
plugin,
yeah,
yeah,
yeah
and
again
it's
basically
evolving
the
topology
manager
to
be
pluggable.
I!
Guess!
That's!
That's
my
understanding.
Yeah.
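For context, the NUMA alignment under discussion is what the kubelet's Topology Manager does today, configured per node through the KubeletConfiguration, with the caveat raised above that the scheduler knows nothing about it. A minimal sketch (the reserved-CPU value is only an example):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require CPU, memory, and device allocations to land on one NUMA node.
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod        # align the whole pod, not each container
cpuManagerPolicy: static         # static pinning is needed for CPU alignment
reservedSystemCPUs: "0"          # example: keep CPU 0 for system daemons
```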
B
Okay. As I mentioned before, I was trying to capture the specific aspects of the different sorts of policies, and I call this dispatching (we could call it release), but it's essentially that as the queuing happens and the workload is released from the queue, you know, there are associated policies for that. You have the different queues.
B
You
can
have
priority
service
levels
based
you
want
to
address
aging
and
starvation,
then
there's
four
minute
management
policies
associated
with
how
jobs
are
released,
from
the
queue
Fair
sharing
and,
of
course,
being
able
to
do
preemptions,
as
we
mentioned
earlier,
within
a
queue
or
priority
class
and
preemption
of
PODS
within
a
job.
I
also
mentioned
that
further
down
here
with
regards
to
supporting
you,
know,
min
max
type
of
workload.
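As a reference point for the preemption vocabulary here: priority classes are the primitive core Kubernetes offers today, and queue-level fair sharing, aging, and starvation handling would have to be layered on top by a queuing controller. A sketch with hypothetical tier names:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-high                       # hypothetical high tier
value: 1000                              # higher values win
preemptionPolicy: PreemptLowerPriority   # may evict lower-priority pods
globalDefault: false
description: "High-priority batch pods; may preempt batch-low pods."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low                        # hypothetical low tier
value: 100
preemptionPolicy: Never                  # waits in line instead of evicting
globalDefault: false
description: "Best-effort batch pods."
```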
B
I
beyond
the
policies.
I
was
thinking
about
the
batch
workload
or
the
batch,
a
batch
workload
or
a
batch
custom
resource
I.
Think
we
mentioned
that
earlier.
There
may
be
some
concept
of
creating
a
new
CR
or
a
new
resource.
I
should
say
where
the
the
and
related
to
that-
and
we
kind
of
have
some
aspects
of
that
where
you
know
in
the
job.
B
Everybody
knows
that
you
can
actually
hold
the
job
and
not
release
it
to
actually
create
the
pods,
so
some
concept
of
that,
where
there's
a
hole
before
creating
the
resources,
there's
also
the
concept
of
pausing
a
workload
or
a
job,
and
that
can
be
in
multiple
ways.
You
can
pause
it
to
so
that
it
doesn't
create
more
pods
and
have
it
scale
down
or
you
can
just
kill
the
job
and
put
it
back
in
the
queue.
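The "hold" mechanism referred to here is the Job API's suspend field: a queuing controller can create the Job suspended and flip the flag to release it, and setting it back to true terminates the running pods while keeping the Job object, which matches the "kill it and put it back in the queue" behavior just described. A minimal sketch with hypothetical names:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-job          # hypothetical
spec:
  suspend: true             # the job controller creates no pods until this is false
  parallelism: 2
  completions: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo running"]
```

Releasing it is a single patch, e.g. `kubectl patch job queued-job --type=merge -p '{"spec":{"suspend":false}}'`.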
B
So
you
can.
You
can
do
all
of
the
resources
for
that
or
a
part
of
that,
and
then
here's
kind
of
where
I
was
kind
of
associating
this
virtual
preemption
within
a
job
which
is
scaling
right.
You
have
a
workloads
and
a
lot
of
these
workloads.
They
do
scale
up
in
and
out
right,
so
being
able
to
express
reduction
and
increasing
the
resources
for
that
workload
and
being
able
to
cue
those
requests
and
release
them
from
within
the
queuing
system
is
a
possible
area.
B
We
could
look
at
and
then
scaling.
This
would
be
scaling
across
different
instance
types
or,
in
other
words,
a
different
pod
templates.
So
some
pod
templates,
you
want
to
scale
at
a
certain
rate
compared
to
the
rest
of
the
Pod
templates
inside
a
job,
so
it
can
get
pretty
complex
depending
on
the
workload.
B
So
those
those
are
those
are
kind
of
some
of
the
the
areas
that
I
was
capturing
with
regards
to
a
workload
that
was
being
managed
by
by
the
system
right,
I.
C
I have one, like, macro question, I guess. If you go up a little bit: most of the things that we have here are kind of associated with a specific thing we have in Kubernetes, right? Like, for the Job API we have batch, we have the job controller there, and we can discuss it in that context, right? Same thing with increasing the throughput.
C
For
example,
Network
aware
we
can
think
of
that,
maybe
in
the
context
of
the
scheduler
new
amount
in
the
context
of
the
node
cluster
Auto
scaling,
we
know
we
have
cluster
Auto
scale
available
already
storage.
Again
we
have
a
specific.
You
know
we
have
controllers.
We
have
support
now
with.
A
C
We
know
kind
of
the
requirements,
but
we
don't
have
a
like
a
canonical
playground
to
actually
execute
those
ideas
and
put
them
in
motion
like
when
we
talk
about
the,
for
example,
the
working
group
Charter.
C
We want to make recommendations, but when we talk about queuing, we're making recommendations to whom, right? There is nothing in Kubernetes core, a controller or a group, that we're going to make recommendations to. And so my suggestion here (and I want to plug the work that we're doing with Kueue, with a K) is that maybe we're getting to a time where it is possible for us to invest in a controller to actually put these things into practice and move them forward in a more material way, with the hope that this could eventually become something like a de facto standard with Kubernetes.
C
If,
if
we
kind
of
like
at
the
higher
level,
agree
with
the
model
that
we
have
right
now,
which
is
if
we
want
to
integrate
something
with
kubernetes,
we're
not
going
to
invert
like
invent
a
new
schedule
or
an
auto
Square,
so
we
need
something
that
works
with
the
current
ecosystem
that
we
have
within
code
kubernetes,
and
this
is
what
Q
is
trying
to
do
like
suggesting
that
we,
we
kind
of
start
discussing
these
things
in
the
context
of
care,
how
we
evolve
to
support
multi-cluster
storage
requirements
and
so
forth.
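To make the Kueue model concrete: jobs are created suspended and pointed at a queue, and Kueue unsuspends them when quota is available. A minimal sketch, assuming the early (v1alpha2-era) Kueue API shape and queue-name annotation, which may have changed in later releases; all names are hypothetical:

```yaml
apiVersion: kueue.x-k8s.io/v1alpha2
kind: LocalQueue
metadata:
  name: team-queue
  namespace: batch-jobs
spec:
  clusterQueue: shared-quota     # the ClusterQueue that holds the actual quota
---
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-workload
  namespace: batch-jobs
  annotations:
    kueue.x-k8s.io/queue-name: team-queue   # which queue admits this Job
spec:
  suspend: true                  # Kueue flips this to false on admission
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "echo admitted"]
```

The point of the design is that nothing new is invented at the pod level: admission is just the existing suspend semantics driven by a queue controller.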
C
If we don't have something, like, you know, tangible to work with, right, then what would be the outcome for queuing support? If we don't have something tangible to make suggestions to, like a specific controller, etc. So I don't know what your thoughts are here. This is my proposal; it's in the very early stages, and we can take it in whatever direction we want.
F
So, if I'm hearing correctly, one of the questions here is whether we want to make Kueue the recommended approach for Kubernetes, and potentially merge it into kube-controller-manager, let's say, or whether we want to be more agnostic and just push some bits of it into the controller manager, but still keep most of it separate.
C
I
I
guess
when
we
talk
about
like
we
have
policies
Fair
sharing.
Are
we
we're
discussing
them
in
what
context,
for
example,
like
we
can't
talk
about
Theory,
but
then
what
right,
like?
What
recommendation
are
we
going
to
make
to
who
right
like
who?
How
is
the
community
going
to
benefit
from
these
recommendations?
I
mean
curing
theory
has
been
fully
studied
right,
like
in
the
in
literature,
and
they
have
tons
of
schedulers
out
there
Legacy
ones
and
so
to
make
actual
contributions
to
the
community
and
make
some
some
something
more
concrete.
C
So
we
discussed
that
when
we
started
the
batch
working
group,
the
spots
with
the
charter
discussion,
we
couldn't
say
like
because
we
were
asked
for
an
explicit
exit
criteria
that
is
achievable
so
executing
on
every
single
thing
we
want
to
do
was
not
deemed
like.
You
know,
achievable.
That
is
like
because
we
don't
have
the
Mandate,
for
example,
to
execute
on
all
these
groups,
so
that
was
like.
Basically,
the
first
step
is
to
okay
to
understand
what
are
the
gaps?
What
is
it
that
we
want
to
do
with
various?
C
You
know,
like
you
know,
controllers
that
we
have
in
core
kubernetes.
That's
why
we,
the
consensus
from
the
steering
committee,
all
the
way
down
to
people
who
are
trying
to
start.
The
working
group
is
to
start
at
least
with
recommendations
as
an
early
exit
criteria.
C
It
doesn't
mean
that,
like
like
we're
already
executing
on
the
job
API
improvements,
for
example,
so
starting
with
the
recommendations
indefinitely,
anyone
who
is
recommending
something
happens,
you
will
find
someone
who
is
excited
to
actually
push
it
forward.
Hopefully,.
H
Yeah, I see it in SIG Autoscaling: we have plenty of great ideas, but not that many people to implement them, and that was part of my concern. If this group focuses only on recommendations and says that it doesn't have a mandate to do anything in, I don't know, SIG Scheduling, SIG Autoscaling, whatever... everyone has a mandate to contribute, to send PRs, and to be a vocal advocate for the changes, and I don't think that the charter of the group should stop at the place where the recommendation is made and then be left alone. Additional steps should follow, like advocating for it, pushing it, asking about it, and trying to make it happen. And if a particular SIG has some concerns about a recommendation, then this group should iterate on that feedback and possibly change the recommendation, change the design, or whatever.
C
Like I said, this was the early exit criteria that was basically reasonable to achieve, okay? And it doesn't mean that we stop there. I guess in my mind it follows that if we make a recommendation, we make it because we're excited about it and we need it, and we will definitely continue to advocate for it.
H
One piece of the ideas that I have is: what about work outside of Kubernetes? For example, is this group responsible for making asks of cloud providers to expose some APIs that would make things much better? For example, from the cluster autoscaler point of view, this provisioning API would be better executed if cloud providers provided some...
B
I
I,
don't
I,
don't
I.
Think
it'll
be
great.
If
we
could
do
something
like
that,
but
I
I,
don't
I,
I
I
think
it's
something
we
could
consider
and
how
how
effective
we
could
how
much
more
effective
we
could
be
with
the
standardization
of
something
like
that.
I
think.
B
Okay, so we have two more minutes. I kind of defer to the organizers of the group to understand how it works with regards to recommendations for supporting, you know, the Kueue implementation.
B
My
personal
recommendation
would
be
great.
We
have
something
to
work
with,
let's
continue
and
iterate
on
it,
but
I
don't
know
what
the
process
is
to
officially
accept
that
so
I'm
going
to
defer
that
to
the
organizers
of
the
group-
and
maybe
you
guys
can
give
us
some
insight
on
how
that's
usually
handled
or-
or
this
is
completely
new.
We
could
just
need
to
make
a
decision
or
a
vote
or
just
sit
here.
B
I
just
I,
don't
I,
don't
have
that
expertise
or
experience
so
I
know
a
few
of
the
organizers
here
are
online.
How
does
it?
How
does
it
usually
manifest
with
regards
to
support
deciding
on
a
technology
and
moving
forward
with
it.
F
I
wonder
if
we
can
do
some
sort
of
pilot
with
one
of
the
of
the
high
level
items,
for
example
in
the
in
job
I
I
have
some
ideas
of
features
where
we
know
we
need
and
I
have
a
rough
timeline
for
it.
So
maybe
I
I
could
share
that.
F
That
could
be
the
pilot
that
we
can
replicate
in
other
domains.
The
second
one
I
can
think
of
is
the
the
normal
domain.
I
think
there
are
clear
requirements,
so
we
we
could
come
up
with
with
a
dog
with
a
timeline
dog
and
gather
feedback,
and
so
on
and
so
forth.
G
I can just give a quick comment: at least for the foreseeable future, the work that I do on queuing would have to be within Armada, for a couple of reasons. One, we have to do queuing outside of the cluster, so it's a bit of a mismatch, at least for now, and we have to make the improvements we do within Armada available there. But I would love for it to be the case that the work I do can also contribute to things like Kueue too; maybe there's a way in which we can abstract away the queuing logic into something like a library, and we can then build our different products using the same core queuing library.
B
Okay,
okay
I
know
we're
at
over
time,
so
I
appreciate
the
feedback
we
didn't
get
through
some
of
this,
but
you
got
through
a
lot
of
it
and
I
think
this
is
a
good
start.
I'm
going
to
try
to
capture
as
much
as
I
can
put
some
of
the
questions
that
we
have
and
a
document
and
then
Abdullah
I
guess
I'll.
Send
it
to
you
to
make
it
public
in
the
work
group
I'm,
not
sure
how
that
works,
but
I'll
just.
A
Just share it with the mailing list, and then automatically all the readers of the mailing list will get access to the document. It's good also to share it with this list and with the Kubernetes one; that's usually sufficient. Also make sure to link it in the doc; I left a place there for, you know, also linking it in the...
A
Thank you very much, all, and enjoy the rest of your day.