From YouTube: Kubernetes SIG Node 20220118
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Do you want to announce another one? I noticed this announcement.
B: Oh yes, give me one second. I guess we've started. The recording today is Tuesday, January 18th, 2022. Welcome to SIG Node. A reminder that we have a code of conduct, and please be excellent to each other. I am just... ah, I got the wrong mailing list. We're going to start with announcements today, and I wanted to share a reminder that we have a production readiness review freeze coming up for enhancements, and we also have an enhancement freeze overall, and I'm trying to find the dates.
B: Where are we... there we go. So the enhancements freeze is going to be Thursday, the 3rd of February, and the production readiness review freeze is one week before that, on the 27th of January.
B: So, as a reminder, the production readiness review freeze means that you should have your production readiness review questions ready for review, because the production readiness review team is a different team than the node reviewers, and it's pretty small.
B: So they need enough time to be able to look at all of the KEPs. And for the KEP freeze, which is one week later, your enhancement should be merged and good to go for the release; if it misses, then we will not be able to work on it this release. So I just wanted to add a reminder of that. Dawn, do we have Sergey today to go over PR stats for the past week?
D:
B: No, they could happen simultaneously. Okay, I see, thanks.
B: Sounds like we're ready for the next thing. Oh, I will mention: if you are interested in shadowing the production readiness review team, since it's only four people, feel free to go ahead and do so. I sent a note in the Slack channel, which is #prod-readiness. We are looking for more people; I'm also on this team, so that's why I say 'we'.
B:
A:
E: Here, yeah. Thanks, Tom. So my name is Abdullah; I'm co-chair of SIG Scheduling. Over the past few years we've been getting a lot of requests related to features focused on batch. We started to look at that area more closely, and we felt that, in general, core Kubernetes support for batch has lacked the kind of attention that we spend on services, and that, intentionally or unintentionally, caused some sort of fragmentation in the ecosystem of batch support on Kubernetes in general.
E: There are a ton of frameworks that were built on top of Kubernetes. That is not necessarily a bad thing; Kubernetes is by design an extensible system. But that fragmentation can sometimes cause confusion about what the best way to do things is, and some of the features, if they were to perform properly, needed better support from core Kubernetes as well.
E: Those features could be in the scheduler, some of them, and even in the kubelet. Specific to the kubelet, for example, better support for NUMA, which we've made a ton of progress on, but we can still do even more to make end-to-end support for these types of features even better.
E: All of this basically triggered creating a working group: a forum for pooling all these minds who are always thinking about batch and who have batch as their main use case for deploying workloads on Kubernetes.
E: Just to try and declutter, let's say. Or, I don't want to say try to converge on one way to do batch, so much as: okay, we have this problem, that we need support for batch. On which layer do we need to support it? How much support should we get from core Kubernetes, and how much should be left, for example, to higher-level frameworks built on top? And we came across several features.
E: For example, the Job API, queuing, and again node-level features like support for hardware, specifically for HPC-type workloads. So we came up with this proposal for creating a batch working group for exactly this purpose. I tried to propose that to SIG Node; I think I actually tagged Derek on the steering committee channel, asking him, as one of the co-chairs, to look at the charter as well.
E: One of the concerns from Derek is that we might be fragmenting the discussion of some of these features between multiple groups, and I'm here also to discuss how we can alleviate that.
E: I want to explain here that the batch working group's goal, again, is to bring together people who are interested in batch and have them start thinking about all these features in the context of batch. Take a feature like the one we are going through right now, dynamic resource allocation, which could serve some batch use cases.
E: I wouldn't expect, for example, such a feature to be discussed in detail in the batch working group, but I would expect it to be brought to the batch working group to understand: okay, how can we use this to solve HPC problems? Does it solve our use cases? Not the exact implementation on the kubelet side or on the node side.
E: The other item that I want to close with is that, in the context of the working group, we want to bridge the gap as well with other working groups outside Kubernetes, like in the CNCF. We want to have some sort of point of contact for these groups, to make sure that we are also covering the use cases that they have in mind, or what they are trying to do, and to have people who understand these types of workloads and are able to communicate these requirements back and forth.
E: So that is our pitch, I guess, and our ask is that SIG Node would be a correspondent to this working group. And yeah, if you have any questions, please let me know.
F: Hey Abdullah, thanks for the overview. I think, from the Red Hat side, our concern is mainly that we have a lot of groups covering parts of this, and we are worried that this will cause further fragmentation. There was a resource management working group, the topology-aware scheduling discussions, and so on, and I think we were doing a good job having all these subgroups work with SIG Node, providing a good API and making progress on these topics.
F:
A: So there's a lot of effort across several SIG groups, and I hope there's something, maybe outside of SIG Node or SIG Scheduling, in a different working group, so that they could, based on real end-to-end experience, scope out what the requirement is for each SIG group. It would kind of help each one define the interface and how we are going to support batch workloads. We have so many problems, and then which SIG is responsible for what, right. In the past we basically needed to start a lot of effort from SIG Node, and it always took a really long time when we presented the API, when we talked to SIG Scheduling, when we talked to some other SIGs; it took a really long time to make progress.
A: So this is why I think maybe this is a new way of driving such work. For this kind of support, the majority of the work is actually in just one or two SIGs, but at the end you need the other SIGs' support.
A: I originally wanted to ask this in the proposal, but I decided... anyway, you came to SIG Node to talk, right, and I don't want it to be like in the past. Would you agree that the working group will be temporary? It's not like a SIG, which would be permanent.
E: This is a great question. Let me first answer what you mentioned as well about fragmentation.
E: I would hope that people who are working on, for example, topology-aware scheduling would fold into this working group, and this is why we are all trying to actually get people from SIG Apps, SIG Node, and SIG Scheduling to have their eyes on this working group, and to have people from diverse SIGs and diverse companies join us: again, one place, one Slack channel, one meeting where we can discuss these issues and find synergies as well with other batch-related features. For example, on the resource management side, we want to discuss queuing.
E: How is that going to work with queuing in general? How do I express these types of resources as a type of resource that I want to manage and implement fair sharing for? So that is what I wanted to shed some light on: the end-to-end experience would be our goal; we want to make sure that all the pieces fit together. Now, regarding the exit criteria.
E
So
we've
tested
this
problem
this
question
frequently
and
we
don't
have
like
a
a
clear-cut
answer
here.
It's
a
new
working
group
we
have.
Our
first
goal
is
to
propose,
like
we
do
have
goals
in
the
charter.
What
is
in
scope?
What
is
not,
but
we
want
to
set
a
clear
roadmap,
what
the
exact
things
that
we
are
trying
to
do
in
the
medium
term.
E
That
includes
enhanced
continuing
to
enhance
the
job
api,
introduce
new
ideas
around
job
queueing,
because
this
has
been
like
a
very
frequently
asked
feature
how
it
fits
with
the
rest
of
the
of
the
controllers
that
we
have
in
the
system
with
or
scale
the
scheduler
etc
support
for
hpc
workloads.
Now
there
are
two
paths
for
for
this
for
the
working
group.
To
conclude,
we
either
conclude
that
there
are
clear
cut
con
code
ownership,
for
example,
if
we
ended
up
with
new
apis
under
the
batch
under
api,
slash
batch.
E
Those
would
be
owned
by
this.
For
example,
a
sig
batch.
The
controller
that
we
have,
for
example,
for
queueing,
would
be
owned
by
batch.
The
any
scheduler
plugins
would
be
the
same
thing.
I
view
it
slightly
similar
to
sig
storage
in
a
way
like
save
storage,
they
do
on
their
own
controller,
but
they
also
have
code
that
is
in
that
they
own
in
in
the
scheduler
that
they
have
part
of
it.
I
think
in
cubelet
as
well.
They
they
have
influence
over.
So
that's
one
one
exit
path.
E
The
other
exit
path
is
well.
We
figured
that
we
don't
really
need
to
have
this
explicit
sig.
We
dismiss
and
the
the
sigs
that
on
the
core
that
they're
currently
on
the
card,
they
continue
to
own
it
and
and
and
maintain
it
our
like
any
new
apis.
We
like
that
are
higher
level.
E
We
expect
that
they
would
be
under
sick
apps
if,
if
we
didn't
have
a
sick
batch
for
example,
eventually,
because
it's
a
type
of
workload
basically-
and
that
is
the
closest
thing
we
can
associate
with
at
the
higher
level.
E
And
so
like,
and
when
do
we
get
to
that
decision
point
I
unfortunately
I
can't
answer
right
now.
It
is
we
could.
For
example,
we
have
a
lot
of
you
know
cloud
providers
here
we
want
to
see
like
a
significant
increase
in
amount
of
workloads
deployed
on
on
kubernetes
services.
We
want
to
see
that
if
we
see
that
trend
happening,
we
and-
and
we
are
able
to
say,
okay
kubernetes-
is
batch
friendly.
I
guess,
then,
maybe
we
make
that
decision
at
that
point,
I
don't
know.
A: It looks like the agenda is for this working group to figure out what the scope is, first thing, and also how to exit. And if, at the end, we have a SIG Batch, I would be really concerned about the fragmentation, and then I would agree with Derek's comment about separation, right. There's potential fragmentation, because this is just batch workload, right, one type of workload, and then where do we end up? Could we have a SIG GPU, a SIG ML workload, and a SIG for everything else?
A
What
whatever
things?
I
basically
think
about
the
this
work
group
is
just
figure
out.
What's
today's
problem
as
the
kubernetes
right,
we
are
the
kubernetes
community
and
what's
the
problem,
how
we're
going
to
address
and
then
define
the
work
item,
high-level,
cut
and
distributed
to
the
each
sig
and
to
take
your
ownership.
Then
we
can
work
up
and
also
we
need
a
separate
work
group
and
actually
the
supervise
of
babysitter
that
progress,
the
healthy
progress
and
over
time.
So
we
can
collaborate
across
many
six.
One
of
the
part.
It
is
the
patchwork,
node
issue.
A
Actually,
what
you
described
we
recognized
couple
years
ago,
I
even
reached
out
to
seek
apps
and
even
like
the
summer
of
the
sake
of
scheduler.
The
problem
is
because
the
problem
is
on
the
node
side,
and
also
many
customers
didn't
complain.
So
people
don't
want
to
prioritize
those
work
so
right
now
we
have
like
customer
use
cases
and,
and
the
user
complains,
so
it's
kind
of
easy
to
prioritize
so
so.
This
is
why
I
feel
like
there's
the
value
for
this
workload,
but
I
do
resonate
the
potential
of
the
flag.
C
And
I
have
another
question:
I
I
really
like
what
you're
doing
you're
saying
about
criticization.
I
would
be
really
interested
to
understand
what
the
problem
is
batch
workload
experience.
We
talked
about
like
konuma
and
topology
a
curious
east
side.
Car
is
something
that
bushes
really
need:
south
korea
containers
or
maybe
pods
scattered
scheduling,
credences
that
we
discussed
maybe
half
a
year
ago.
C
Is
it
something
that,
on
top
of
the
mind
of
people
running
by
workloads,
so
improvisation
across
topics,
not
just
some
topics
that
on
the
focus
of
current
proposal,
would
be
really
interesting
to
know.
E: I completely agree. That's why the first item on the working group agenda is going to be to set that roadmap: set concrete goals, high-level things that we want to achieve. For example, again, NUMA support, or NUMA-aware scheduling. And then maybe, if we can find a solution for that using dynamic resource allocation, then great; we're going to put our weight behind that and make sure that it actually gets pushed forward in SIG Scheduling, and SIG Apps if necessary, because maybe we need a new controller in the kube-controller-manager, and there are also changes involved.
E: Those changes need to happen on the kubelet side too. So I would imagine that we would want to focus on maybe two or three high-level items initially, to prove that this working group is actually going to be successful and useful in achieving these goals.
E: And then we expand to other issues. I don't want it to be a grab bag of things that just distracts us from doing useful work. So that is the first item on the agenda: to set those things, as you mentioned, and those things are going to be set based on priorities.
B: I think it might address some of the concerns if you make the scope a little bit narrower in the charter, because there's the big bullet for node, and maybe all of the node things could have that sort of prioritization: we're going to do this first, here's the scope, and if that works, then we're going to move on to these things.
B: Just since Kevin's not here today, and I don't know if folks are going to have a chance to read the Slack thread, I wanted to include for the recording that Kevin Klues had shared a number of these concerns and pointed out that we used to have a working group for resource management, but that is now defunct, and there was a resource management forum, also defunct. So there are concerns about fragmentation around...
B
You
know,
cpu,
pinning
memory,
pinning
pneumo
or
scheduling
that
sort
of
thing
and
those
things
are
making
progress
right
now.
So
folks,
if
you
want
to
read
more,
I
added
a
link
to
the
slack
thread,
so
you
can
check
out
the
agenda,
but
I
just
want
to
make
sure
that
made
that
onto
the
recording.
E: So what would be the next step? So far we got plus-ones from SIG Apps, SIG Scheduling, even SIG Autoscaling. We've got the greenish light from the steering committee as well. So far... I don't know. I don't want to say, okay, let's keep SIG Node out for now, let's maybe focus on other things that don't necessarily touch SIG Node, or bring issues related to the kubelet to SIG Node ourselves.
E: We're going to bring them to the SIG later on, if we find that there are things that we need to do that touch node. I guess what I wanted to say is that what I would hope for is that someone from SIG Node would be present in these working group meetings, to give us their perspective.
B: Well, I think Danielle is proposed as one of the organizers, right, and she is very active in SIG Node, so there is that. I would say maybe part of the issue here is not necessarily that anything you're doing is wrong, but this hasn't been presented to SIG Node before, and it was listed as one of the sponsoring SIGs. So it's good to come here and chat and talk about scope and that kind of thing, and I think it's not so much...
B
Oh
no,
it
has
a
big
problem
with
this,
but
just
like,
we
haven't
even
had
a
chance
to
talk
through
any
of
these
things
before
so
now
that
you've
gotten
some
feedback
from
the
sig
and
worries
about
fragmentation.
Maybe
now
we
can
go
back
and
perhaps
like
reduce
that
scope
a
little
bit
in
the
working
group
charter
and
then
come
back
for
another
round
of
feedback
and
hopefully
should
be
pretty
straightforward.
E: So I would say, at this point, what we were trying to achieve is this: the charter is not going to really spell out what exact features we're going to pursue. That's going to be after we form the working group; the working group will sit together and say, okay, this is our order. And this is kind of what the steering committee gave us the green light on.
E: So what would you like to be exactly spelled out in the charter at this point, from SIG Node's perspective? Right now we're saying what we expect from collaborating with SIG Node: things related to HPC, again NUMA, anything related to resource allocation that also requires scheduling support, for example.
A: I think, in the previous discussion over the last 20 minutes, the concern here in SIG Node is the fragmentation, right. The fragmentation is real after this conversation, I think. So the idea is in the proposal, but I think the scope is a little bit abstract, so we should make that scope more concrete.
A: This was my earlier question: what problem exactly do we want to solve? Also, several of the activities, I believe, are ones SIG Node already has activity on, because for the dynamic resource allocation, or for NUMA, people have already talked to SIG Scheduling in the past. So I think about identifying what the problem is that you want to solve.
A
I
I
know,
but
it
is
too
high
level
the
problem
in
the
original
proposal
and
to
make
that
more
clear,
and
at
least
that's
the
first
milestone
identifying
those
kind
of
things
documented
things
and
to
support
the
first
goal
to
solve
the
first
problem
you
expected
or
participate.
The
signal
would
can
help
right.
So
you
could
say.
Oh
I
want
to
support
the
pneuma
sensitive
and
work
node.
A: It is a complicated problem; even within SIG Node it's a super complicated problem. That's why I hope this working group can really help with the collaboration across the multiple SIG groups: identify real problems, driving from the use cases and on top of the use cases. But we don't have that information in the proposal.
E: I can give you one concrete example for that: NUMA-aware scheduling. That's one very concrete example that we could have as one top-level goal that this working group would try to push forward, and it would push that forward by making sure that SIG Scheduling and SIG Node are engaged in actually making progress on solving that problem. So that is, I guess, how I think about it.
A: Yes, so that's why I want this working group to really play the role of collaboration: set up a way to track the progress and also the collaborations. So maybe come back after you scope it out more, and also identify the top issue you want to address and the expectation for each SIG group, how to participate, and then come back here. I do worry about the alternative: oh no, we just move forward without SIG Node. Then that's obviously a no from me.
A: Then I would say: okay, sorry, I don't support this separate working group. I want this group to solve real problems through collaboration, but if we find that SIG Node raised some concerns and questions and we just move forward without the most important component here, right... We already have many engineers in this group doing part of those problems, so you cannot move forward and not solve the problems.
E: If you want me to go back and explicitly spell out exact features, that is the role of the working group itself, so we'd kind of be putting the cart before the horse. I think what we have in the charter PR covers what we have in mind. There are four explicit items there, including the Job API, job queueing, and NUMA-aware scheduling, and it does describe the type of workloads we're targeting: workloads that are actual batch workflows, not the ones...
E
For
example,
there
are
services
but
masquerade
as
badge,
so
we
have
those
things
written
in
the
charter,
but
I
don't
want
to
get
deeper
into
specifying
these
things
in
the
charter,
because
I
don't
want
to
narrow
the
scope
too
much
to
make
it
not
useful.
I
want
to
have
a
discussion
with
the
community
on
this
right
like
I
want
to
understand.
Okay,
we
want
to
do
queueing,
but
I
I
what
type
of
query
do
you
want?
You
want
to
do
fair,
sharing,
et
cetera.
E
All
of
that
is
going
to
be
part
of
the
discussion,
so
I
do
feel
that
we
do
have
what
you're
asking
for
in
the
charter.
Pr
and
like
I
don't
have
anything
in
mind
to
add
there
to
make
it
more
concrete.
At
this
point,
I
feel
it
is
the
job
of
the
working
group
to
do
something
more
specific
and
more
concrete.
G:
B: I took a little bit of a look at the history of the inclusion of SIG Node, because it was surprising to me that this is kind of the first we're hearing of it. From what I can see, the working group was initially proposed without node scope; some people then went back and said, oh, but what about node, and the node scope was added. But then it wasn't discussed here. Now it's being discussed here, and node has concerns about fragmentation.
B
So
I
think
maybe,
like
perhaps
the
most
easy
path
forward
is
say:
okay,
we'll
limit
it
to
like
not
includes
node
scope
yet,
and
if
we
want
to
add
that
we
can
do
it
later
as
a
charter
change,
that
would
probably
be
the
most
straightforward
and
that
like
addresses
some
of
the
concerns
about
fragmentation.
B
Otherwise
I
think
you
know
there
might
be
some
back
and
forth
required
just
to
ensure
that
with
folks
in
node
that
there's
not
going
to
be
overlap
with
existing
efforts.
E
I'm
I'm
fine
with
that.
I
think
that
that
is
fair
and
that
probably
may
address
like
don
and
derek's
concerns.
Here
we
can
move
forward
with
the
current
working
group
charter.
Without
node
we
can
make
progress
on
the
road
map
and
then
dawn
and
derek,
and
the
ancient,
northern
and
elena
can
look
at
it
again
and
say.
Okay
now
we
are
on
board
with
this
specific
feature
that
you
have
and
but
I
guess
I
don't
I
don't
like
yeah.
E
It
doesn't
seem
to
me
that
whether
or
not
we
add
the
sig
note
explicitly
in
the
chart,
what
is
it?
What
does
it
change?
I
guess
that's
the
other
question
that
I
have
as
long
as
we
have,
for
example,
danielle
from
sid
node,
who
has
the
unknown
experience
of
tubulin,
give
us
that
perspective
of
what
can
be
done,
what
can't
and
what
makes
sense
and
what
doesn't
at
least
in
in
in
some
of
these
discussions.
E: And concrete discussion input on some of the features that we're trying to pursue.
A: Actually, maybe, on this topic: I do worry about the fragmentation, but I disagree with the proposal here. I think that to avoid further fragmentation we should include node, because SIG Node is working on a lot of the work listed here. It would help the feature owners to understand, and also to literally get help from the other SIGs, because for the dynamic resource allocation we've been through this a couple of times, even in the resource management working group. The reason in the past we didn't fly, we were slow...
A
We
didn't
make
progress
all
the
time,
but
during
the
flight
it
is,
we
are
like
about
other
sig
groups.
Sig
node
is
more
advanced
to
know
those
problems.
We
estimate
those
efforts
upfront
before
our
user
really
reach
those
problems,
but
within
flights,
because
other
significant
priorities
don't
want
to
help.
A
That's
it
so
now
it
is
the
opportunity
other
stakes
realize
they
also
have
the
problem.
User
complain
and
risk
the
consent
they
want
to
solve.
The
problem
signal
to
just
say:
oh,
I
already
started
something
and
don't
want
to
move
forward.
We
want
to
separate
those.
I
don't
know
what
to
the
goal
we
want
to
achieve.
I
I
just
want
to
make
it
clear.
A
I
do
have
the
concept
of
fragmentation
and
but
my
concept,
I
think
what
we
can
handle
through
this
work
group
and
through
the
technical
way
on
mechanical
way,
how
to
avoid
those
things.
We
basically
need
to
surface
the
effort
to
signal
the
effort
to
there.
We
already
have
the
representative
and
also
the
owner
for
those
features.
We
just
identified
those
features,
and
then
they
got
the
they
got
like
the
height
of
the
kubernetes
community.
A
Had
that,
and
also
the
steering
community
have
that
visibility
well,
which
help
the
whole
community
right
help
each
other
this
one.
So
that's
why
I
don't
want
to
like
to
see.
I
don't
understand
the
fragmentation,
it
is.
Oh,
we
just
don't
include
of
the
signal
and
separate.
This
is
more
concern
for
me.
That's
the
fragmentation
right.
E
Now
there
is
not
like
to
not
include
signal
because
like
but
bring
in
signal
like
features
as
much
as
well.
If
we're
not
including
signal,
then
most
likely
we're
not
going
to
prioritize
works
that
are
not
related
to
signal
in
the
beginning,
until
we
have
a
more
concrete
broadband,
what
we
want
to
do
there
to
act
to
answer
your
question
about
what
exactly
you
want
to
achieve
with
with
with
signals,
and
so
because
it's
again
it's
a
chicken
and
heat
problem,
your
the
the
feedback
was
give
us
something
about
concrete
and
from
us.
E: If you can't approve this proposal at this point, then the next option, as I mentioned: okay, let's move SIG Node out if this is a big concern for SIG Node; focus on things that are not involving SIG Node so that we can make progress, and then, once we have a roadmap, we bring it back and include SIG Node again. So I guess those are the two options, because at this point I don't think there's anything I can do with the charter that we have. I don't know what else I can do there, to be honest.
A: So let's focus on how to prevent the fragmentation. And also, Derek is another chair, and he didn't approve, right. This is the community, so we have to spend the time to convince people. That's why I just want to share here: I personally think this working group will be meaningful and useful, so I try to explain to the community here what my real reasoning is. But I also have to listen to other people; it's not just about whether we approve or don't approve.
A:
E: I guess my answer here was just about the action item: that we need to do something and come back, but I did not really understand what that something is that we need to come back with. I was trying to explain that we already have that in the charter, and anything extra needs to be done as part of the discussion we will have after establishing the working group.
B: I didn't even realize that Dawn had shared her approval already; I didn't see that. So, you're off mute, do you want to jump in?
H: Yeah, so I think topology-aware scheduling, NUMA-aware scheduling, was mentioned a couple of times. One of the items that I want to highlight: I agree with Dawn here. For topology-aware scheduling to actually succeed, node is an integral component, and if, in the batch working group, that is one of the topmost items, I think it cannot progress without additional improvements in node.
H
So
I
think
we'd
have
to
figure
out,
and
the
other
thing
I
want
to
mention
is
we
have
an
existing
bi-weekly
sync
that
we
have
for
topology
scheduling
where
all
the
stakeholders
and
the
contributors
need
to
discuss
the
next
steps,
so
we'd
have
to
figure
out
a
path
for
how
do
we
converge?
H
Do
we
continue
those
or
do
we?
You
know,
wrap
those
up
and
join
the
meetings
with
this
working
group.
So
I
think
those
are
the
things
that
would
be
beneficial
if
we
covered
those,
even
even
as
part
of
the
charter.
F: Yeah, and I think part of my question is: have we considered joining one of those existing groups? Are Swati and folks from the topology-aware scheduling effort already aware of this thing happening? Or is there suddenly a new group, and now they are disrupted and have to go and join this new group, rather than the new group joining this effort, which is already underway?
A: Yeah, this is exactly the problem I raised here. I hope we can figure out logistics, or a tactical way, to avoid the disruption. But at the same time there are also some other items, efforts in the SIG Node community, where, when we identify them, we basically need to drive them from the bottom up, right. So how are we going to merge those kinds of things, find the boundary between them, and then identify the engagement protocol, so that we can accelerate the whole effort?
A: But there are also some other efforts, like the Job API and also the scheduler, because all those kinds of things are connected. So how are we going to better engage those proposals and the community, and include them? That's why I hope the proposal identifies those ongoing efforts and gives a proposal for how to engage them, even if we don't have clarity yet on how to do it.
B: Right, sorry, so we have three more items on the agenda, and I think that probably we won't be able to come to a 100% perfect consensus today. My recommendation at this point would be to maybe take away some of the discussion here. SIG Node does meet weekly, so if we want to sync up and talk about this again next week, there won't be too long of a feedback loop, and perhaps in the interim some of the folks who provided feedback, like Swati, Derek, Kevin, etc.
E: Sounds great. So I guess, to me then: the charter PR. Please comment on the PR, please add your concerns there, and then we can discuss it on the PR. Does that sound like the most concrete path?
G: I will request something specific: if you have specific wording, signal, okay, this should be the section, this phrase, this sentence should be part of the charter, and this should be removed, or this line should be clarified. Because yeah, we want to progress with this ASAP so we can start meeting with the community; we have proposals that we want to discuss. So yeah.
I: Hi, so there isn't much to update since last week. I think the only new thing that came in is that the enhancements freeze is coming up, and I've updated the enhancements PR, 3153. Dawn, I think you can merge this once you take a look; all it does is update the targets. And for the code PR, I think we're waiting for Derek; I don't think I saw any feedback from him, and I know he's out this week. We want to wait for Derek to come in and take a look, and if he sees anything that's crucial, we want to fix it before we merge this. And Elana, if you get a chance, please take a look at that one issue. I think we're down to one alpha blocker. I did a little bit of code refactoring: it was all inline, and I moved it into its own helper function, and I couldn't spot anywhere that it seems to be making any mutations; it's all deriving from what's there. So that's my take on it. When you get a chance, please take a look at this last remaining alpha blocker. Okay.
B: Yeah, I chatted with Derek on Friday, I guess before he went out, and he said he had a review in progress, so I'm mostly going to be deferring to him a little bit on that. I guess he hasn't had a chance to submit it yet. So yeah, at this point I'm just waiting on Derek's feedback. It's quite a large PR, so it just takes quite a long time to go through. So I'm kind of waiting to let Derek put his feedback in, and then, you know, work through that, and then I'll do another pass.
J:
I: Definitely want to get Derek's input on this one, so yeah, we'll take a look at it next week. I think there should be no issue with the enhancement; merging that is a housekeeping item. I think there's some deadline coming up for that; it's still a couple of weeks out, but we don't want to let that slip through the cracks. Thanks a lot.
B: Thank you. Cool, next up I believe we have Rodrigo with user namespaces.
J: I think yes, thank you.
K:
J:
F: Rodrigo, I made a pass before the meeting today and left some minor comments. I think one thing we need to clarify is that for some of the phase three items the design is still open; just state clearly that this part is addressing phase one and this is for a phase two item. Be very clear about it. And I think one final thing we need to close on with Tim Hockin is the defaulting behavior, whether we want it to be opt-in or not.
K:
C: It was about 1.23, but we may need it already.
K: No, we didn't add that yet. I think we need to. We have a lot of changes in the previous KEP, and maybe we can simplify them a little bit. We need to add, basically, a way in the CRI protocol to express the mapping that a pod is going to use, and when it should create a user namespace or not, but that's not clarified in the KEP yet.
C: That would be great to have in the KEP, but I don't think it is strictly blocking. I think that comment was made before 1.24, on the previous version, because in 1.23 we had so many changes, with the graduation from v1alpha2 to v1, and it was a lot of changes for a single release. In 1.24 it may be easier.
B: Cool, next on the agenda we have, I think, Cho Tongs, and something about ambient capability.
J: Yeah, hi. I just wanted to ping this PR for Vignette; I don't think he usually attends this meeting. This is about the ambient capability KEP, which is already approved; this is just the CRI changes PR, and the size of the change is not much.
H:
J: Yamaruna is already doing that. Thank you.
J:
B:
C: Yeah, we only have five minutes, but I just wanted to shout out to all the people doing a great job closing and merging PRs. I don't want to call names; all the approvers know who they are. Most of the work-in-progress PRs were closed; there is nothing rotten in these 35, it's all intentionally closed, and quite a lot were reviewed. 17 PRs have merged. Good job, everybody; let's keep it up.
B: Sounds like that's a wrap. Nice to see everybody, and hope to see you next week at SIG Node, same time, same place. Thank you. Cheers.