From YouTube: Kubernetes SIG Node 20180424
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
C: So I'll try to give a brief overview of what this proposal talks about. The main motivation for the first phase is node-level user namespace support. What that means is, well, just before I go into that: do we need an overview of what user namespaces are and how they work, or can I assume that?
C: So basically, a user namespace is another namespace in the kernel that allows you to map host UIDs to different IDs inside the container. So basically you can have UID 0 in your container be mapped to a non-zero UID on the host. This is powerful because it allows you to do root-like operations inside the container while being non-root outside. So in case your container process is able to escape the container somehow, the node is protected from the container. And the way this works is:

C: There's a mapping, and the mapping specifies three things: the beginning host UID, the beginning container UID, and the size of the mapping. So typically we want the container UID mapping to begin at zero, and then we make it a reasonable size, like 65k or 30k or something, and pick a starting host UID. So for phase one, the idea is that the node will have a single user namespace mapping. This is similar to how Docker has support for user namespaces.
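As an illustration, here is a minimal Go sketch of how such a (host start, container start, size) triple translates container UIDs to host UIDs; the values 100000, 0, and 65536 are just example numbers, not anything the proposal fixes.

```go
// Minimal sketch of a single user namespace mapping triple.
package main

import "fmt"

type IDMapping struct {
	HostStart      uint32 // first host UID in the range
	ContainerStart uint32 // first container UID in the range
	Size           uint32 // number of IDs in the range
}

// HostUID translates a container UID to its host UID, reporting false
// when the UID falls outside the mapped range.
func (m IDMapping) HostUID(containerUID uint32) (uint32, bool) {
	if containerUID < m.ContainerStart || containerUID >= m.ContainerStart+m.Size {
		return 0, false
	}
	return m.HostStart + (containerUID - m.ContainerStart), true
}

func main() {
	m := IDMapping{HostStart: 100000, ContainerStart: 0, Size: 65536}
	host, ok := m.HostUID(0) // container root
	fmt.Println(host, ok)    // 100000 true: root in the container is unprivileged on the host
}
```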
C: So that's phase one, and to enable that we are proposing one flag, or feature gate, which is NodeUserNamespace=true. When that is set, we propose that the kubelet uses a new CRI API called RuntimeConfigInfo to get the user namespace mapping from the container runtime. If you want to take a brief look at the changes here: we copied the settings from the OCI runtime spec, so it's consistent and looks similar everywhere.
C: So you have a LinuxIDMapping with the container ID, host ID, and the size of the mapping, and then you have the new call, RuntimeConfigInfo, which returns the user namespace config. The user namespace config has an array of UID mappings and an array of GID mappings. Typically the UID mappings and GID mappings will be the same, but we should expose both for future flexibility, in case there are ever use cases where we want them to be different; under the hood we can keep them the same.
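For reference, a hypothetical Go mirror of the messages being described, copied in spirit from the OCI runtime spec types; the field and type names here are assumptions, not the exact proto from the PR.

```go
// Sketch of the proposed CRI user namespace messages.
package cri

// LinuxIDMapping mirrors the OCI runtime spec's id-mapping triple.
type LinuxIDMapping struct {
	ContainerID uint32 // first UID/GID inside the container
	HostID      uint32 // first UID/GID on the host
	Size        uint32 // length of the mapped range
}

// UserNamespaceConfig is what a RuntimeConfigInfo call would return.
type UserNamespaceConfig struct {
	UIDMappings []LinuxIDMapping
	GIDMappings []LinuxIDMapping // typically identical to UIDMappings
}
```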
C: For now, one more thing that is needed is the ability to disable the user namespace. I think some time in the past we had added a feature gate to default to the host user namespace, and that could be done on the basis of the pod selecting any other host namespace or volume source. So typically, when you want PID, IPC, or net to be host, you'll also want your user namespace to be host; otherwise you run into permission-denied errors that don't make sense.
C: So whenever we see that one of these is set to host, we automatically set the user namespace to host. And then there are non-namespaced capabilities, like MKNOD or SYS_MODULE, or the case where the pod is trying to use a hostPath volume. So basically, when you set this flag to true, then on the basis of these conditions the host user namespace is selected.
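A minimal sketch of the defaulting just described, assuming a hypothetical kubelet-side helper; the function name is illustrative only, and the privileged check stands in for a fuller check of non-namespaced capabilities such as MKNOD or SYS_MODULE.

```go
// Sketch of host-user-namespace defaulting for a pod.
package kubelet

import v1 "k8s.io/api/core/v1"

// wantsHostUserNamespace reports whether a pod should fall back to the
// host user namespace: any host namespace, a hostPath volume, or a
// privileged container that may need non-namespaced capabilities.
func wantsHostUserNamespace(pod *v1.Pod) bool {
	if pod.Spec.HostNetwork || pod.Spec.HostPID || pod.Spec.HostIPC {
		return true
	}
	for _, vol := range pod.Spec.Volumes {
		if vol.HostPath != nil {
			return true
		}
	}
	for _, c := range pod.Spec.Containers {
		if c.SecurityContext != nil &&
			c.SecurityContext.Privileged != nil && *c.SecurityContext.Privileged {
			return true
		}
	}
	return false
}
```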
C
We
need
to
expose
that
in
the
CRI
to
pass
it
down
to
the
runtime,
so
here's
another
value
in
the
namespace
options
for
user,
and
for
that
one
time
support.
We
need
to
add
another
line
for
continuity,
but
basically
docker
already
has
this
flag.
Cryo
has
a
work-in-progress
and
Aikido
mentioned
that
continuity
also
supports
user
namespace
mapping.
So
there's
a
comment:
we'll
add
it
to
the
cap
for
continuity
and
the
plan
is
there'll
be
two
phases.
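A hypothetical extension of the CRI NamespaceOption, adding a user namespace mode alongside the existing pid, ipc, and network options; the enum follows the CRI convention, but this exact shape is an assumption, not the merged API.

```go
// Sketch of a NamespaceOption extended with a user namespace field.
package cri

type NamespaceMode int32

const (
	NamespaceModePod       NamespaceMode = 0 // pod-level namespace
	NamespaceModeContainer NamespaceMode = 1 // per-container namespace
	NamespaceModeNode      NamespaceMode = 2 // host namespace
)

// NamespaceOption selects which namespaces a sandbox shares with the host.
type NamespaceOption struct {
	Network NamespaceMode
	Pid     NamespaceMode
	Ipc     NamespaceMode
	User    NamespaceMode // the newly proposed field
}
```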
C
The
Alpha
is
when
we
enable
the
node
user
name,
space
gate
and
add
the
CRI
API
and
the
cubelet
will
handle
volume
source
churning
once
it
gets
back
the
user
namespace
and
for
back
from
the
runtime
and
for
docker
shim
it's
for
docker,
sim
we've,
given
some
implementation
details
and
how
we
can
implement
the
runtime
config
inference
for
the
new
daemons
I
think
like
cryo
and
continuity.
It
should
be
easier
to
just
return
the
default
user
name.
Space
mapping,
so
docker
info
doesn't
return
the
mapping
today.
C
It
tells
you
that
user
name
space
is
enabled
in
the
daemon,
but
we
can
take
a
look
at
Etsy
sub
UID
file
in
which
it
has
a
dock,
remap
user
and
we
can
get
the
mapping
from
that.
If
we
see
the
docker
info
is
showing
the
showing
the
daemon
is
configured
with
user
name
spaces
beyond
that,
like
for
phase
two,
we
also
plan
to
add
a
tunable,
so
the
the
user
can
request
host
user
namespace
to
be
true.
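A rough sketch of how dockershim might recover the mapping from /etc/subuid when docker info only reports that remapping is on. The "dockremap" user is Docker's default remap user; the helper name and error handling here are assumptions.

```go
// Sketch: read the subordinate-UID range for a user from /etc/subuid.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// lookupSubUID scans /etc/subuid for a "user:start:count" entry.
func lookupSubUID(user string) (start, count uint32, err error) {
	f, err := os.Open("/etc/subuid")
	if err != nil {
		return 0, 0, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		parts := strings.Split(strings.TrimSpace(s.Text()), ":")
		if len(parts) == 3 && parts[0] == user {
			st, _ := strconv.ParseUint(parts[1], 10, 32) // parse errors left unhandled in this sketch
			ct, _ := strconv.ParseUint(parts[2], 10, 32)
			return uint32(st), uint32(ct), nil
		}
	}
	return 0, 0, fmt.Errorf("no subuid entry for %q", user)
}

func main() {
	start, count, err := lookupSubUID("dockremap")
	fmt.Println(start, count, err)
}
```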
C: Now, why do we need it? Defaulting will work for most of the cases, but in some cases it isn't clear whether we want the host user namespace to be true or not. If you say "I want a privileged pod", you could mean that you want all the capabilities, but it's not clear whether you want the user namespace to be turned on or off. So that's why it will be useful to make it explicit for those cases.
C
After
all,
this
is
done
that
is
way
out
in
the
future,
because
we
still
need
features
in
the
kernel
and
other
changes
to
make
it
more
performant
and
useful,
because
the
kernel
doesn't
have
the
file
system
changes
where
you
can
share
the
same
layers
between
the
two
parts
without
joining
them
and
if
you
chew
on
you
end
up
using
too
much
this
space,
so
we
can
go
into
that
later
on.
So
that's
that's
an
overview
of
the
proposal,
because
is
anything
I
forgot
to
mention
you.
C: The way capabilities work is that capabilities are tied to a user namespace, so theoretically, with user namespaces, we can actually grant more capabilities, assuming the kernel doesn't have bugs. And as far as kernel bugs go, I mean, this is a feature gate; as more people try it out, more vulnerabilities will be found, and the kernel will become more secure over time. I don't think that not trying to use it at all is a good way to mature the feature.
F: I think a lot of those kernel vulnerabilities are exploitable simply if the kernel has user namespace support enabled. So even if you're not using or running in a user namespace, simply having access to that API exposes you, so I don't really think this is increasing the attack surface very much.
C: Yeah, so container security is like an onion; we're just adding another layer. So on top of all the things we have today, we are also enabling user namespaces, and we are hoping that it gives us more security, and that over time you'll see fewer bugs in that area. So yeah, SELinux will still be there; all the other features will still be there.
C: And one more thing I would like to mention: we are proposing two flags here, but I think most likely we would also want to fold these two flags into a single flag, because I don't see a lot of cases where you would enable node user namespaces but not enable host user namespace defaulting, because you would always have some pods which will run in the host namespace or something. So, depending on, I mean, I'm looking for feedback here, I think.
B: I tend to agree, and I think, given the way, like, the issue had always been: do you run this as an admission controller to actually get the defaulting logic to take place? And I think the API machinery is moving towards a path where it's harder to disable these things; it's more like you have a fixed ordering as it gets started, so I might have that wrong.
B
So
I
guess
I
guess
from
a
timeline
perspective
or
just
like
a
reviewer
perspective,
I've
gone
I
had
volunteered
to
Shepherd
this
through
the
process,
but
I
had
wanted
to
get
an
act
from
probably
yuju
on
the
CRI
changes,
if
possible,
but
I'm
just
curious
if
there
any
major
concerns
of
what
was
discussed
here.
If
not
we'd
like
to
kind
of
start
proceeding
on
the
implementation
side.
C: So we want to do that in the future, but I can go into some details. The problem right now is: if there are two pods and both of them are in different user namespaces, we have to chown the layers for them to use the rootfs, and when you chown the layers in the rootfs, you end up making copies. We basically need kernel support so that we don't end up copying.
C
So
we're
kind
of
waiting
for
that
to
mature
in
the
kernel,
however,
I
mean
after
we
go
through
this
first
phase.
We
can
still
take
choose
to
take
the
hit
of
churning
the
file
if
you
can
afford
a
disk
space
and
enable
like
per
pod
or
a
per
namespace
user
name.
Space
setting,
I
think
per
pod
might
be
too
granular.
C
Click
per
name
space
might
be
a
might
be
a
logical
next
step,
and
then
maybe
if
we
can
allow
users
to
drill
down
further,
but
but
I
think
that
that
might
be
going
too
much
putting
too
much
burden
on
the
user
just
enabling
it
at
the
namespace
level
might
be
better.
So
just
users
don't
have
to
change
the
pod
configs
or
anything
it
just
works
for
them,
but
I
guess
we
can
discuss
that
when
we
get
there.
It.
I
C
Right
yeah,
so
so
for
a
part
you
can
yeah.
Definitely
you
know
how
to
use
the
same
username
space,
nothing
because
you
you
share
the
other
namespaces
and
if
you
run
into
issues,
if
you
don't
share
the
username
space
as
well,
but
what
we
talk,
I
thought.
The
question
was:
why
don't
we
allow
a
different
username
space
per
pod.
B
We're
also
trying
to
tackle
this
like
in
a
crawl,
walk,
run
type.
Mind
sets
so
like
at
least
that
Red
Hat,
we
feel
like
having
a
cluster
wide
knows
our
node
user
name.
Space
remapping
would
simply
improve
the
security
of
our
clusters
today,
and
you
know
forecasting
a
little
further
out
is
probably
something
that,
like
you,
know,
I'm
not
really
comfortable
doing
until
we
actually,
you
know,
see
that
first
next
incremental
stuff,
so.
C
So
we
can
have
like
a
different
feature:
gate
and
I
mean
or
like
you
can
be.
Either
this
enable
node
level
or
either
you
can
enable
like
a
pod
pod,
slash,
namespace
level,
user
namespaces
and
depending
on
what
flag
you
set,
the
different
user
name.
Spaces
will
be
sent
down
for
namespace
or
you
know,
default
to
the
node
name.
Yeah.
B
Tomatoes,
like
is
a
the
benefit
of
protecting
the
hosts
from
the
workload,
is
what
this
is
trying
to
support.
I
think
that's
secondary
benefit.
If
you
do
something
at
the
namespace
level
and
Manny's
terms
are
also
overloaded.
It
is
really
if
you
have
like
distinct
tenets
sharing
a
common
node,
and
you
know
I,
think,
there's
a
lot
of
other
things
to
work
through
when
you
talk
about
separating
environments
for
that
that,
but
that's
why
we
couldn't
deferred
that
beyond
the
first
phase
and
just
said:
let's,
let's
protect
the
the
host
from
the
workload.
K: Alright, so you guys can see this now. So the main change: right now, for config maps and secrets and projected volumes, we already have a cache that we maintain on the kubelet side, and that cache has a TTL. The behavior is: if the user updates the config map, then the volume gets remounted inside the pod, and the mechanism is basically on each pod sync.
K
We
asked
up
the
volume
manager
to
check
the
the
PVC
oath,
so
the
contract
map,
and
if
the
country
Mac
requires
remount,
then
we
basically
remount
the
the
volume
inside
the
pod
so
for
online
resizing.
We
need
something
similar
online
resizing
if
you're
not
familiar
with
is
like
is
a
resizing
of
persistent
volumes.
That
means
gcpd,
EBS
and
all
the
other
volume
types
and
we
implemented
resizing
in
1.8
and
we
implemented
filesystem
resizing
in
1.9
and
we
have
it
working.
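A minimal sketch of the TTL-cache-on-pod-sync pattern being described, applied here to PVC refresh rather than the actual kubelet config-map code; all names are illustrative assumptions, not real kubelet types.

```go
// Sketch: TTL-gated refetch decision on pod sync.
package main

import (
	"sync"
	"time"
)

// ttlCache remembers when each PVC was last fetched from the API server,
// so pod sync can use a cached copy instead of hitting etcd every time.
type ttlCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	fetched map[string]time.Time // PVC name -> last fetch time
}

// needsRefetch reports whether the cached PVC is stale and should be
// re-read (the point at which a pending resize would be noticed).
func (c *ttlCache) needsRefetch(pvc string, now time.Time) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	last, ok := c.fetched[pvc]
	if !ok || now.Sub(last) > c.ttl {
		c.fetched[pvc] = now
		return true
	}
	return false
}

func main() {
	c := &ttlCache{ttl: time.Minute, fetched: map[string]time.Time{}}
	_ = c.needsRefetch("my-pvc", time.Now()) // true on the first sync
}
```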
K: The only requirement currently is that, for the volume types that have a filesystem, you have to delete and recreate the pod for the filesystem resizing to be finished, because the resizing is only done when the pod is started: during the mount process, after the mount but before the volume is exposed to the container. We resize the volume at that point in time. So that's what this online-resizing proposal addresses.
K
So
so
there
are
some
bunch
of
like
interface
and
how
it
will
look
like
and
all
the
description
here,
but
underlying
thing
that
I
wanted
your
opinion.
You
guys
opinion
is
like
if
the
caching
will
be
the
cache
from
the
PVC.
Sop,
is
ok
and
or
always
cause
a
load
on
the
it
city
server,
and
then
we
have
like
will
have
this
cache.
Cache
is
probably
existing
in
all
the
nodes.
So,
like
all
the
nodes
where
the,
where
the
PVCs
mounted
and
in
use
so.
B: I mean, I think from a performance standpoint it'd just be a matter of measuring, probably tied to average cluster sizes, I guess. You're aware of our cluster sizes; from what you've seen when you were doing the research here, did you have a major concern, or did one come up? No?
K
I
think
the
cluster
size
I
think
it
was
it.
Is
it
shouldn't,
be
a
problem
but
like
we
have
to,
you,
probably
have
to
do
some
measurement,
a
lot
like
finding
the
large
enough
clusters,
but
we
haven't
done
it
like
for
like
how
it
will
behave
like
when
we
have
ten
thousand
PVCs
and
five
thousand
PVCs.
How
does
caching
and
each
node
yeah
pulling
four
PVCs?
Basically
for
him?
For
me,
this
is
because
not
always
but
like
whenever
the
TTL
expires
will
look
like.
K: Yeah, so we welcome comments on this one, and we can do this offline on the PR. Cool, all right. So the next item is dynamic attached-volume limits, and on this one we had some discussion with Tim and others upstream. The idea, if you're not familiar with this: how we have things currently is that the Kubernetes scheduler hardcodes the number of attachable volumes for GCE and AWS.
K
I
think
this
is
the
maybe
else
you're
I'm,
not
I'm,
not
sure,
but
DC
and
AWS
are
hard-coded
right
in
the
scheduler,
and
if
we
try
to
increase
for
AWS
like
more
than
39
volumes,
we
try
to
schedule
on
a
node.
Then
we
just
don't
consider
that
node
for
scheduling
this
is
the
Maxwell
in
predicate
that
rejects
the
node.
Obviously,
this
does
not
scale
for
other
volume
types
like,
for
example,
the
one
that
are
being
introduced
so
CSI,
or
even
the
entry
volume
types
like
unless
the
heart
could
exist
in
the
in
the
scheduler.
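A toy sketch of the kind of hard-coded predicate being described, with a deliberately simplified model; the real scheduler predicate (MaxEBSVolumeCount and friends) does more bookkeeping, and only the 39 comes from the discussion above.

```go
// Sketch: hard-coded per-node volume-count predicate.
package main

import "fmt"

const maxAWSEBSVolumes = 39 // the hard-coded AWS limit mentioned above

// fitsVolumeLimit rejects a node when scheduling the pod would exceed
// the per-node attachable EBS volume limit.
func fitsVolumeLimit(podEBSVolumes, nodeAttachedEBSVolumes int) bool {
	return nodeAttachedEBSVolumes+podEBSVolumes <= maxAWSEBSVolumes
}

func main() {
	fmt.Println(fitsVolumeLimit(2, 38)) // false: the node is filtered out
}
```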
K: So this proposal is about dynamic attached-volume limits. Originally I proposed that there's a field we add to the node object, something like node.status.attachLimits, and there we keep the name of the plugin (we use this name internally, like a fully namespaced name that each plugin defines) and how many volumes it supports. So this is the change that we proposed and would have to do.
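A sketch of the proposed node-status addition as a simple plugin-name-to-limit map; the field name and shape come from the discussion above, not a final API, and the example limits (39 for EBS, 16 for GCE PD) are the defaults the in-tree code has used, shown here only for illustration.

```go
// Sketch of the proposed per-node attachable-volume limits field.
package api

// NodeStatusVolumeLimits maps a namespaced plugin name to the maximum
// number of volumes of that type attachable to this node.
type NodeStatusVolumeLimits map[string]int32

// Example of what a node might report:
var example = NodeStatusVolumeLimits{
	"kubernetes.io/aws-ebs": 39,
	"kubernetes.io/gce-pd":  16,
}
```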
K
We
have
to
make
a
change
to
CSI,
but
that's
unrelated
for
not
interested,
not
interesting.
For
this
sake,
then
we
have
to.
According
to
this,
we
have
to
make
a
change
in
scheduler
that
that
it
can
right
now
it
statically
checks
for
volume
types,
and
that
cannot
work
because
we
have
to
do
the
dynamic
chick,
so
we
will
have
to
load
the
volume
plug-in
managers.
That
means
the
syndulla
has
to
be
linked
with
all
the
all
and
plug-in
manager
has
to
load
editors,
dynamically
check,
which
volume
plug-in
is
being
used
at
the
part.
So.
K
So
thus,
thus
one
more
change
that
this
PR
proposes
and
then
and
then
like
third
change
that
we
have
is.
We
need
to
find
out
some
limits
like
for
CSI,
the
the
change
the
limits
will
come
from
CSI,
but
for
entry
volume,
plugins
I
propose
that
in
pkg
club
provider,
cloud
or
Co,
we
introduced
this
additional
interface
volume
limits
and
that
takes
cat
module,
a
midseason
function
that
takes
a
context
and
no
name,
and
it
transfer
a
map
like
this.
So,
for
example,
AWS
cloud
where
I
will
return
this
map
and
that
will
be
cool.
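A sketch of the cloud-provider interface floated above; the interface and method names follow the discussion, but the exact signature is an assumption.

```go
// Sketch of a per-cloud volume-limits interface for in-tree plugins.
package cloudprovider

import "context"

// VolumeLimits would be implemented by cloud providers that know the
// per-node attachable-volume limits for their in-tree volume types.
type VolumeLimits interface {
	// GetVolumeLimits returns a map of volume plugin name to the maximum
	// number of attachable volumes for the given node.
	GetVolumeLimits(ctx context.Context, nodeName string) (map[string]int64, error)
}
```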
K: So there's no volume-capacity resource at the pod level. We do have the pod's container resources that we could potentially use, but it's going to be problematic. And the second concern, which was raised by Tim and a bunch of other guys, is that we don't want to modify the cloud provider, because cloud providers are moving out of tree anyway, so adding a new interface there will be problematic.
K
So
the
problem
is
that
we'll
have
to
put
limit
for
all
volume
in
tree
volume
plugins
and
that's
like
there
is
like
the
Elector,
all
of
them
at
least-
and
we
don't
want
to
put
limit
for
like
we
don't
want
to
stuff
12
objects
into
the
node
R
and
12
like
a
dictionary
after
element
into
the
node
object
that
may
or
may
not
be
used
in
the
in
the
code.
So
so
there's
some
discussion
on
on
those
two
lines
and
I
just
want
to
present
this
force
signal
and
what
people
think.
K: Right now the limits are cluster-level, not per-node, so it becomes a problem when workloads are scheduled on, for instance, C5 or M5 instances; AWS has been a notable example. The pods get stuck forever, they never start, and then we basically have to take that node out of scheduling.
K: Yeah, so if people have time to look at it, I would request that people have a look. And the second thing is the question of whether we can use capacity rather than node.status; that's something we have to decide, but this needs to be solved in 1.11, yeah.
B: I guess the difference is that this is trying to control something that's already in use today, whereas GPUs were introduced with this concept already in mind, so that's kind of where I see the distinction. Same with the max-pods concept: I think it was David I heard say that, in theory, we could have done a capacity count on pods and made each pod make a request for one pod, but that got complicated just from the history of the project.
K: Also, capacity constraints on pods are on a per-container basis, whereas volume usage is per-pod, so that's another concern that I had. There's no way to count it like that: the resources field is per-container, and there's no field in the pod to express capacity. So if you are going to do counting for the pod, then we'd have to pick one container and retrofit it, just pushing all the volumes that are being used in the pod into the first container of the pod.
M: My concern, to try to echo what Tim's concern was, is that adding a new set of fields for every new cluster resource or node resource that we have doesn't seem feasible. There are fields that exist to manage node resources within the node object, and the existing GPU design is already using that as an established pattern. Rather than create a separate new pattern for storage, first try and see if there's any way we can get alignment with that, rather than just creating a new set of fields for every new resource that we have.
B: It's like a chicken-and-egg in both directions, right? Just because I'm saying it, it's not clear that it would necessarily be used for CPU and memory, or even ephemeral storage; right, like, we've done ephemeral storage without this field. So it would have to be just like some other general-purpose counted resource that's not associated with the container lifetime, and I...
B: I would say my consensus is that this is different than GPUs and memory, and so I agree; I think that was a clarifying discussion. And then, really, I feel like if the value is invariant over the life of a node, then the write rate of that field is not too concerning. So then it becomes more of, to me, this feels like something that complicates scheduling, and the node is kind of just an information-delivery vehicle, but it's really the scheduler that has to do the more complicated work. Yep.
B: Okay, there are no other comments on this proposal, I guess. Speaking of which, we're at the bottom of the agenda, so the last item on the agenda was one that I'm sure gets everyone super excited, which was trying to write down what our charter is. Yeah, David, I appreciate it. So I've been working with Dawn in the background, and I guess I will share my screen with people.
B
Kind
of
reference
template
charter
that
basically
each
has
been
encouraged
to
respond
and
adjust
to
and
just
kind
of
enumerate
the
roles
within
a
sake
and
basically
the
process
by
which
the
sake
function
so
I
have
adapted
that
template
and
tried
to
apply
it
to
sig
note
and
I
appreciate
Don,
with
some
of
her
initial
comments
and
feedback,
because
that's
feedback
that
needs
to
go
back
to
the
steering
committee.
Because
in
some
of
these
cases
we
want
it
to
be
consistent
across
the
project.
B: So there's an opportunity for folks to be a technical lead but not necessarily be a SIG chair, and so the technical lead role would be new to the SIG. By convention, from what the steering committee had pushed out, the initial list of technical leads would be seeded by existing chairs. So if folks are interested in being a technical lead for SIG Node, I think Dawn or myself would be interested in knowing that.
B: So please reach out to us. Basically, a technical lead should be someone who's well seasoned in the project and has a lot of breadth of experience, and they have some unique powers for subjects that we've been discussing over the last couple of months, especially with respect to subprojects. I think there have been discussions about how we can get a new subproject sponsored by SIG Node, whereas previously there was the incubator process in the project; that's kind of been deprecated, and now there's a per-SIG subproject process.
B
So
the
the
major
I
guess
power
that
I
would
see
that
a
technical
lead
would
have
in
the
project
is
that
they
can
sponsor
the
establishment
of
new
sub
projects
and
then
potentially
mission
existing
projects.
Now
this
isn't
a
clarifying
item.
We
need
to
kind
of
work
across
the
project
on
like
what
the
process
would
be
to
decommission
like
in
theory.
It
would
be
different
if
if
the
project
was
completely
unmaintained
and
not
healthy
and
and
it's
possible,
we
actually
have
some
sub
projects
and
take
note
that
are
like
that
course.
B: ...a project that is healthy, actively maintained, and has users; so we kind of have to work through the edge cases on that. And then the other major thing would be that technical leads within the SIG could sponsor the formation of a new working group. So if you wanted to spin up a new working group or decommission it: typically working groups are supposed to span SIGs, so I guess in theory a technical lead from each SIG agrees to sponsor the group.
B
Otherwise
folks
can
read
through
the
details.
I
think
I.
Think
there's
proposed
numbers
for
the
number
of
technical
leads
that
that
one
might
have
in
the
sig
and
then
there's
an
additional
role
for
sub
project
owners.
I,
don't
know
how
much
people
have
been
following
the
governance
updates
in
the
project,
but
basically
a
sub
project
owner
would
be
someone
listed
in
the
top-level
owners
file
of
the
project
and
and
they
basically
Shepherd
management
of
that
individual
sub
project,
and
so
like
today
are
supported.
B: node-feature-discovery and node-problem-detector; there are probably a couple I'm missing off the top of my head. And in theory you can be a subproject owner without necessarily being a technical lead in the SIG. So you could drive the development of your component, but not necessarily sponsor the formation and development of new components.
B: So that's kind of the laddering system that's been proposed in the community, and folks can look through and see what the responsibilities would be. And then, generally speaking, we're just trying to codify best practices within the SIG. I think we've done some of this today, actually, which is good: talking about how we propose and make changes. I think generally there's been a push to try to make broad changes more visible across the community, and the KEP process would be a way to do so.
B
Look
at
a
cross-section
of
charters
that
have
been
authored
across
the
various
SIG's
and
and
try
to
see
if
there's
any
patterns
or
commonalities
and
on
what
we
want
to
improve.
But
hopefully
there's
nothing
too
contentious
or
crazy
here.
But
feedback
is
is
very
much
appreciated
and
any
questions
or
comments.
B: Okay, I know this isn't the most exciting topic, but the next step on this: basically, I think there's a member of the steering committee that will be paired up to review each SIG's charter and work it through the approval process. So anyway, I think that fills our time. I don't know, Dawn, if you have any parting words for the SIG, or things we want to bring up that weren't raised? No.