From YouTube: 20210812 SIG Arch Community Meeting
A
All right, everybody, thank you for being here. Today is August 12th, 2021, I believe, and this is the Kubernetes Architecture community meeting. I will put up the agenda.
C
Basically, it turns out that people are trying to delete and modify a bunch of alpha metrics, so it's turning out not to be sufficient for the kind of stability that we want. So maybe...
D
To give a little bit of background: we currently have two metric stability classes, alpha and stable, and the number of stable metrics you can count on your hands, so the vast, vast majority of metrics in kube are considered alpha. But that doesn't necessarily mean they are all really alpha. Some of them have been around for ten-plus releases and are still marked alpha, so people depend on them; that doesn't necessarily mean an alpha-specific deprecation policy should apply to them just because they've been around for ten releases, but we don't really have any way to forcibly mark them as "actually, you can rely on this thing, it will continue to exist," because the entire world is divided into the very small subset of things we said are definitely stable, and everything else.
C
And there's also stuff like scalability: it relies on a certain number of metrics, and basically, if those things change, then scalability tests will start failing. And I am pretty sure that not all of the metrics that the scalability tests rely on are stable. So there's that. So yeah, we're basically proposing to add additional phases to bring metric stability classes in line with feature stages.
C
When we brought this up with the reliability working group, they suggested that we basically stagger the stages one behind the feature release, because it turns out that feature releases don't actually require metrics until the beta stage. So at the beta stage you would introduce something like an alpha metric; when a feature went GA it would become beta; and then, maybe a release or two afterwards (that has yet to be decided), it would be promoted to stable.
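(For context on what a "stability class" is concretely: in-tree metrics declare their class in code via the component-base metrics framework, which is what the enforcement and documentation tooling discussed later hook into. A minimal sketch; metrics.ALPHA and metrics.STABLE exist today, while the BETA value is shown as the proposed in-between class, and the metric name is made up.)

```go
// Sketch of how an in-tree metric declares its stability class using
// k8s.io/component-base/metrics. ALPHA and STABLE are the two classes
// that exist today; BETA below is the proposed addition, shown as an
// assumption. The metric name is hypothetical.
package main

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

var exampleRequests = metrics.NewCounterVec(
	&metrics.CounterOpts{
		Name:           "example_feature_requests_total",
		Help:           "Requests handled by the (hypothetical) example feature.",
		StabilityLevel: metrics.ALPHA, // introduced while the feature is beta
		// StabilityLevel: metrics.BETA, // proposed: set when the feature goes GA
	},
	[]string{"verb"},
)

func main() {
	legacyregistry.MustRegister(exampleRequests)
	exampleRequests.WithLabelValues("get").Inc()
}
```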
A
So if a metric is stable - and forgive me for not being up to date on all our policies - if a metric is stable, I guess we have to be careful about which metrics we make stable, because of exactly the question you just said. If they're too detailed an instrumentation of the existing code, then they could go away; that could hamper our ability to change the implementation.
C
Yeah, currently the ones that we have promoted to stable are basically SLO-type metrics: API server requests, latencies, stuff like that. Basically the stuff that people are using to determine SLOs and SLIs.
D
I think one of the problems right now is the granularity between stable, which is a very, very high bar, and alpha, which is basically not a bar. There's too much of a gap between the two, and so what keeps happening is people keep trying to remove metrics, and SIG Instrumentation says: hold on, hold on, you can't remove that metric, it's been here for five releases, you have to deprecate it first. And they come back a little bit with: but it's alpha.
D
It's like, okay, but there are different flavors of alpha at play here. So maybe we need a better way to mark that.
A
They're sort of outside of that entire "don't build your dashboards and your SLOs and SLAs on these," but they are maybe useful for debugging, effectively, is what you're saying? Yeah? Basically, okay. I personally don't have any problem with that. Any other thoughts on that, folks?
C
Yeah, this would theoretically tie into enhancements, potentially, since we mandate or pseudo-mandate metrics at the beta stage, so there would be some impact there, which is why we thought it was important to bring it here.
D
I think generally a rule of thumb saying that when something graduates it should have a beta metric, assuming that you want users to rely on that metric and the metric is reliable. I don't think it should necessarily be a hard rule, but I think we should strongly encourage people not to perma-alpha metrics.
E
Sorry, that was what I was trying to suggest. I'm wondering if we want to introduce something similar to "no perma-beta" for APIs: so "no perma-alpha metrics" or something like that. Either you promote the metric to beta within, I don't know, two or three releases, or you remove it, or something like that.
E
Yeah, yeah, internally, I agree. But those that we expose for users to track whether something works, or something like that - it's not purely for debugging, I think.
D
We don't differentiate between those right now, so at least for me, as an end user, I would be confused, because all I see is metrics; to me there's no difference whether they're considered internal or not. I think it's useful to be able to determine: okay, these ones, the beta ones, won't disappear tomorrow. That's good to know, and it'll also improve, I think, our documentation situation, where right now basically none of our metrics are documented, and we could potentially add enforcement for that. Right, Han?
C
Yeah, the basic idea is: we already have the static analysis tooling, so we can generate a list of metrics for alpha and beta. We would exempt the internal ones, but we would generate documentation for alpha, beta, and GA/stable.
C
No, no - we automatically generate the tags from the annotations for metrics.
A
So as long as we do that - then that works great for me, right? If I know, as a user, I should only rely on the stable ones, or maybe beta but be prepared for it to change, then I'm happy with it. I don't know whether we want to call them internal, but I want to be clear that there are some things that might disappear at any time, and aren't alpha in the sense that you can expect alpha ones to eventually, hopefully, migrate to GA, to stable.
B
Does this become mostly a documentation issue, where you really have to make it clear to users: this is what this means, this is what that means, so you can rely on them; and these are the class of metrics that we think you shouldn't necessarily rely on? Is this mostly a documentation issue, if they're already tagged?
D
I think both a documentation issue and maybe also just a naming issue. In the metrics we already differentiate: we don't call things alpha and GA, we call them alpha and stable. So maybe beta isn't the right name for that in-between stage; maybe we want to pick a better name that indicates it's not going to disappear on you, but it doesn't have to graduate to stable.
F
Yeah, it seems like there are two audiences here. One is developers making changes inside Kubernetes, and enforcing or describing what's in bounds: can you remove this thing, can you add or remove labels, what's in bounds? And then the other audience is the end user. So just as long as it's clear to both of those audiences what can be done, or what should be expected from a metric.
D
Yeah, I know it's tempting, I think, to follow the alpha/beta/GA, but we talked in SIG Instrumentation about how, assuming that we were planning on promoting these metrics to what we're currently calling stable, we wouldn't do that when we promoted the feature to GA or stable; we'd wait. So already they don't match up, and I think that using the same names is confusing. I think it's fine to call the first phase of metrics, the ones that don't have any guarantees, alpha metrics.
D
I think that conveys the right thing, but I think it makes sense to say we have this intermediate class, which we don't have a name for yet, and we have stable, and those are different than beta and GA.
C
I prefer the name "parity," and just describing it as lagging a release or two, but that's a personal preference.
F
Maybe it would be helpful to kind of sketch out the life cycle. It sounds like there are two types of metrics - ones we expect to progress, and ones that we expect not to progress - and then there are those two audiences. Maybe sketching out the lifecycle apart from names, and then we can think about what names would make sense.
C
Yeah, so in the reliability working group, the life cycle is basically: alpha metrics for the beta feature release cycle; beta metrics when the feature went GA; and then, a release or two afterwards, once the GA feature was notably stable, we would promote the beta metric to stable.
C
And this is because features do change, and as people use the feature, even if it is stable, there are bugs, there are things that change, there are dimensions that people didn't realize were important - and these are not immediately obvious when a feature goes stable.
C
I think this is not the right place to actually figure out the life cycle; I think the next step is probably to just write a KEP. It sounds like people are on board with increased granularity of metric stability classes, and that part seems uncontentious.
D
Yeah, I think we're generally in agreement; it's just a matter of details. I like that nobody's questioning whether or not we need something to reflect the large, wide gap between alpha and stable metrics that currently exists - now we're just talking about hammering out the details. So I think that's a great start.
A
Okay, thank you. Let's move on then, and we'll look forward to arguing over the KEP. Awesome. Then, let's start with the subproject readouts. Do we have Dims?
F
Let's see, do this on the fly, this will be exciting. So the main thing we've been focusing on in code organization is dependency pruning and visibility. There are two tools that we have been working on. One you will see running on pull requests, which gives visibility to sort of the before/after of the pull request.
F
So: before this pull request, here's the state of our dependencies - how many direct dependencies, transitive dependencies, max depth of our tree - and then, after this pull request, what's the state; and it diffs them. That makes it easy, when we're doing something that bumps dependencies or upgrades, to see really obvious problems, like: oh well, we bumped a minor version of this thing, but it pulled in 200 extra things. So it gives visibility to that.
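(The transcript doesn't name the tool, but the numbers being diffed - direct dependency count and total transitive module count - can be approximated from `go mod graph`. A rough, illustrative sketch, not the project's actual PR tooling:)

```go
// Rough sketch of the dependency stats diffed on PRs: direct deps
// (edges out of the main module) and total module@version pairs in
// the graph. Illustrative only; not the actual Kubernetes tooling.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("go", "mod", "graph").Output()
	if err != nil {
		log.Fatal(err)
	}
	direct := map[string]bool{}
	all := map[string]bool{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) != 2 {
			continue
		}
		// Only the main module appears without an @version suffix.
		if !strings.Contains(f[0], "@") {
			direct[f[1]] = true
		}
		all[f[1]] = true
	}
	fmt.Printf("direct: %d, total transitive modules: %d\n", len(direct), len(all))
}
```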
F
So we're using that when we do dependency reviews; that's been great. We're making some improvements there to get more fine-grained details, but that will help us keep things from getting worse. And then the other tool is one that is letting us put sort of a spec/status of dependencies we know we want to prune out, and keep us from increasing things, with links to those. These might be things with problematic licenses, or that aren't well maintained, or that we know pull in a bunch of transitive things, and that we want to work to reduce dependence on and eliminate. So that tool will have sort of a ratcheting effect.
F
We'll say: here are the things we would like to stop depending on; and then we can gradually remove our direct ties to them, and then over time remove indirect ties to them, and kind of reduce over time. Go 1.17 release candidate 2 is out; we're experimenting with pulling that in and seeing what that does. It looks like it changed some stuff with modules, and we're going back and forth with the Go team on whether that was intentional or not, but we're hammering that out. What else, work-wise? What else?
F
I'm sure I'm forgetting things. Oh - a big effort for the 1.23 time frame is to switch the build and code generation tooling from using GOPATH mode to using Go modules. We're doing that in pieces, kind of finding pieces of our build and code generation that don't work with modules and fixing those. The goal is to be done with that by 1.23, so that we're ready at whatever point Go drops GOPATH support.
A
The component working group is shutting down and handing off a repo to arch - or am I mixing things up?
A
A handoff. Like, as far as I know, I don't think we own a whole hell of a lot of code, and now we own a piece of code. So is that a code organization subproject? I don't know who's owning that, I guess.
F
I don't know. I mean, the mechanics of the component config stuff - the mechanics are API machinery, because it's just loading API types, in between them.
F
The use of it had sort of been distributed across the component owners: scheduling messed with loading scheduler files, node messed with loading kubelet files. As far as the consistency aspect of it, or sort of the central utility helper library stuff, that was the component config working group. I don't know if it would be code organization or API review, or if it would just sort of devolve back to individual component teams, like: this mechanism is here, here's how other groups are doing it. I'm not sure. Okay.
F
Yeah, scheduler and kubelet are the primary ones who were making use of it, kubeadm to some extent. Scheduler is making progress; kubelet, I hadn't seen a lot of progress; and the other components haven't started using it. You can't actually start the API server or the controller manager from a config file yet.
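(For readers unfamiliar with the term: "component config" means driving a component from a versioned config file instead of command-line flags. The kubelet already supports this via its --config flag; a minimal example file, with illustrative field values:)

```yaml
# Minimal (illustrative) KubeletConfiguration, passed to the kubelet
# with --config. The apiVersion/kind are real; the values are examples.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
maxPods: 110
```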
F
Let's see, all right. This is just sort of a survey: 46 API reviews completed for 1.22. That's a lot more than previous releases, but a fair amount of that is more issues or pull requests going through the review process. So that's good to see; it means we have visibility into things that previously were just sort of happening in one-off ways. Now you can actually look at a project board and see, oh, here's the doc changes and validation changes and the things that touch the API.
F
So I don't actually know that it's a significant increase in real volume, but it's an increase in visible volume, which is, I think, a good thing. I called out, or linked to, a few notable API reviews that are queued up currently. These are things targeting 1.23 - or, in the Gateway case, it's out of tree, but it's queued up currently. HPA v2 is one of the last big beta-to-GA graduations for sort of these perma-beta APIs.
F
So that's good to see; we're trying to get that reviewed and done early in the cycle. In-place pod vertical autoscaling is one that has been in the pipeline for a long time, and hopefully will make progress early in the cycle as well. And then the Gateway API is an API that started as an experimental API under the networking SIG and is looking to transition into the k8s.io namespace, so there's an API review associated with that. And then I wanted to mention a doc update.
F
This is something we're trying to do better at when we are doing reviews: basically, anytime someone hits an issue in a pull request, and the reason they hit it was because the documentation didn't help them, we're trying to follow that up with doc updates to say, here's how it would have been helpful to do this, so that we give people better guidance.
F
I think this area tends to be very, very bursty. So for the majority of the release there's actually not a big backlog, and then a week or two or three before code freeze the pipeline just explodes. So, yeah, during the last two or three weeks there aren't enough reviewers and approvers, and during the rest of the cycle it's okay.
A
Is it reviewers, or is it just approvers, or what? I'm just - is there anything we should do? We're about to start a cycle, right, and is there anything we should do? Maybe we can't do anything about this cycle. Is there anything we should do so that next cycle, especially if the reviews are increasing - yeah.
A
Is there anything we can do to encourage - there's a contributor summit coming up. You won't have to answer it here, but maybe you want to start thinking about whether it's manageable or if more needs to be done.
F
Yeah, so we've had the description of the reviewer shadow program for a while. Last release I actually started doing that with all the reviews I did personally, so I would encourage the other API reviewers to do the same thing. I have a bi-weekly slot, and I have people from different SIGs that I pull in, and that's where I do all the reviews that I do. And instead of me doing them and other people watching, we actually roundtable them and take turns taking lead on them.
F
So the things you see me commenting on - there's also, I think, Michelle from storage, and Aldo from scheduler - and this is deliberately away from API machinery. I'm trying to pull people from different SIGs and give them exposure across SIGs, and then have them take turns taking lead on reviews. So my goal there is to build the reviewer bench. Michelle has been doing a great job - she catches pretty much everything I would catch - but I would like to see more of that built across SIGs.
B
I would just consider - I'm not saying a deadline, but somehow articulating that the earlier you get your things in, the more you can guarantee that the API review is actually going to happen. Because what I see is people coming into the cycle very late, and, you know, teams wanting to review things and really doing their best, but that almost disincentivizes people from getting things in early. And I think trying to articulate that, you know, if you get things in four weeks before the deadline, or three weeks before the deadline -
B
Those are definitely going to get reviewed; we can guarantee that. But if you get things in a week and a half before the deadline, we can't guarantee it, and maybe a couple of those don't get their API reviews. Because there has to be an incentive for people getting them in early, because your timeline becomes so crunched.
B
And even if you have more reviewers, they might not be available at the very last minute to get through all of these. And I think trying to articulate that it's not guaranteed at a certain point - that you don't have enough time to sufficiently review it, because it's also about being able to sufficiently review things - is something to get people thinking about: that you're not an on-demand reviewer who has to jump when people snap their fingers at the last minute, but you're best-effort to get them in, especially the ones that come in early. Somehow communicating that at the beginning of a cycle could be really helpful to get people aware of it, because, for better or worse, I don't think anybody's really thinking about that, or thinking about you and the team of people that's looking at them, and maybe just articulating that would be really helpful.
D
So, Kirsten, one of the issues that I think I saw with node in the last cycle was that a lot of things that needed API review were potentially in early, but they weren't actually labeled with api-review. They just got completely missed until I went through - oh, these things need API review - and then flagged them sort of late in the release cycle.
B
Can we make that label mandatory, the way that we have the mandatory - sort of like the docs label, or, I forget right at the moment, but there are labels where you have to set it; you either have to say yes or no? If you have that, that could help, if labeling is the problem.
D
Yeah, so part of the problem is that the bot will flag things as API changes, but sometimes it's wrong. And I think that the API reviews don't actually get pulled in unless they have the api-review label; the bot will just label them kind/api-change.
F
Yeah, there's a per-directory label that gets automatically applied by the bot, and then things that actually require API review you have to manually opt in. I mean, on the one hand, if you're making API changes, presumably there was a design, and I think PRs that just sort of show up out of the blue actually need a design and buy-in from the area owner.
F
So ideally, if the subproject leads or the SIG leads know the API review process, then they should be able to sort of help coordinate that: oh, this is making API changes, we need to get this in early; once we're happy with this, we need to flag it. But I agree that should be communicated, probably at the beginning of every cycle - so maybe coordinating with SIG Release when they send out the "1.23 is open for development" announcement.
D
So this actually segues very nicely into the thing that I was going to ask, which is: I think that the average contributor - a new contributor - might not be able to accurately flag this, but I don't think it's an unreasonable thing to expect, say, a subproject owner to flag. And so I'm kind of wondering, something I was going to ask: the API review team right now is very small, and I think you mentioned Michelle earlier - what is the process to become an API review shadow? Because that would be something that I would potentially be interested in doing for node, and that's maybe something that we should try to recruit for across vertical SIGs.
F
Yeah, so in the OWNERS_ALIASES file there's a list of reviewers per SIG, and the process is: work with the SIG leads to put the people you want to be shadows in that list for your SIG, and those should be tagged in as reviewers for the packages for that SIG's APIs. If that's not the case, then we can fix that.
F
But basically we want a group of people that is small enough to actually concentrate, like doing work and getting experience, but big enough that, if there are people in that SIG, in that area, who want to get experience, I would expect them to be able to be included. So probably two to four-ish per SIG seems like a pretty decent target. Those aliases are what we use for the shadows, first thing. So if those are out of date, update those, or work with the SIG leads.
D
In summary - oh, sorry - in summary, the way to become a shadow is to update the OWNERS files to add the people who would be shadowing as API reviewers, not approvers.
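(As a concrete illustration of that summary: the per-SIG reviewer aliases live in the OWNERS_ALIASES file at the root of kubernetes/kubernetes. A hypothetical entry; the alias naming pattern follows the real file, but the handles are placeholders:)

```yaml
# Hypothetical excerpt of kubernetes/kubernetes OWNERS_ALIASES; adding
# people here as reviewers (not approvers) is how shadows get suggested
# on API changes for their SIG's packages. Handles are placeholders.
aliases:
  sig-node-api-reviewers:
    - example-shadow-1
    - example-shadow-2
```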
A
Okay, awesome. Thank you, everybody. I think the last item here on the agenda - Kirsten, it's you, for our enhancements subproject update.
B
So I have more of an ad hoc presentation. Until now I was doing enhancements and release team, and I'm not doing release team anymore, so I'm hoping to spend more time working on enhancements. And the one thing that I think is really important is just this background: there have been a lot of changes in the release process and the collection process and kind of the enhancements process, and a lot of the community has been responding to it and doing all the things that we've asked them to do, you know.
B
And, okay, 1.23: it's thinking about how we are going to enter into 2022, and I think that kind of approaching the enhancements process, or enhancements, on a larger level is also going to be helpful. So, I think, a theme that I'm considering is mostly...
B
But I think that the anchoring idea should be: hey, a lot of people have changed how they've done things, and they've been really cooperative, but there are some smaller things that we can do to kind of make the experience better. So, like, people are working on a KEP website, which is something that people have been asking for - an easy way for maybe external people to get an idea of what the KEPs are. Another person, Xander, has been -
B
He actually has a pretty cool update that he's been working on, just privately, to kepctl, that has some pretty great functionality. It doesn't change any process, but it's a tool that helps people kind of see the status of KEPs and milestones and everything like that, and that's great, because it doesn't impose extra process; it's really a value-add. Some of the other ideas that I've had:
B
One of the problems that I've seen is that we're not very clear on what we expect from people in their enhancements, and there's sometimes friction and confusion that people have when they come to the repo. So: auditing the KEP instructions. Are they clear? Are we really clear about what we're asking people to do? They're included in the template, so that's not necessarily completely ergonomic, and there are some points of friction in there where I've seen authors get confused over time. Another idea is asking SIGs, and also Architecture and other people:
B
Are there some canonical examples of good KEPs? That's a way to reinforce values about what we're thinking of when we say this KEP needs to be good: this is an example that you can also look at, instead of, you know, relying on your reviewers over and over, and really sort of sustaining "this is good to us."
B
We don't really have a record of changes right now, of what's happened in the past, and if we can improve that, it's a little bit easier. Because what's happening a lot is that people come back to enhancements maybe after a cycle or two, you know; they're like, okay, it's time - I've talked to the SIG, we're definitely going to move it to beta, we're definitely going to move it to GA. They don't really know that something has changed, though, and so they put a lot of work into the KEP. And then what happens?
B
This is what's new - for example, it's not new anymore, but what was new is PRR, production readiness review: if you don't have that, you really need to make sure that you have that. That builds a sort of proactive awareness for somebody who's going into the repo, as well as potentially providing, once we agree on what the KEP should be, a versioned template.
B
You know, a diff - because that's not actually easy. When people are looking at the template, you can say it needs to be updated, but the follow-up question is always: to what? What do I need to update? And we don't have a great way to demonstrate that to somebody. So these are not necessarily massive changes, but I think there are ergonomic changes that would address some of the friction that I see in the day-to-day.
B
That would make people a little bit more comfortable with the enhancement process, but also allow us to kind of circulate our values: why is this important, this is what we expect of you - on the front end rather than the back end, because people are not as amenable to things when they're a week before code freeze and they've worked on it, and now they're finding out that they did something wrong or there were other expectations.
B
A lot of changes - that's, at least for me, a theme going forward in 2022. And then the other sort of major thing is I'd really like to get a conversation going on a work-in-progress process KEP template, just to kick that off, because I think that the cadence KEP was a disaster, and part of it was because we didn't have a template. And I think the template - it's not necessarily the contents of it; a lot of this is about sharing our values and what we think is important. For large, far-reaching changes, the most important thing is communication: telling people, giving people the opportunity to reply, giving people an opportunity to know what's happening, and having that circulated far and wide, so that nobody's surprised at the last minute. And I think if we can kind of get that conversation going, and not really focus on, like, when is it going to be done -
B
That's also going to be great going into 2022, and hopefully, you know, sometime then we can have a template that everybody can use. But that's kind of the theme that I'm thinking of for the whole enhancements thing, as opposed to massive projects: just reducing friction, being empathetic - because I've seen this a lot now over a couple of years, dealing with the releases - and trying to be a little bit more proactive, so that people have a more comfortable experience with enhancements. And I think all of that kind of infrastructure will let us, then, you know, if we have to make changes to the KEP: we have a great way to talk about it with people, we have a great way to document those changes, we have a great way to tell the release team and the SIGs how to go forward with that. So, I don't know, that was a freestyle. That's where my head is at, and I hope that's okay.
A
No, thank you. I think what would be interesting at some point, as you develop that theme, would be maybe a readout next time on what we want to simplify, things like that.
G
I also wanted to make sure that we saw the notes for Kirsten's statement, and make sure that we heard that and it's visible. And if I missed anything - I tend to type in this stuff - I would encourage you, if I'm on a call typing fast, to fix my misstatements and inaccuracies.
G
Thank you so much, Kirsten, for your time and effort. I love that you have a theme going forward, and I think it's lovely.
G
For the conformance subproject: attendance on these things is very sparse, and that's okay, because it's only a few people so far that have the skill set to be able to do this, and we'd love to increase that. But we're also hopefully getting towards the end of filling the hole without digging it deeper. I think we're going to get through that, probably in 1.25, I don't know, somewhere in there. I'd love to be done with it in the next year.
G
We'll see, but we are on target for 1.23. Oh, so sorry - the monthly meeting: rather than meeting every other week, let's move it to once a month, and that way, for people that don't so often make it a priority to show up, we'll make sure that the agenda is full of all the things.
G
In 1.23 we're going to get below 75 endpoints without conformance; that means we just have 75 more endpoints to go. That doesn't include all the things that we put on the not-available-for-conformance list; we might revisit that, but let's do that in a conformance meeting when we get closer, to see what's left. We're at 99 - we've gotten below 100, and that's a pretty cool milestone. And there is an endpoint that we have lost conformance on. It happens when APIs change or somebody changes a library or a test, and so this one isn't being hit.
A
Okay, awesome! Well, that is our agenda for today. So thank you, everybody - really appreciate all the work everybody's doing - and we will see you in a couple weeks.