From YouTube: Kubernetes SIG Auth 20180321
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20180321
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B: Yeah, yeah, I mean [unclear] — it's varied, all the events tied to it according to a namespace — and with this demo, the tenants can use the service accounts to read this audience. The service account is required for this to be verified: your service account for the pod, and you use a remote user.
D: Sorry, a little [unclear] — I was just wondering if a service account can always access its own audience, or if the remote authenticator is only required for pods in different namespaces, or if it has to go through the remote service either way.
C: If this thing — or an external system — did this thing, that's going to cause pain in writing access control, auditing, incident response, and other areas. I don't know if we actually have any guidance that says anything like that. Maybe we should, but yeah — this kind of use of service account credentials makes me a little nervous.
F: It's at least a misnomer, because we have user accounts and service accounts; the times when a user should be assuming a service account role are probably only during debugging, in my personal opinion. I think it's reasonable to export service account credentials for things like Jenkins that run outside the cluster, but I would recommend that we don't use service account tokens as the credential format for that. But yeah, again, we do lack guidance right now, so it might be useful to clarify.
G: Yeah, so I tagged this as RFC because I don't necessarily know that this is the full direction we want to go, but I wanted to put it out to the community, because I think there are a lot of related efforts going on and I'd like to see those unify and at least have some common direction, if not merge everything into a single policy. But, kind of at a high level:
G: The observation was that there are a bunch of proposals out for limiting new pod fields in different ways, and these are currently being treated as separate, independent policies, even though they're all trying to accomplish a very similar thing: limiting what fields you're allowed to set on a pod — so, finer-grained authorization of pod settings. At a high level, this proposal is to create a new policy object and an associated admission controller, which would be responsible for limiting all pod fields.
G: Because this is such a big area — saying, you know, we're going to be the limitation device for the entire pod spec — I did want to call out a few things that were out of scope. The defaulting I think is debatable, but I would rather see that handled by a separate mechanism, maybe a PodPreset, and the design goes into a few reasons why below.
G: But also, I want to keep the logic on this simple and straightforward: whitelist/blacklist, pattern-matching kinds of restrictions. If you want to do something more advanced that calls out to external systems or various hooks and whatnot, then that can be a separate admission controller. So this is kind of just straight up.
G: But the idea — my thinking — was that basic kinds of "are you allowed to set a toleration that matches this kind of pattern?" checks — a basic pattern-matching approach — would be included in this policy. That lets you both have one place for setting that kind of policy and have some consistency around restricting scheduling decisions, but also, by pairing it with all of the other restrictions, it lets you do more elaborate policies, like "you're only allowed to schedule on this subset of nodes."
G: So, just a couple of the other high-level design points I wanted to mention. After looking at the different ways of defining policy and authorizing use of policy, I feel pretty strongly that the namespace level is the right level to enforce these things. I go into a few of the reasons why — and why other approaches don't quite work — above in the proposal, but since creation of an object is something that happens in a namespace, I think that's where policy should be.
G: So yeah, one of the high-level — kind of key — design points here is that you define the policy and you bind it to a set of namespaces, through either just a list of namespace names or a selector.
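The binding mechanics described here can be sketched as follows. This is a hypothetical illustration only — none of these names or structures are a real Kubernetes API; they just show the two selection paths (explicit name list vs. label selector) the speaker mentions.

```python
# Hypothetical sketch of a policy binding selecting namespaces either by
# an explicit name list or by a label selector. Not a real Kubernetes API.

def namespace_matches(namespace, binding):
    """Return True if the namespace is covered by the binding.

    namespace: {"name": str, "labels": dict}
    binding:   {"names": [str], "selector": dict}  # either may be empty
    """
    if namespace["name"] in binding.get("names", []):
        return True
    selector = binding.get("selector")
    if selector:
        # A label selector matches when every key/value pair is present.
        return all(namespace["labels"].get(k) == v for k, v in selector.items())
    return False

kube_system = {"name": "kube-system", "labels": {"tier": "control-plane"}}
dev_ns = {"name": "team-a-dev", "labels": {"env": "dev"}}

by_name = {"names": ["kube-system"], "selector": {}}
by_label = {"names": [], "selector": {"env": "dev"}}

print(namespace_matches(kube_system, by_name))   # True: matched by name
print(namespace_matches(dev_ns, by_label))       # True: matched by label selector
print(namespace_matches(dev_ns, by_name))        # False: no name or selector match
```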
G: And then the other important piece to call out is how policies are composed. If you have multiple pod restrictions that apply to the same namespace, how are those restrictions resolved? The two basic approaches are an intersection approach, which means that you must conform to every one of the policies, or more of a union approach — which is the direction PodSecurityPolicy takes — where, if you meet the requirements of any policy, you're allowed. The big advantage of the intersection approach is that policies can be composed.
G: Another point that was brought up is that you could have your generic, really portable restrictions in one object, and then the more deployment-specific pieces in another one, and by applying both of those to the same namespace you end up with a composed policy. The problem with this approach is if you want to have a namespace that's open — for instance, kube-system needs to be able to run privileged pods.
G: But then you can also have rules that open it back up, and so there's this binding mode, "allow" — which says, basically, if this pod restriction is running in the allow mode and you meet all of its requirements, then you're allowed. And so this is the example that's all the way at the very bottom, where you could have a number of really restrictive policies that apply to the entire cluster, and then just have this one little restriction running in an allow mode.
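A minimal sketch of that override behavior, under the same assumption that policies are simple predicates (hypothetical, not a real Kubernetes API):

```python
# Illustrative sketch: restrictive policies applied cluster-wide, plus
# allow-mode policies that open a namespace back up when all of their
# requirements are met.

def admit(pod, restrict_policies, allow_policies):
    # If the pod meets every requirement of any allow-mode policy,
    # it is admitted regardless of the restrictive set.
    for allow in allow_policies:
        if all(req(pod) for req in allow):
            return True
    # Otherwise it must conform to every restrictive policy.
    return all(p(pod) for p in restrict_policies)

not_privileged = lambda pod: not pod.get("privileged", False)
in_kube_system = lambda pod: pod.get("namespace") == "kube-system"

restrict = [not_privileged]
allow = [[in_kube_system]]  # one allow-mode policy: kube-system is exempt

system_pod = {"namespace": "kube-system", "privileged": True}
user_pod = {"namespace": "team-a", "privileged": True}

print(admit(system_pod, restrict, allow))  # True: allow-mode policy matched
print(admit(user_pod, restrict, allow))    # False: fails the cluster-wide restriction
```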
G: One other point I wanted to throw out — well, two questions, actually, that I left on the description of the PR. One is that this has a lot of overlap with another policy proposal by the Policy Working Group, and also some overlap with the Open Policy Agent admission control work, and so I kind of have a question of whether we even want to pursue these sorts of first-class, baked-in policies, or if we just want to defer to that kind of more abstracted, Open Policy Agent-type thing. And then the other —
E: You might want to require it to, say, select non-master nodes, whereas for the tolerations on a pod — like a pod that has no tolerations — you can deny by default and say you can't tolerate anything. That way, any node that is tainted in any way, you're not going to be able to go onto it, and a pod that wants to opt in to tainted — to different taints...
G: Yeah, so the approach there that I was proposing is: you would have this restriction that applies cluster-wide, which says pods cannot be scheduled on the master, and then, if you wanted to open up a specific namespace, you could, for instance, say you can schedule on the master if you have permission to create pods in this namespace and they conform to these additional requirements.
E: I think it works — I think it works well when you're talking about one field, but I agree that it breaks down when you then say: all right, well, we have 25 different aspects we want to control. Do we have 25 happy little individual components that each manage their own field in a nice way but then are a nightmare to make work together?
F: Yeah, this has been floundering for a couple of releases. It has been suggested in the past to drop the IP part — the fixing of the IP self-reporting — and to only focus on taint and label self-reporting. So the update to this drops all the IP stuff; I have it in a diff that I plan to introduce once this proposal is merged or agreed upon. It also fixes some of the last review comments, such as —
F: It clarifies some of the configuration around whitelisting certain labels and taints to be self-reported by the kubelet, and it discusses how we can remove the self-deletion permission from the node on the majority of Kubernetes deployments, which don't actually use that permission. So I just wanted to notify you all that this has in fact been updated. I don't have much more to say about it, but I will probably be adding this as an agenda item at the next SIG Node meeting, because it is a SIG Node proposal.
A: Awesome. Paris, are you here?
D: Yes — okay, cool, all right. Well, first things first: hi everyone, I am a co-chair of the Special Interest Group for Contributor Experience. We decided to do a road show of sorts: there are 37 SIGs and working groups at this point in time, and it is very hard for us to communicate out what we are doing — and all of you are contributors, awesome contributors to say the least. So we want to make sure that we do this on a regular basis, regular meaning at least quarterly.
D: This is a rather long one, because it's our first one and we've been doing a lot of work, so it's important for us to just get out and introduce ourselves and things like that. So what I'm going to do is — I'm not going to go through this entire block of text; I'm going to leave it with you. A lot of this —
D: — we are asking for feedback on, so that's why I'm not going to go through a lot of it and get feedback right now; but if we have time, of course, let's take the feedback now. First things first: we are trying very hard and diligently to make our automation and workflow and process for contributors the same experience across repos, so that we can (a) have a single point of truth and (b) —
D: So we've been working very diligently on things like defining commands and defining labels, both of which have links inside this document for your review. What we really want to know about a lot of these things is: (a) how are you using them, (b) are you using them, and (c) what are the pros and cons of using them for you? I'll just give a quick highlight example: the Help Wanted label.
D: A lot of new contributors complain about this label, because they feel like all of the Help Wanted issues are geared toward people who are already contributing to the project, and it's hard for them to jump in. Right now we don't use a "good first issue" or "new contributor" issue or label across the board — some people are using it, some people aren't — so it's: how can we get better in this area, and in the same kind of breadth, issue triage and issue management guidelines?
D: Again, the labels are coming up here. Labels are also something that is important to devstats. If you do not know what devstats is, it's the CNCF product for how we're collecting metrics and things like that across the project. So right now we're spending a lot of time there: we have a new product owner for this, who is going to be going to a lot of SIGs and asking them, what do you want to know about this project that's important to you from a metrics standpoint?
D: How can metrics guide what you're doing for SLAs, SLOs, etc.? So getting the labeling right, and the meanings right, is important for that, because we're going to be pulling some stats based on those labels. If we're not using the labels consistently across the board, or they don't mean the same things, then the stats could look different or mean different things.
D: Next: how do you find out about these changes? This is, again, one of the main reasons why we're doing these road shows — because we are turning things on and off in certain repos, and people have said that we aren't necessarily communicating these changes appropriately. So we laid out in our charter how we're actually going to be communicating these changes going forward. For all of you who are interested in these changes: we will lazy-consensus these changes out, with a time box of at least 72 hours, to the following mailing lists.
D: It'll be a GitHub issue link with a subject of "Notice". We're going to be sending it to the Contributor Experience SIG mailing list as well as the kubernetes-dev mailing list, and then we'll also be doing announcements at the Thursday community meeting as well. So that's kind of how we're going to be proceeding going forward with these changes. So I'll take a breath now: does anybody have any quick questions or concerns about automation, workflow, or process across the repos?
D: Yeah, consistency is definitely a big keyword for Contributor Experience this year, and I know that has pros and cons — there are certain repos out there that do things one way and like to do things one way, or think that it has advantages — so we're trying to work through those and trying to get everybody on the same page. On the charter point from above: the steering committee has released their charter info.
D: We know that this isn't a great project management tool, but it works for us at this point in time, so that we can just get our heads cleared. We also have a new version of the contributor guide that we would love for you to review and leave comments on. The developer guide is also on its way — we have officially broken ground on that work.
D: For both of these things, we would love for you to poke holes — especially in the developer guide right now, because there are different ways of doing things in certain groups, and this is actually the place to talk about that, especially API conventions and things like that. So again, please poke holes in these things. We want kind of a single point of truth, and one of the things that we are looking forward to doing is creating a contributor site.
D: It'll be something along the lines of contributors.kubernetes.io, and all of these documents will be surfaced there instead of on GitHub — that's a major activity that's underway right now. Also, mentoring — and this is contributor mentoring, not user mentoring. We've come up with several different strategies that really tackle time: we hear that no one has time to do a lot of traditional one-on-one mentoring, and that's perfectly fine, but we've come up with other ways for you to actively contribute in this area.
D: One of them is mentors on demand. This is like a one-hour ask-me-anything live-streamed YouTube session with five to six contributors, monthly. So you'll get on as a mentor and answer random questions, from "what's your favorite color?" to "what the heck is e2e testing, and why are my tests flaking?" to "why are you into Kubernetes open source?" So it's a really awesome, very quick way for you to get involved and to help make someone's life a little easier — instead of "read the damn docs," as we call it. Also, Outreachy.
D: We just completed our Outreachy pilot and it went well. We have two women who are now contributing to the project as active contributors for the CLI, and we actually pay them a stipend through Outreachy — Google and the CNCF had that covered last time. It is a semester-based program; this next semester has already started, but if your group is interested in getting into Outreachy for the fall/winter, let me know and I can tell you how to do that.
D: Group mentoring is my favorite right now, because group mentoring is something that open source is not really familiar with, and I'm trying to socialize that and see if we can make it a thing. Group mentoring is an awesome way to get multiple people up to speed using one mentor, so it's a scalable program, but it's a self-paced deal, and we go based off of the community membership guidelines. Right now we're running a test cohort for [unclear] and also the workloads API, where they have four-to-one mentors.
D: We give them a private Slack channel, and we give them workshops and time with maintainers, and it would be awesome to see this group get involved in this as well. So, for instance, if you need more approvers, we can do, like, four reviewers to one mentor — and the mentor does not need to be a lead or a chair or anything like that; the person just needs to be in good standing with your group and be at least at the level that these people want to get to, and we just self-pace them through right now.
D: Of the ten individuals that we have, at least four of them are going to be going into OWNERS files after their three-month period, which ends this month. So it's very cool, very different. And then we're also proposing a buddy program, which is very similar to one-on-one mentoring — except very different, because it's a one-time, one-shot deal for one hour. So you can either pair-program with the person or do code reviews with the person.
D: Ask them anything they want — anything that you think you need to get up to speed in that certain area. We're hoping to go live with at least a test for this in the next two to three weeks. So the TL;DR for mentoring is: if you're interested, I'd love to get you all involved and see where your mentoring needs are, so we can talk offline about that. And then the last item here — kind of a major item — is Slack. I know —
D: Lately it has been a topic of discussion as to whether or not we should be switching providers. I'm not going to talk about that right now, but I do want to tell you some stats. We onboarded 2,000 new people into Slack in the last 30 days. This is a huge deal for us. We now have 33,000 people in Slack, with 160 channels. It's a huge beast for us, and I thought the best way for us to handle —
D: — this is to onboard some of those important documents that you keep in GitHub as pinned notes: things like your charter, your meeting notes, your agenda, if you don't already have this. We also have set up some Slack guidelines, so feel free to view those, and also join the slack-admin channel if you need anything — that's new as well. We're always looking for volunteers for user office hours; that happens actually today, in, I think, an hour or two. It's talking to users, very similar to —
D: — our Meet Our Contributors format, if you've not been involved: people ask questions in Slack and Twitter, and contributors answer them from a user perspective. And that is it for me. Like I said, you don't have to give anything back now, but we would love to hear feedback — on our mailing list, via Slack channel, by pigeon, carrier owl, doesn't matter — just give us feedback on how we can help you better shape our contributor experience.
A: Let me share my screen again. So I put a little item on here that's kind of within this theme: over the 1.11 release cycle, a few of us would like to start seeing us more formally tracking the work that we're doing. This came up when trying to talk a little bit about this new subprojects thing that's coming out through some of the defining of what SIGs do.
A: Every SIG now has a list of subprojects. Whether or not that fits in perfectly with the kinds of work that we all do on a release-by-release basis is still sort of maybe up for debate, but I think defining more of what we do and trying to articulate the things to be done is something that we could do a much better job of than we did over the 1.10 release cycle. In particular, as we hit the release, it was pretty unclear — not just maybe the state of particular features, or whether they were alpha, beta, or stable — I know —
A: — one of them might have snuck into a blog post in a state that it wasn't in. But, importantly, we want to just do a better job of tracking what work there is to do for individual features and what work needs to be done. So a few of us have started to put together a document describing more of the many things that SIG Auth does: the things that need to be done,
A: the things that we've wanted to do but haven't been able to do. So I'll be sending out this document at some point between now and the next release planning meeting. But, importantly, I think one thing we do want feedback on is determining how everyone in this group wants to start tracking their work. So, if you haven't seen it: for every release, for the advanced auditing feature, Mick puts together this sort of very detailed bullet-point list of exactly what issues there are to work on and who owns them.
A: — one of the criteria for it to go to stable, and so on and so forth. It also is a good opportunity for us to acknowledge the work of our many contributors. So the thought that I want to leave with everyone here is: would we be interested in opening one of these for more of our features? Particularly, this would be something like the external client-go work.
E: Yeah, I mean, umbrella issues sort of act as hubs that everything related to a feature mentions or references or links to, so you can get an at-a-glance view of what's going on in a release, which is essential. I think what happens a lot is that something is mentioned early on in a release and five people say, "oh yeah, we're really interested in that," and then the one person who was planning to work on it gets busy, and no one realizes that it didn't actually have any other owners.
G: I'm also a big fan of those umbrella issues for auditing. One thing that those do, which is different from some of our other kinds of feature-tracking umbrella issues, is aligning the work with releases instead of with feature milestones. So, for instance, in this one you see it's for the 1.11 release — even though graduating the API to stable is one of the targets, this isn't, like, "work to go to stable." And the reason I like that is —
G: I think that, you know, for the work to go to stable or the work to go to beta, if it spans multiple releases, those issues tend to get stale, and it's a little hard to tell what is actually going into a specific Kubernetes release. So this makes it really easy to track the progress and the status aligned with Kubernetes releases — at least until we have feature branches, if that ever comes. Yeah.
A: Okay, awesome. I'll be sending out that document sometime later this week, and hopefully we can use this as part of our 1.11 planning, which we should probably start doing next.