From YouTube: Technical Oversight Committee 2020/12/17
Description
Istio's Technical Oversight Committee meeting for December 17th, 2020.
Topics:
- Proposal for automating docs testing
- Management of Istio social media and LinkedIn
- Tags for control plane revisions
- Collecting feature status for fields (experimental, alpha, beta, stable)
A
Help you get that through, because it's obviously very important. Is Jacob on here, or someone?
A
So we have the upgrade working group. Josh, you've been somewhat intimately involved in this. Do we want to do this now, or do we want to wait till next year?
A
Okay, okay. Sam, Jacob, is that okay with you guys?
A
Okay, well, we can come back to that. Let's see, give her a few minutes. Let's do the revision tag API discussion.
F
Good time, all right. Would it be all right if I shared the screen? Sure, yeah.
F
Like a little bigger, okay. So, pretty much, for those of you who don't know, this is the revision tag proposal. The idea is shown in the diagram.
F
The idea is you label namespaces to have them point to tags instead of directly to revisions; then you can point the tags wherever you want.
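The indirection F describes can be modeled in a few lines; a minimal sketch, where the namespace, tag, and revision names are all hypothetical:

```python
# Minimal model of the revision-tag indirection: namespaces point at
# tags, and tags point at concrete control plane revisions. Retargeting
# a tag moves every namespace behind it in one step.

namespace_labels = {        # namespace -> tag it is labeled with
    "payments": "prod-stable",
    "billing": "prod-stable",
    "sandbox": "prod-canary",
}

tag_to_revision = {         # cluster-wide tag -> revision mapping
    "prod-stable": "1-8-1",
    "prod-canary": "1-9-0",
}

def revision_for(namespace: str) -> str:
    """Resolve which revision a namespace's sidecars come from."""
    return tag_to_revision[namespace_labels[namespace]]

# Promote the canary: one change retargets every "prod-stable" namespace,
# with no relabeling of individual namespaces.
tag_to_revision["prod-stable"] = "1-9-0"
```

Without the tag layer, promoting a revision would mean relabeling every namespace individually.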
F
The specific thing that I was asked to put on the TOC calendar is that there's kind of a long discussion, or debate, about how these revision tags should be created and configured: whether it should be an imperative API through istioctl commands, or a declarative API through Helm values and, by extension, the istio operator. I think there's merit to both sides. Does anybody have any strong opinions on this?
B
My first, initial opinion is that the mapping from a tag to a revision is a cluster-wide setting, and it shouldn't be on the revision itself, because then you can have conflicts, right? If three revisions claim the same tag, what happens? So if it's declarative, it shouldn't be declarative on the revision; it should be declarative in the cluster.
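B's conflict argument can be made concrete with a toy registry; a sketch under the assumption that a single cluster-wide map is the source of truth (all names are hypothetical):

```python
class TagRegistry:
    """Cluster-wide tag -> revision map: each tag has exactly one owner."""

    def __init__(self):
        self._tags = {}

    def claim(self, tag: str, revision: str) -> None:
        """Reject a second revision claiming an already-owned tag."""
        owner = self._tags.get(tag)
        if owner is not None and owner != revision:
            raise ValueError(f"tag {tag!r} already points at {owner!r}")
        self._tags[tag] = revision

    def retarget(self, tag: str, revision: str) -> None:
        """An explicit, intentional move of the tag (e.g. during rollout)."""
        self._tags[tag] = revision

    def resolve(self, tag: str) -> str:
        return self._tags[tag]

reg = TagRegistry()
reg.claim("prod", "1-8-1")
```

If each revision instead declared its own tags, two revisions could claim the same tag and the outcome would be last-write-wins, which is exactly the ambiguity being discussed.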
F
Okay, so I guess that was another question: if you wanted to make it declarative somewhere that's not a revision, not in the istio operator or part of the revision at install time, would you have to introduce a new CRD, or like a new config map, to store the association from tags to revisions?
H
Yeah, one thing I want to point out is that even if we configure the tags in the istio operator, which is scoped to a revision, it is by design not possible to have a tag refer to two revisions, because of how the tags are stored: in a mutating webhook that is named, in the design, based on the tag name. So if we had two istio operators, both trying to claim the same tag, whatever the last one applied would win; there wouldn't be some kind of battle.
B
Yeah, so purely from an API side, not from an implementation side. So, John, I agree with you on the implementation: yes, we can choose one. But from an API, it's better not to have that confusion if we can. My other comment is that we can layer imperative on top. So I think if you have either a CRD or a config map that is defining the mapping, you could have an imperative command.
F
Right, right, yeah. So today, in this proposal, these tags exist just as webhooks, like John mentioned, and the imperative command would just go in and modify those webhooks.
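The webhook mechanics F and John refer to look roughly like the fragment below; a hedged sketch, where the metadata name and exact fields are illustrative rather than Istio's actual generated output:

```yaml
# Rough shape of how a revision tag exists in the cluster: a mutating
# webhook whose namespaceSelector matches the tag label, and whose
# clientConfig points at the tagged revision's istiod injector.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-revision-tag-prod-stable   # illustrative name
webhooks:
  - name: namespace.sidecar-injector.istio.io
    clientConfig:
      service:
        name: istiod-1-9-0        # the revision the tag currently points at
        namespace: istio-system
        path: /inject
    namespaceSelector:
      matchLabels:
        istio.io/rev: prod-stable # namespaces opt in via this label
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Retargeting the tag means rewriting `clientConfig` to point at a different istiod service, which is what the proposed imperative command would do.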
F
But I think one thing that John mentioned in his comments on the doc is that this would make life kind of hard for users who already have Helm integrated into their CI/CD pipelines, and the istioctl commands would not integrate as well. Is that accurate?
H
I am 100% fine with also having an istioctl command, and I'm starting to think that that is actually the right solution, because there are a lot of users, like if we have external istiod, those users don't want to use Helm because it's all managed for them. So they're perfectly fine with running some imperative command, maybe. And then there are the other users who have just a single cluster, and they want to manage everything through their CI/CD pipelines with Helm.
H
I think it's also important to see where we'll be in six months, or nine months, or long term, and having everything go through a gateway, I think, is non-controversial, because that gives us maximum flexibility. We can do it per workload; we have a lot of benefits with that. Having a command that is doing more than just applying the labels, that is, checking telemetry, doing gradual rollout, and kind of orchestrating the switch, is also something that is worth considering.
G
If we proceed, then, you know, we are back into networking space. I mean, you have a host name, you can match on the caller and send your routes, and you do whatever you want with virtual services, and twenty percent of workloads go.
L
That's all right, I'm just wondering, as a caution: in that world, where you're talking about a control plane operator really managing the shift between versions of a control plane, is there any role for an application operator driving a switch between control planes?
B
I think we need to separate those two, right? There might be two levels here. The operator of the control plane might be making a change to some particular managed control plane, right, they're updating it or whatever. But separately, the mesh admin may be rolling out a change to their mesh that they do through a canary of changing revisions, right? So I think.
B
But we should not mix the rollout of a new control plane configuration with the revision of individual CRDs, because that's very important and we definitely need to discuss that second part, but I would not mix it with restarting the Istio revision and so on.
B
A declarative way of doing things is what you're saying? So, I'm saying I don't think so. I don't think that the declarative place for this is on each revision. I think this has to be per cluster, right? This is a global cluster setting, the mapping; it's not a revision setting, right? Yeah, I would prefer having just a config map to start, maybe, but some resource that can be controlled by a CI/CD, that does the mapping, and then istioctl is just mutating that.
I
And today, in 1.8, they already have a resource that they control, which is a mutating webhook, and istioctl is actually just going to change this mutating webhook in a controlled way. We can introduce another one, but if we have one already that can be modified by a CI/CD system, why do we need another one?
H
Of permissions, and that is, yes, I agree with you: we do already have global resources, like the validating webhook; there's one in the cluster and it's controlled by Helm. So if we're going to stop using Helm as the way to configure those, then I think we should be consistent, and I'm not sure that there are any alternatives for someone that's deeply integrated with Helm. I think they want to use Helm to configure everything, and I'm not sure why these would be any different.
I
Remember, Helm 3 is taking a very strong stance against, you know, doing global stuff, cluster-level stuff. I mean, they don't even allow you to put labels on namespaces or touch namespaces. So I would say that if we follow the philosophy of Helm, we treat cluster-wide resources separately.
O
If the custom resource didn't have a whole list, but just had the pointer for the namespace that it was in, then it would work well with kubectl get and things like that that produce nice custom lists. If you put a whole list in a single CRD, it's hard to display that list with kubectl.
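The per-namespace shape O describes might look like this; a hedged sketch, the group, version, kind, and field names are all hypothetical, since no such CRD existed:

```yaml
# One small object per namespace, each holding only a pointer, so
# `kubectl get` across namespaces produces a clean one-row-per-entry list.
apiVersion: install.istio.io/v1alpha1   # hypothetical group/version
kind: RevisionTag                       # hypothetical kind
metadata:
  name: default-tag
  namespace: payments                   # scoped to the namespace it governs
spec:
  revision: 1-9-0                       # the single pointer
```

The alternative, one cluster-wide object carrying the full tag-to-revision list, keeps everything in one place but doesn't render as a list under kubectl's default printers.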
H
Yeah, the concern I have with making a CRD is that something has to turn the CRD into a webhook; the only thing that really matters at the end of the day is the webhook. So what is going to turn the CRD into the webhook? Either the user now has to manage multiple different resources they pass to istioctl.
B
Maybe a compromise here is we just start with the imperative command, have it just mutate the webhook, and then see if users need us to give them more declarative control. And then we can debate whether we add that through a separate thing, or directly on the istio operator spec, or somewhere else, or in Helm values, or gateway.
H
So gateway, I think, oh, I think that's very tricky; that would need its own design, and I'm not convinced it's feasible. But, Sven, I mostly agree with you. One concern I have is that we have an existing bug in install where the validating webhook is completely broken if you use revisions, because it points to the old istiod name. So we need some way to... this was the origin of why we wanted to do this, right?
F
Yeah, I think that's what it would be if we just kind of put it in the operator and then had it generate a new webhook, you know, as part of the install. One of the problems is that you could go back and forth: whichever one was installed last for a given tag would be the one the tag points to, and that's kind of confusing semantics, I think.
B
I think we should probably table this discussion and come back to it. I guess, where are we doing this? Is this in Environments? We're discussing this mostly under Environments, because I think we do need this, to define exactly what the API is here.
F
Yeah, so my thought on this, and I'll say this really quickly, is that for istioctl, the imperative API, what the command would look like wouldn't really change; what a user would do wouldn't change. Maybe the backing, whether it's a config map or a mutating webhook directly, might change implementation-wise. So it's pretty low commitment to put in an experimental istioctl command and let those users start using the revision tags, as opposed to adding something to the istio operator, which is more official.
M
All right. So I think this may not be news or a surprise to everyone here: the testing we go through, and the challenges we go through, during every release. There were small modifications which we did to improve on the testing sheet and remove some of the confusion.
M
The current status of the test cases which exist today is altogether approximately 150 to 200. The P0s are 65 out of all of those. Now the biggest challenge which happens during the releases is that only a small percentage of that is automated. It's very hard to get the manual testing done; it's hard to get the ownership. I mean, it takes some time. The P1s and P2s, to be honest, never get tested, because we don't have time.
M
So all we rely on is the manual testing, most of it on P0 bugs, and then, you know, it's a constant struggle in every release. Sometimes we have to delay the releases because of it. We also had instances where the testing was done and some issues were found, whether in upgrade/downgrade or others, where we had to delay the release because of the fix we had to work on. So these are some of the challenges.
M
There was another thing which Lynn suggested: that the prioritization of the ownership is listed in there, so the working groups pick up some of them as a default and can help us review. And then there was a third one which Josh suggested: rather than having two sheets, let's use one sheet, so it reduces the contention of having multiple sheets and figuring out who is doing what kind of testing.
M
Having said that, we have around seven to eight working groups, but we are only considering five working groups here; we are leaving out docs, T&R, and the product security working group. With that map, we should be able to cover all the test cases, whether P0, P1 or P2 priority, in a year and a half. However, the only caveat is that we are not adding any more manual test cases as new features get added.
M
All the docs test cases should be automated, as a criterion to have a feature promoted to alpha or whichever state it is in. So the proposal, you know, short term and long term, is to take some of those test cases and get them automated, because it's really getting tricky to get the community testing done every release.
L
There are some of our docs, historically involving multi-cluster and things like that, that were a little bit more involved and difficult, but I think a lot of that's been cleaned up.
A
Yeah, that's what I thought, right, that we had made a lot of progress on some of the more difficult scenarios. The install experience is easier, the configuration experience is more lean, and we actually have functional tests that cover it, right? So the options would be: either you have a test that's for the doc, or there's a test that already exists that substantially covers what the doc says but doesn't line-for-line follow the doc, right, but it can be manually reviewed without having to be manually tested, because the test is effectively testing the same thing, and so we just link to a test. And then the third case, which John mentioned, is the doc that's effectively untestable, and it's just marked, right, as you know: the doc requires manual review before the release to make sure that it's still sane.
K
Yeah, so basically, if you have any feature that needs to be promoted to alpha or even beta, I believe it's required to have automation for istio.io; I mean, I recently went through this myself with the external istiod control plane as well, so there's no exception. So if you have any in the roadmap that's marked as part of the promotion, this will be part of it, basically.
A
So one of the things I suggested was the ability to link a doc to a test, as opposed to requiring the doc to use the new test framework, as a way to at least get some notion of coverage into the doc based on existing work. Because if you can't link the doc to a test that's testing the same effective feature, right, then there's clearly a problem. That's kind of an intermediate way to get coverage.
A
Right, they could go from P0s to P1s, right, and if there are things that have nothing, then they're still P0s, right? Yeah. It gives us a way, and, you know, maybe the correspondence between the test, well, nothing's going to be as good as a literal docs test, right? That's 100% coverage. But we could make an assessment of how good a job the test does at covering what's in the doc, and you could probably provide an indicator of quality there.
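The triage rule A outlines reduces to a small decision function; a sketch, with the priority labels taken from the discussion:

```python
# Toy model of the triage rule discussed: a doc with a dedicated docs
# test is fully covered; one linked to an existing test that covers the
# same feature drops from P0 to P1; one with neither stays P0 and must
# be manually qualified each release.

def doc_priority(has_docs_test: bool, linked_test: bool) -> str:
    if has_docs_test:
        return "covered"   # automated, no manual pass needed
    if linked_test:
        return "P1"        # partial coverage via a linked test
    return "P0"            # manual qualification required
```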
K
Yeah, totally. I was thinking, gosh, maybe there could be, like we discussed for a feature, how long can they stay alpha? Maybe for the doc it's the same thing, you know: how long can the page stay untested? And then, if they pass that window, they either need to seek an exception or the page will just be...
A
The folks who have been working on docs automation testing: this idea of linking to existing tests as a form of an intermediate indication of coverage, is that something for you guys, whoever considers themselves the owner for that?
A
So obviously this should come up in T&R, but let's make a note of it and let's try to find an owner, following up offline, Shweta. I think that's helpful, right, because it does help us correspond existing investment with the qualification process.
A
So we could say that the baseline is: every doc either has to be linked to a test, or have a test, or, if it has none of those things, it has to be manually qualified for the release. Basically, anything that's marked P0 has to be manually qualified, because it's got a gap, right; obviously, if you have a test, you're no longer P0. And then we could say, look, you must do one of either of those two things for the 1.9 release. Linking to a test should be quite cheap, right?
A
I don't think that would be overly onerous; it's less work than actually doing the manual qualification, so there shouldn't be any objection to that within the current 1.9 planning. Mitch, you raised that issue. Does that sound okay to you?
P
I would also go as far as to say, I mean, there are concepts and stuff that we've had trouble figuring out how to test, because it's just a single YAML argument or whatever; there's nothing to test there, and this kind of reinforces some of that with partial testing as well. So it expands on not just docs tests, but that's that.
B
Yeah, I think the point a few different people are making is that, you know, clearly this is not good enough, right? This is just a first step. This does not mean that the doc is correct and that the doc actually works. It just means that we have...
A
Right, so, yeah, but the tests have to have reasonably good correspondence with what's in the doc, right? If they don't have reasonably good correspondence, then, like, I think the point is that the working group leads should make the assessment about whether that association between the doc and the test is valid or not, right? Is it providing some coverage for what's described in the doc, and maybe an indication of what or how much coverage? Clearly, it's not a hundred.
K
Percent, yeah. We have to enhance the framework to have this partial passing status as well, and also the UI, to be able to reflect that correctly.
M
Okay, so while you are taking notes, I'll go to the next one. It has happened in the past that, you know, when we did the release announcement, either we forget to announce on one platform or, you know, sometimes on Twitter or some other. So there was some discussion in the working group leads meeting.
M
Yeah, once we have the recommendation or agreement, then we can look at more than one owner, because I think it would be helpful to have that, so that one person is not a bottleneck. I think then we can go into the next step of who can be the owner of this.
M
So, that is, I was about to say, following up on the theme, there was another question: can we have an Istio LinkedIn account? It does not exist today. I also like the idea, because there are many platforms where we do the announcement, but LinkedIn is not there, unless we are there and at least I'm not aware. So the question is: is it okay to have a LinkedIn account for Istio?
K
Does Kubernetes? Okay, that's a good question. I do not know, because I feel like LinkedIn is more for, like, people. I could be wrong, but just so you guys know, there is an Istio group on LinkedIn; I'm actually one of the admins. So occasionally people would ping me to join that group, and I've seen people like Solo publish a lot of stuff on that group. So that's a group for people to publish things related to Istio.
A
Yeah, let me, let me ask Craig, as our social media...
M
Okay, then, that was a reminder. As far as I know, all the working group leads have done it, but the last reminder: if you have not updated the features which are completed and mentioned in the Istio 2020 roadmap, please go ahead and do it by next week, because I'm going to take the spillover from the features which are not done to start working on the 2021 roadmap.
M
What is left behind, so literally what we are doing is the retro of the roadmap at the end of the year, which is kind of helpful, but not so much. So we're expecting that once the 2021 roadmap is ready, with the help of all the working groups and the TOC members, then we can review it every quarter, so that, you know, it has some agility and we can mark it, and then even have visibility into the promotion of the features.
A
Okay, so we had the tag discussion. Jason, do you want me to present the protobuf API feature status, and you can talk about it?
J
Yeah, sure, that would be great. Glad to be the last design review for 2020.
J
Yeah, so this has been reviewed in the UX working group, so what I want to do here is to kind of get agreement from the TOC, since TOC members are mostly API owners and this involves API changes.
J
So there are a few things that we have considered for the usage of this kind of feature status on protos. One is showing deprecated or alpha, early-stage API status for users when they actually use such an API, and notifying them in istioctl tooling like analyze, or even an extra tool doing that. And also, more importantly, the upgrade case, where users upgrade between Istio versions and they have a way to know what configuration they need to change.
J
So, for the design for proto: there are some designs below that will be shared with all other configurations, but the design particular to proto itself is to use custom options as the way to label these proto fields. The names we use we can change, but right now I'm recommending using istio_feature_status for showing the status, like alpha, beta, stable, and, as mentioned below, we'll add a dev for experimental features, and also we'll add istio_feature_name.
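A hedged sketch of what such proto custom options could look like; the option names, field numbers, enum, and message are all illustrative, since the proposal was still under discussion at this meeting:

```proto
// Illustrative custom options for per-field feature labeling.
syntax = "proto3";

import "google/protobuf/descriptor.proto";

enum FeatureStatus {
  DEV = 0;      // experimental
  ALPHA = 1;
  BETA = 2;
  STABLE = 3;
}

extend google.protobuf.FieldOptions {
  FeatureStatus istio_feature_status = 77001;  // hypothetical number
  string istio_feature_name = 77002;           // links field to a feature
}

message ExampleSetting {
  // Tooling can read these options from the descriptor to warn users
  // that they are relying on a non-stable field.
  string new_knob = 1 [
    (istio_feature_status) = ALPHA,
    (istio_feature_name) = "example-feature"
  ];
}
```

Because custom options land in the generated descriptors, client tooling can consume them without any separate registry.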
J
So this name will be a link to the actual feature that this field is linked to, and will activate the feature. It's a better way for users to know what features they're actually using and what the status of that is. Something to note is that the status we're labeling here is only on the field level, not on the level of the feature itself. So it's important to differentiate between the field-level status and the actual feature status.
J
The actual feature status will be tracked in the feature list, which we'll talk about below. A few rules I would like to note here: field-level status will always override message-level status. So an API owner or developer can label the status on the message level; for example, if they label ObjectMeta at the message level as alpha, then it means labels and annotations will be inheriting that. Quick question from Costin.
J
Does that feature status replace hide_from_docs? I actually don't know. hide_from_docs right now may mean different things: it may mean deprecated, it may mean retired, or something like we totally removed that field.
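The message-level inheritance rule described above can be sketched as a self-contained proto fragment; all option names, numbers, and the enum are again illustrative:

```proto
// Illustrative sketch of the override rule: a status set on the message
// acts as the default for all its fields; a field-level status wins.
syntax = "proto3";

import "google/protobuf/descriptor.proto";

enum FeatureStatus {
  DEV = 0;
  ALPHA = 1;
  BETA = 2;
  STABLE = 3;
}

extend google.protobuf.FieldOptions {
  FeatureStatus istio_feature_status = 77001;          // hypothetical
}

extend google.protobuf.MessageOptions {
  FeatureStatus istio_message_feature_status = 77003;  // hypothetical
}

message ObjectMetaExample {
  option (istio_message_feature_status) = ALPHA;  // default for fields

  map<string, string> labels = 1;       // inherits ALPHA
  map<string, string> annotations = 2;  // inherits ALPHA
  string name = 3 [(istio_feature_status) = STABLE];  // override wins
}
```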
J
Yeah, and the same for enums, the same thing. And also, right now, I'm thinking only one feature status can be provided for each field, and one feature name can be provided for each field. Going on to the feature catalog.
J
So that's the list of features that I've mentioned. On istio.io we have a list of features with feature status, but it's not something that we actively track or actually consume programmatically.
J
So basically, I think we want to build on top of that, and potentially in the future, not just for feature status, we will have other tooling, like docs testing and that kind of tooling, that will use that feature list to present information.
J
And regarding the feature status names, after feedback from the UX working group, for the feature status we'll have alpha, beta, stable, and we'll add a dev. And then I'm proposing to add a lifecycle kind of label to the feature status as well: basically, it describes deprecated or retired, so they can be annotated in parallel, meaning, like, maybe a beta field can also be deprecated or retired in the future. So, in protobufs, yeah.
J
Yeah, just going on to the client tooling: here are a few things that I think we will work on, or that potentially have impact on the client tooling, like using analyzers to show feature status and notify that there's an alpha or deprecated field, and docs can potentially use this.
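The analyzer-side consumption J describes could look roughly like this; a toy sketch where the field-status registry, kinds, and field paths are hypothetical stand-ins for whatever the proto options would generate:

```python
# Toy analyzer: given a per-field status registry (as would be generated
# from the proto custom options), walk a user's config and flag fields
# that are not yet, or no longer, stable. All entries are hypothetical.

FIELD_STATUS = {
    ("VirtualService", "http.mirror"): "alpha",
    ("DestinationRule", "trafficPolicy.oldKnob"): "deprecated",
}

def flatten(prefix, obj):
    """Yield dotted field paths for a nested config dict."""
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        yield path
        if isinstance(value, dict):
            yield from flatten(path, value)

def analyze(kind, spec):
    """Return warnings for fields with a non-stable status."""
    return [
        f"{kind} field {path!r} is {FIELD_STATUS[(kind, path)]}"
        for path in flatten("", spec)
        if (kind, path) in FIELD_STATUS
    ]

warnings = analyze("VirtualService", {"http": {"mirror": {"host": "v2"}}})
```

The upgrade case discussed later is harder: it needs the registry from two releases, diffed, rather than a single scan like this.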
J
In the sake of time, I would like to go to the timeline, and probably ask the TOC members. So basically, I think for the 1.9 release, I know we're close, but we want to get this out as soon as possible. So I think the first stage we kind of want to do is to use the existing APIs and label the feature status, meaning alpha, beta and stable, and then also label deprecated and retired.
J
Basically, it's kind of a cleanup of our APIs, and I would like to work with the working group leads on that. And then, in 1.9, we provide a client tooling, maybe that's in analyze, to kind of just notify users of these alpha and deprecated fields. And in 1.10 and onwards, the focus will be more on associated features, and also on upgrade cases; upgrade cases are more complicated than just showing pure feature status, as they need to know the state between each release and present the diffs to the user.
J
So, as for the TOC here, I think it will be to kind of approve this task list, then, on the doc, anyone interested, feel free to put your names on there. And also, as I work with the working group leads and push out PRs for the API changes, just to let you know that this is happening.
A
So one question I have, Jason: obviously we don't actually need to couple this work with releases, right? We can make incremental improvements to this in point releases, because it doesn't actually change any behavior in the product. Yeah.
J
I think that makes sense. I think the goal here is just, because the earlier the better, and users will be able to use this.
K
Yeah, okay, this looks really good. I just have a quick question on the first bullet, "label field feature status of the existing fields in the API": is there a procedure for that? Do you envision going through, like, Brian Avery's checklist for that, for beta? Or is it a fast-pass process for that?
J
I'm not sure about the list, but the work I'm imagining is just to work with individual working group leads and then kind of make them check if that's something they agree on and whether they intend to have the status.
J
Or, yeah, Brian, maybe you can comment on that list, so I can take that into consideration.
H
Jason, I had a question about the features. I think the status makes perfect sense; I was amazed that this wasn't actually already a thing. But the features: if we have one feature per field, it seems like it's very limiting. There are some fields, like a virtual service host, that apply to almost every feature, and then there are other ones where it's kind of just redundant, like the telemetry field of some API probably is the telemetry feature; it seems almost redundant.
J
For the initial release, I think we want to do probably a minimal one, with the less confusing ones, and then, because this is expandable, as we kind of go on, I think ideas might be like we can link multiple features to a field or something.
J
Yeah, so that's why I think I mentioned 1.10 plus; that definitely needs more work, kind of collaborating with the feature list then, yeah.
A
Yeah, I think this is great, so thanks, Jason. Cool, awesome. Yep, I'm done. All right, everybody, we are done. I'll see you all in the new year, and congratulations on everything that you've done in 2020.