From YouTube: Technical Oversight Committee 2021/06/21
Description
Istio's Technical Oversight Committee for June 21st, 2021.
Topics:
- Prioritization of Documentation Testing
A
So last year we got great guidance from the TOC members on that one-pager of Istio strategy for 2021, but we started very late. So it was a rush, which also helped for KubeCon, and it will also help to set up the roadmap for the individual working group leads. But I would like to start now that July is starting.
A
A
C
A
Yes, I mean, last year I started around October, it went up to November, and it was really very, very hard in December to even complete the one-pager before the Istio working group leads could provide the 2021 yearly roadmap.
A
So that's a great question. If we start now that July is starting, right, and we get that done by the end of August, that will be great timing for the working group leads to start their yearly roadmap. So it's not in the middle of the release, right, and they are done by the end of October or end of November, so that we are ready for the next year.
B
E
So this year KubeCon is a little bit early too. With the conference season, it might make sense to be a little bit earlier. I think the conference is October-ish, so maybe end of September is some good middle ground.
B
B
E
B
C
D
A
Late November is late for the working group leads, because that's the time of our release too. We normally release the first week of December, and then the last two weeks are holidays, so the working groups are never able to complete it by the end of the year. Ideally the things from the TOC and the working group leads should be completed in December, right, so we are ready for Q1.
G
C
E
B
We've had one release's worth of feedback on it — 1.1 releases' worth of feedback on our progress — so yeah, I tend to agree. Obviously it's not something we can start now, so I guess we'll have a third release. In August we would have had two-and-a-bit releases' worth of feedback, and a little more if we can deliver the roadmap in mid to late October.
D
A
C
So when I was creating this year's roadmap, right, it was mostly with help from other TOC members. For me, the UX working group feedback and, in general, some survey feedback and doing the empathy sessions made it clear that day-two operations was a problem, right, so all of that fit nicely into the day-two operations theme of the year.
C
A
So then, there are two things here, right: one is the straw-man bullet point sheet, right; two, what kind of surveys we need, right, because we still have enough time to collect the user feedback. Then we should start thinking, okay, what do I need to make sure that I have enough information to create this?
C
Yeah, I would go for another empathy session before I put any concrete thoughts down, if we can do that, because, at least for me — I don't know about the other two members, I'm guessing.
C
Helpful, and then the UX surveys — both of them were helpful, so we are continuing the UX surveys. We should plan some empathy sessions if we can have enough end-user feedback, and then, you know, collect those two and start putting our thoughts down.
G
G
G
E
G
Sorry about that. We do have — the first question on the survey is basically how much upgrading to 1.10 have you done, and the options are: some non-production or test meshes, some production meshes, or all of your meshes. So that should give some signal into whether it's sort of an evaluative use case or actually in production.
B
B
Yeah, I mean we want that after 1.11, on top of 1.10. I mean, 1.10 is a useful data point, but yeah, right, we're still making progress on these things, and then there's 1.12, right, which is the last release of the year, right. We can't write the roadmap having gotten feedback on 1.12.
E
Yeah, so if you want the feedback on the 1.11 — Mitch said he has about a dozen pieces of feedback.
G
B
I think that's right. I mean, Google will probably be able to help as well with feedback. We should have had some feedback from users at that point on the service mesh as well. It will be close, because we tend to release about a month after open source.
C
I
B
Giving time for the vendors to get feedback — I don't know how material it is; they tend to get very directed feedback if users have an issue.
C
B
E
B
E
Okay, yeah, I'm having some network issues today, I don't know why. So I was just saying, I think 1.10 is definitely a significant release.
E
B
We were thinking we would have HBONE, and that's probably not going to happen — sorry, BTS, which is now called HBONE, so people can understand what it means.
F
B
J
B
A
B
Yeah, I don't know if we'd get — Brian, to your point — CNI being promoted.
J
B
Right, I mean, I think that the last time we had feedback, right, it was significantly bifurcated, right. There were people who used Helm in part because it was the standard within their organization, so they had to use it, or they felt like they wanted to use it, and then there were people who weren't in that boat, and they were — yeah.
K
E
Yeah, I remember it was roughly thirty percent for each of them, and they were —
B
J
J
Was it opportunistic, or was it sort of a studied preference? And having one is better than two, all things being equal.
B
J
H
I'm not entirely sure that stopping both is necessarily a good idea. I mean, istioctl is really Helm with a wrapper on top of it. Our problem is not that we have istioctl — I mean, the operator API is probably the problem: the fact that we have two APIs and it's difficult to configure. But if we trim down the, you know, differences in the API surface, which we are trying to do, then Helm is Helm, and istioctl is a wrapper around Helm.
C
C
E
B
Yeah, okay, so we're still roughly going to target a September timeline.
B
E
C
But regarding one thing — that's where I was going — I think for those new APIs, I mean, it's great you're releasing them, but expecting an end user to start using them is very difficult for them, right. If they're using Istio in production for a few releases, they won't just go and configure these alpha APIs.
B
B
Okay, we should probably move along.
F
F
F
So for the past couple of releases — I'd say back since, I don't know, 1.5 — we've tried to get the docs working group and the working group leads on board as to how to prioritize doc testing for different releases: what's P0, P1, P2, etc. And it's always a challenge to get everybody on board. So this attempts to codify that in a way that we can automate it and just give things a score. So Eric and I discussed what we felt was important.
F
We brought it before docs; we brought it before steering. Steering said that the TOC should make the decision, as working group leads for docs, as to what they thought of it. So if you scroll down to the design ideas.
F
The things that we thought were important are: what areas of the documentation have changed, and how much — greater changes in documentation are more likely to need more testing, if we can do it; what areas of the code being touched by this documentation have changed, and how much — Go testing has a trace command, I haven't tested it, but that might give us some sort of an idea there; and does the document already have automated tests?
F
If it already has automated tests, it might not be as important to manually test it; and what is being accessed most often by users. Again, if the top two are fulfilled, then it's still important to consider the bottom two, but likely not as important.
G
F
And then we thought of a bit of scoring there. So taking this prioritized list and deciding a number of stars, or whatever unit you want to give it, with 16 stars being assigned to the top one, 12 to the next, eight to the next, and then four to the bottom, and collating all these scores together in order to produce a final prioritization for the doc; and then we create a spreadsheet off of this with all of the current documentation.
F
B
F
I'm not sure I understood the question, but I'll answer what I thought I understood from it. So, for instance, the "what areas of the documentation have changed" would receive 16 stars, and the "what areas of the code have changed" would receive 12 stars, based on the number of lines changed for both of those; then eight for the next one — if it doesn't have any testing, it receives the full eight stars; and four for the next one, divided by page hits. And then these would all be added together.
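(For reference, a minimal sketch of the weighted scoring described above. The struct fields, the churn and page-hit normalization, and the example page names are illustrative assumptions drawn from this discussion, not an existing Istio tool.)

```go
package main

import "fmt"

// DocPage holds the per-page signals discussed in the meeting. The field
// names and the normalization below are illustrative assumptions.
type DocPage struct {
	Name             string
	DocLinesChanged  int  // lines changed in this doc during the release
	CodeLinesChanged int  // lines changed in the code areas the doc covers
	HasAutomatedTest bool // whether the page already has an automated doc test
	PageHits         int  // how often users access the page
}

// ratio scales a raw count against the largest count seen, so each
// criterion contributes at most its full star weight.
func ratio(n, max int) float64 {
	if max <= 0 {
		return 0
	}
	return float64(n) / float64(max)
}

// score collates the four criteria with the 16/12/8/4 star weights
// mentioned in the discussion and adds them together.
func score(p DocPage, maxDocChurn, maxCodeChurn, maxHits int) float64 {
	s := 16.0*ratio(p.DocLinesChanged, maxDocChurn) +
		12.0*ratio(p.CodeLinesChanged, maxCodeChurn)
	if !p.HasAutomatedTest {
		s += 8.0 // an untested page gets the full eight stars
	}
	s += 4.0 * ratio(p.PageHits, maxHits)
	return s
}

func main() {
	pages := []DocPage{
		{"tasks/traffic-management/ingress", 120, 300, false, 5000},
		{"setup/install/helm", 10, 50, true, 8000},
	}
	for _, p := range pages {
		fmt.Printf("%-35s %.1f\n", p.Name, score(p, 120, 300, 8000))
	}
}
```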
F
B
I actually meant more about the roll-up — so what categories do we roll up under?
J
F
It would — I guess the question is, I mean, if we know that the tests for that doc were updated in the release, is that enough for us to say this page doesn't need to be tested, or isn't as important to test? Possibly.
F
F
So I don't know — do you know, Eric? When you're looking at doc test PRs, are you comparing to the original documentation to see how thoroughly it's assessed?
K
F
So this is also for something that's already tested — presumably already tested. It's got a whole bunch of changes, and somebody's going back and updating that test, right, I mean.
B
B
It is. So if there's a high degree of variance between the docs and what the docs test tests, then it's not an effective test, agreed.
I
B
F
Yeah, that would lower the total score, and I think that should probably take into account code changes as well — meaning if the docs tests have been updated and there's a lot of code changes, then that also reduces the score.
F
F
Okay, I'll figure out how to account for that.
F
Yeah, so this is trying to get an automated way of scoring that just gives an accurate reflection of what needs to be tested. So we tried to think of what would be likely to affect a user's experience with the docs from one release to another.
F
So your thought is: what's most accessed is the highest prioritized, versus — yeah.
J
C
Based on access patterns of users, because that's the most critical thing. I mean, if we keep tweaking around some deep-down task that no one changes — sorry, that no one looks at — who cares?
E
E
The other thing I want to say — I think I agree with Niraj — those are the two most important things: whether the page is tested, and also, you know, how popular it is. And when we think about whether the pages are tested — Brian, to your point — early on, some of our tests may not exactly test what we actually tell users to do.
E
It would be nice to highlight a little bit on that, because some are done on purpose, and I know that because I wrote some of those tests, and we had to test slightly differently because the test infrastructure doesn't have, like, real hosts and such. So what we tell users is actually slightly different, on purpose, than what's been tested, so that might be useful information to highlight for the user.
C
Okay, thanks. So, Brian, just to, you know, make sure I give my feedback correctly: I think we can go towards something smarter, like a stack ranking with some good algorithm in it, but I would recommend we start with something simpler, see how it goes for a release, then tweak it. Going the other way is very difficult.
B
E
F
Yeah, those are internal metrics, really. The top two are to find out how accurate the tests are — whether there's a likelihood of variance between the tests and either the code or the documentation. But if we're checking when the test changes, that might be good enough.
E
Yeah, I think the variance — to indicate where tests are a little bit different — that's valuable to the user. You know, when they find out their stuff doesn't work and we tell them that we did test it a little bit differently, and maybe we can even point a link to them: this is what we test. I think that could be valuable.
F
E
F
B
Okay, I don't know if that actually makes your life any easier, Brian, but I think just, yeah, a ranking based on usage and then, you know, effectively bucketing by whether they're tested or not, and just ranking within that.
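(A hedged sketch of the simpler ordering suggested here — bucket pages by whether they have a doc test, then rank by usage within each bucket. The struct and page names are again illustrative assumptions, not an existing tool.)

```go
package main

import (
	"fmt"
	"sort"
)

// page carries only the two signals this simpler scheme needs.
type page struct {
	name   string
	tested bool // covered by an automated doc test
	hits   int  // how often users access the page
}

func main() {
	pages := []page{
		{"tasks/security/authorization", false, 3000},
		{"setup/install/istioctl", true, 9000},
		{"concepts/traffic-management", false, 7000},
	}
	// Untested pages come first; within each bucket, more-visited pages
	// rank higher.
	sort.SliceStable(pages, func(i, j int) bool {
		if pages[i].tested != pages[j].tested {
			return !pages[i].tested
		}
		return pages[i].hits > pages[j].hits
	})
	for _, p := range pages {
		fmt.Printf("tested=%-5v hits=%-4d %s\n", p.tested, p.hits, p.name)
	}
}
```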
K
F
Yeah, I do think that's important. Like, I know a lot of the concept stuff varies just because it's a couple of example configuration elements and we write tests against them, but that's not going to match up with the written three-paragraph document, right.
E
E
Well, for the user it would be helpful, yeah. When they look at istio.io they can see, you know, whether the page is tested; if they follow a page through and find a problem, they can at least tell, you know: wait, this is not fully automated — okay, I see there's a difference between what's being automated and what I'm trying to achieve.
B
I'm not sure users care about whether it's automated or not. I think they care about whether it's tested or not.
F
E
C
I think we are parsing this a little bit from a developer point of view versus a user point of view. So, right, we are trying to say to a user whether we have tested it or not. Internally, right now, the indication of whether we have tested it or not only means automated testing, and that's fine. But as a user, if you tell me, hey, you have a doc and it's not tested, it's just bad, right.
F
C
Agreed. So there are pages that don't receive any testing, there are pages that receive some manual testing, and there are pages which receive automated testing. I think trying to parse these three things for a user is enough; there's enough noise there. If we make further classifications within automated testing, I think it's too much.
C
E
Yeah, that's a good point. It could be a little bit too much for the user, and it would only be valuable when they follow exactly what the page tells them to do and it didn't work for them.
F
As a user coming in here, if I come to a page — let's say, I don't know, a concept page or whatever — and I see, hey, this page is tested, and I go through it and try to use it and it doesn't work, I'm probably going to scratch my head for a little while and try to figure out what I did wrong, even though it could be a problem with that doc.
B
E
E
F
F
B
Okay, but — so, Brian and Robbie, do you feel like you have enough feedback?
F
B
J
Yeah, yeah, I saw it. I just wanted to say that there are a bunch of APIs and their implementations being dropped into 1.11, and having the API PRs sort of reviewed and ready to go earlier in the release cycle is nicer. So the PSA is that, yeah, if you see yourself tagged or anything like that on an API PR, please try to help move it along.
E
Okay, thanks — yeah, thanks for the reminder.
C
Yeah, before we go, I have one quick question on the API PRs. The comments in there tend to be a lot, and it's difficult to know which of those conversations are still live and which have been, you know, already resolved. Can the PR submitters just resolve the comments that you don't want me, or the other TOC members, to look at? That will help.
H
And if I can make an extra PSA, please: it would be very useful to have, you know, a doc with alternative implementations — what other people are doing, a comparison with other products — because it's always difficult to understand how we relate to other products and what we do and don't support.