From YouTube: Technical Oversight Committee 2022/11/14
B
There we go. Okay, so we have the three pending approvals for the TOC to complete, which I guess we can handle asynchronously.
F
Greg, I agree. Could one of you guys click that link there? It provides a pretty good summary of the issue that came up. So this is a user-submitted issue. Basically, it's related to default cluster selection in a virtual service when a Kubernetes service only has one port. So you have this Kubernetes service defined at the top with one port, 8080, and the issue is in terms of destination rules.
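For illustration, a minimal sketch of the kind of single-port Service being described; all names here are hypothetical, since the actual resource was only visible on the screen share:

```yaml
# Hypothetical single-port Kubernetes Service of the kind under discussion.
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  selector:
    app: myservice
  ports:
  - name: http
    port: 8080
    targetPort: 8080
```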
F
If you want to define a port-level policy for 8080: if you define a traffic policy with port-level settings and a traffic policy with non-port-level settings, they both end up pointing to the same generated cluster object in Envoy. And so, if you want to scroll down just a little bit more: basically, we define a virtual service and just rely on, say, routing to the default destination service. It points to this default outbound|8080 cluster, and since in this particular case the destination rule is applying a port-level consistent hash policy, which is applied at the route object, despite the fact that it's pointing to the right cluster, it's not pulling in the right consistent hash, which is applied to the route object. The easy fix is if they just add the port to their virtual service.
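A minimal sketch of the destination-rule shape being described, continuing the hypothetical service above (the host and hash header are illustrative, not taken from the meeting):

```yaml
# Hypothetical DestinationRule: a consistent-hash policy scoped to the
# service's only port. Because 8080 is the sole port, the port-level
# cluster and the "default" cluster are the same Envoy cluster,
# outbound|8080||myservice.default.svc.cluster.local.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8080
      loadBalancer:
        consistentHash:
          httpHeaderName: x-session-id
```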
F
We feel like this is just a simple lookup failure: you have a destination rule defined for this port, the virtual service is doing some default port selection, and it should be picking up the settings for this destination rule. But Zhang, who raised the issue, feels that we're kind of, I don't know, patching leaks as they pop up and just relying on this default port-matching behavior. And so the discussion is: do we just keep going forward and patching these leaks like this as they come up? Or do we need to come out and say, in this particular case for example, that this port-level policy should not apply to all default traffic to the mesh for that host? Did I explain that pretty well, or are there any questions?
F
So if this consistent hash policy were stored at the cluster level in Envoy, it'd be fine, because the virtual service generates the right cluster name and points to that one, and normally that would work out. But since consistent hash is something that's stored at the route object level in Envoy, it's not actually selecting the right policy to apply in that default port scenario.
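To make the route-versus-cluster distinction concrete, here is a hypothetical fragment of the kind of Envoy route configuration (RDS) involved; in Envoy, the consistent-hash key extraction hangs off the route action, not the cluster:

```yaml
# Illustrative Envoy RDS fragment (not actual output from the issue):
# hash_policy is a property of the route action, so a route that merely
# points at the right cluster can still miss the hash settings.
route:
  cluster: outbound|8080||myservice.default.svc.cluster.local
  hash_policy:
  - header:
      header_name: x-session-id
```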
F
And so the discussion is: we have a destination rule with a port-level policy, but since that service in Kubernetes only has one port, that port-level policy is effectively defining all traffic for that host in the mesh, even though it's technically just a port-level policy rather than a top-level traffic policy.
F
Yeah, so I'm just looking, I guess, for confirmation on whether we just keep on trucking with, I don't know, the default behavior, or whether we say enough is enough. I don't know. Unfortunately, Rama and Zhang weren't able to attend today, so it's just me.
B
Are there any tracking issues in Envoy for this? Right, I mean, we had a similar issue before with weighted traffic split, right, where it was a feature of the route instead of a feature of the cluster, or we weren't able to use composite clusters to do it.
F
No, I didn't think to look for any open issues in Envoy. All right, I guess it seemed more like an Istio issue, just in terms of how we're doing this default policy matching behind the scenes when a service only has one port.
B
I'm trying to remember; weighted traffic split was certainly one of them.
F
Yeah, there's two different solutions that work for this scenario. Yeah, the audio was coming through a little choppy, but I think I got the gist of it.
F
There's two solutions. One: in the virtual service, they specify the port, and that way it tells Istio to select the correct port-level traffic policy. The other option is, rather than defining this consistent-hash load balancer for that specific port, just define it at the top level. So those are two different solutions that do resolve it.
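A sketch of both workarounds, reusing the hypothetical resources above:

```yaml
# Workaround 1: pin the port in the VirtualService so Istio selects the
# port-level traffic policy explicitly.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: myservice.default.svc.cluster.local
        port:
          number: 8080
---
# Workaround 2: define the consistent hash at the top level of the
# DestinationRule instead of under portLevelSettings.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-session-id
```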
F
So, let's see. Yeah, so the destination rule that's at the top of the screen share right now: the question is, does this destination rule define one cluster in Envoy with the default traffic policy settings, or does it also define (sorry) the default and then also a port-level 8080 traffic policy? Because in this case they point to the same thing, since there's only one port defined, so the default cluster is picking up the 8080 port-level settings.
F
I'd have to look at that logic in clusterbuilder.go to see how the cluster name is selected there.
B
But we do not as yet have the features that are in the hashing part attached to the routing API.
F
Okay, yeah. Basically, the proposed change that Rama has in the PR is only impacting the RDS-generated object, and it's adding new fields rather than modifying any existing ones, because the route object is still pointing to the same cluster name, but now it's properly applying the consistent-hash settings that are defined for that port in the destination rule, for that route. That host, sorry.
F
Yeah, I can update it with the proper XDS and give you a ping.
F
And yeah, I guess Niraj also raised a good issue, just in terms of how that output differs when a service has two separate service ports.
F
Correct. And so, yeah, it's a question of: is this a breaking change, or is it continuing to support what the spec and our default behavior imply? And whether or not we change the default behavior by not doing this particular patch, and whether or not we need to call out, I don't know, that new behavior.
I
Yeah, so my understanding is this is a convenience, basically: saving the user from declaring the port manually in two places when they only have one port. But the concern is, you know, what if they have multiple ports one day; they add another port, and then the behavior could be a little bit unpredictable.
B
Right. You know, in a situation without a more specific match, we just end up ordering the destination rules by age, effectively, right? Which is also confusing, but it's the best that we've done in these situations. So actually having port discrimination probably helps the user, not hinders them.
K
So I have two different concerns here. One is regarding breaking or not breaking existing behavior. I think at this point, if some user is relying on a bug, we should probably preserve the bug and make the fix opt-in, or have a mechanism where the user can get the correct behavior, with either a global option or otherwise; we cannot afford to break any existing behavior if we believe some user may legitimately have used this. The second is also related to GAMMA.
K
GAMMA doesn't have this, but it probably will have to have it at some point. So destination rule, or a variant of destination rule, will probably exist in GAMMA as a policy or as part of the API eventually. So if we find a way to express this correctly, it would be a good idea to discuss it with GAMMA, because they've already discussed some service binding, or I don't remember what their name for it was; they want to put some TLS settings there, so they are moving in that direction.
K
If we break some users... we have, you know, many managed products which automatically upgrade. At this point in history, we expect users to be able to upgrade without having to re-read the release notes and make changes. So that's really my criterion: if a user can do an upgrade to 1.17 and they don't have to really read the release notes and start to make changes in their configs, then it's good.
F
I was just going to say, from my understanding of the original issue, this isn't a case where a customer did an Istio upgrade and all of a sudden started seeing this change in behavior. This is somebody deploying a new Istio instance and hitting just a general bug.
K
So if it requires the user to do something to get the new behavior, then it's good; I have no problem with that. But if there is any change in behavior... because we don't know how they are using it or what they're relying on, I mean, it's getting very complicated to find out all the possible use cases.
K
I was saying, the second part, about GAMMA: if we move to a new resource, for example, with the proper behavior, that will be a pretty simple migration. They will be encouraged to move to GAMMA; they will get the correct behavior if they use GAMMA; and if they keep using destination rule, no problem, I mean, they keep the old buggy behavior, but it's stable.
B
It can be non-deterministic, full stop. So I find it very, very unlikely that any customer is relying on any behavior in this situation, where they have two destination rules for the same service.
K
So you apply one with the workload selector without a port, and you apply one with a port; but in a more generic, usual system, which is the one that wins in this system, the one with the port, because you have a more specific destination, or are they merged?
K
This also has some implications for per-route specializations and other features, where, I mean, it's a pretty generic problem: do you want properties to be associated with the route? I mean, /foo has a property and /bar has a different property, but they go to the same destination. Or do we treat it as a property of the destination itself? And in Envoy a lot of things are kind of treated as a property of the route, so /foo and /bar...
B
You know, attaching lots of stuff to the route, while very powerful, is also harder to configure, right? Like if I want to rely on the same header to hash, right, whenever I talk to this service. You could make an argument that what Envoy is doing in the route is a little too complicated, maybe, and it's maybe more a feature of their processing pipeline than the actual intended user behavior.
B
Now we could, I don't know, try and do some analysis of usage and say, well, the overwhelming majority of users, whenever they refer to the same cluster, use the same hashing mechanism. Yeah, it would be pointless to do that analysis in Istio because we force it, and I don't know how to do that analysis for non-Istio paths.
B
Has that proven to be insufficient for users? And I think the evidence, maybe by its lack (although we should probably do a little bit more research), is: no, it's fine. Yeah, Costin, in the thing that you were doing, was that pattern insufficient, right, of attaching the hashing-key extraction to the cluster, or logically to the cluster, even if it's not physically attached in XDS?
B
At least for key extraction from HTTP, right; consistent hash based on other attributes of traffic would work out, in theory.
K
Yes, actually, that's a very good point. We would have consistent hash with TCP too.
B
Right. So what's going to happen in GAMMA is somebody's going to attach a policy resource to a service declaring the load-balancing properties, and that will include key extraction, and then that will be the default. And then if somebody wants to override it per route, right, then you can attach a policy resource to the route, which I think you can do in GAMMA, but I think almost nobody would do that.
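The GAMMA policy-attachment shape was still being designed at the time of this meeting; a purely hypothetical sketch of the pattern being described (the kind, group, and fields below are invented for illustration) might look like:

```yaml
# Hypothetical policy following the Gateway API policy-attachment idea:
# attach load-balancing properties, including the hash key, to a Service
# as the default for all traffic to it.
apiVersion: policy.example.io/v1alpha1
kind: LoadBalancerPolicy
metadata:
  name: myservice-lb
spec:
  targetRef:          # the resource the policy attaches to
    group: ""
    kind: Service
    name: myservice
  consistentHash:
    httpHeaderName: x-session-id
```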
I
So without the ports, I would expect... let's just say we don't have any baggage. If people didn't specify ports in the destination in the traffic policy, I would expect that that load-balancing traffic policy would apply to all the ports.
A
So we have eight minutes left in this meeting. Can we resolve this issue in the next eight minutes, or shall we take this offline and look at the other items?
B
I think we have resolved it, right? Which is: we will fix the selection ordering to take port as a priority, treating it as a bug.
B
I'm sorry, though; you kind of repeated that. So port is now part of the selection priority order, I'd say: most specific port wins, between two destination rules that have the same selectivity.
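A sketch of the resolution just stated, reusing the hypothetical host from above: given two destination rules of equal selectivity, the one with a port-level setting for the matched port would take priority.

```yaml
# Two hypothetical destination rules for the same host. Under the
# port-as-priority ordering discussed here, dr-port would win for
# traffic on 8080 because it is more specific about the port.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-generic
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-port
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8080
      loadBalancer:
        consistentHash:
          httpHeaderName: x-session-id
```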
G
These are applying at two different points; you've got to pick one. I would say you put the workload first, I guess, because you're resolving this eventually at the client, right? Right.
E
So this is our quarterly upgrade check-in that we've committed to from the User Experience working group. You all recall kind of our history: upgrades have been an issue in the product. We've had a number of hypotheses for how to move them forward, and so we're watching these quarterly to see if any of those hypotheses are playing out. Oh, and by the way, all of this data comes courtesy of Google.
E
So thank you to Google for extending the courtesy beyond my tenure with the company, and thank you to Martin Ostrowski for taking over ownership of these reports moving forward. He and I will work together on the presentations on a quarterly basis. This shows our average upgrade distance: one being something like 1.14 to 1.15, two being like 1.13 to 1.15, and three-plus being more than that. We support one and two, and it's great to see two becoming more prevalent.
E
However, we're seeing a lot of three-plus. We would love to talk to those users and get an idea whether they understand that they're on an untested path and are happy taking on that risk themselves, or exactly what their reasoning is there; but doing large upgrades has become much more prevalent over the previous 12 months. The good news about three-plus skip versions is that these users are getting up to date, and, as we'll see, that's something not all of our users are doing.
E
This does show the raw number of cluster upgrades that we're seeing across the fleet, and you'll notice it does not have hockey-stick growth. It is improving and getting higher, but it's not keeping up with the growth of the Istio project, and we can see that here. One thing to note: this is not a per-cluster chart.
E
This is a data error; don't worry about it. Something happened on the Google side to mess up our data collection for about a month there. We do see new versions getting picked up rather aggressively: 1.14 has a nice uptick in utilization. Very recently, we did have this mysterious massive uptick in 1.4 utilization.
E
At the same time, we did track that down to an individual account, who has since turned down their 1.4 instances. We're not exactly clear on why they did either of those things, but they have; in fact, this decline here is a real decline. So we expect that to just have been a very large anomaly in the data set.
E
We do see that 1.10 has some serious staying power in open source; it's not really fading off, and of course that's quite an old release at this point, so that is somewhat concerning. But we do have this nice adoption curve, which we're very happy to see: overall Istio utilization is increasing exponentially over time.
E
This takes that exact same data and arranges it around our end-of-life date, so zero is the day that we end-of-life a given minor version release on this chart. You can see again: 1.10 did drop off after end of life, but it has not continued to drop off. We had this spike right around the EOL date and then just continued usage, and even some growth in overall utilization of 1.10, up to about one year after end of life at this point. So 1.10 is a little bit concerning.
E
However, we do see other releases that have a nice fade-off. This blue line, I think that's 1.12, shows that our utilization is decreasing, although often this end mark is due to a partial month's worth of data, and so we can't trust the last data point. Overall, end of life is not a strong signal to Istio users that they should begin to migrate to another version. 1.11 continues to have its utilization grow well after end of life, so they're not responding to this.
E
By the way, this is only accounting for open-source uses of Istio on GCP. This does not count Google Cloud's vendored and paid product, which obviously, because it's managed, would have a different curve.
E
You can see a downtick every time that we end-of-life a version, and then it grows as users begin to upgrade to supported versions, and then another downtick. And you can clearly see that users' upgrades are not keeping up with the cadence of end of life: progressively, we're seeing fewer and fewer users on a supported minor version. We've asked... oh yeah, Niraj; oh, Niraj has to drop. Thanks, Niraj.
E
We've asked users, or we've asked in the past, how this compares to other open-source products, and not had great data on it. But the Datadog container report just came out, and on Kubernetes they found that the vast majority, well over 50 percent, of Kubernetes clusters are running v1.21, which is recently end-of-life. I think it's four months out of date and 18 months old, and they provide 14 months of support.
E
So v1.22 at that time was just about to go out of support. So we are not alone, by a long shot, in having a vast number of users on an unsupported minor version. We actually are more or less on pace with Kubernetes; I don't know if that's something to boast about or something to be disappointed about.
E
If we consider that... so I don't actually have the data from the report; all I have is this infographic, which does make it a little bit difficult to draw conclusions. But assuming that these numbers add up to about 20 percent on a supported minor version, we would be outperforming Kubernetes by about 10 percent.
E
If we look at the same data based on vulnerability rather than based on minor version: 97 percent of Istio users, or Istio components in GKE, are on a version that is vulnerable to various CVEs. We can see that at each CVE release that number drops, or rather rises to 100 percent, and then users begin to patch over time. We do see that the most that we have ever had patched in the last, you know, two years is less than 20 percent patched against all known CVEs.
E
Let's see. I know we're at the top of the hour, so I'll make this brief. We asked our users in surveys how easy it was to upgrade to each of these versions, and this charts the data: five being easier than previous releases, one being much harder than previous releases. Overall, we do see a trend that releases are getting easier and easier to upgrade to. I have also introduced this new dimension to the data.
E
These lighter zones at the top are not actually upgrades; they're fresh installs of Istio. So we can see that fresh installs are actually heavily biased towards users stating that it's very easy to install and onboard onto Istio, which fits with our anecdotal experience. However, that does show that the highest approval ratings are actually around installs and not necessarily around upgrades.
E
We also ask our users what mechanism they use to upgrade, whether it's in-place or revision-based. Out of the... this is across all three upgrade surveys that we're covering right here... we have 33 responses; 11 of 33, or about one in three, prefer revision-based upgrades. The reasons that they gave... they didn't give many reasons, actually; these are the only two responses that we got to "Why do you prefer this particular upgrade mechanism?"
E
18 out of the 33 said that they prefer in-place, and the reasons tend to hinge around simplicity and ease of operation. If users get around to upgrading Istio at all, they don't have a long period of time over which to do the upgrade, and they don't have a long period of time over which to study and design their upgrade. So they tend to express a preference for in-place upgrades as simpler. As a project, we've called out that these are, while simpler...
E
These are, in fact, substantially more risky for users, but that doesn't seem to dissuade our users from pursuing them. For instance: they see no benefit to revision-based upgrades, because ingress gateways are still upgraded in place. That's something that we have experimented with in our Flux integration, with CI/CD support for non-in-place upgrades of gateways for more progressive rollouts there, but not a lot of uptake in the community yet, because it adds complexity.
E
And if we then take this data around mechanism and reverse-apply it to upgrade satisfaction, we can see that our users are most satisfied with in-place upgrades. We just don't have a ton of users who know what upgrade mechanism they're using and who also rated their satisfaction; and there's a slightly less positive trend, but still in the positive direction, on revision-based upgrade satisfaction.
E
So, concluding thoughts: you know, it's clear that we've still got a ways to go, but there is some positive news there in terms of upgrade satisfaction. We did have several users call out in the comments a need for better CI/CD documentation on how to upgrade Istio using CI/CD. That's an area that we've already begun to invest in within the User Experience working group, but I do hope to see that investment grow over the coming year and gain additional visibility for our users.
K
My usual comment; I think it's the tenth time I've made it. Maybe we should listen to users: you know, they don't enjoy upgrading every three months, so, you know, increase the time between releases and have fewer releases with longer support. So we do what the customers want, not try to make them do what we want. That's my comment.
E
We did change our support windows, a little over a year ago now, to expand by six weeks, to support skip-version upgrades from, say, 1.12 to 1.14 over a six-week window. Unfortunately, what we saw with that change was that effectively users... let's see, here it is, that was right around here.
E
We saw that an unprecedented number of users were on a supported minor version, but users effectively just moved their behavior back by six weeks as a result of that policy change. I would not be opposed to experimenting with a policy change again, for instance, picking one of our releases to be LTS and seeing what that does to user behavior over the coming year, but I would want to do it on a somewhat experimental or contingent basis, after we saw user behavior change in the wrong direction last time.
K
Declaring a version an LTS and saying we support it for a year would require a bit of investment. But from the user perspective, we can figure out if a lot of people start to stick with that version for a year; that would probably be a sign of success, not a problem.
A
Okay, Mitch, are we done? That was your topic?
E
Yeah, my topic is done. Costin, I think that's an interesting point, but since we're so far over and I think we've lost most people, it probably would be good to maybe bring that up at the next TOC meeting. Yeah.
A
I agree, because that's something dear to my heart as well. I have a 30-second question, hopefully less. So for OSS: has it always been an OSS project since the start? The context is that I'm working with CNCF folks to migrate it to the CLA, the CNCF CLA, and they're asking that question.