From YouTube: Technical Oversight Committee 2021/05/24

Description
Istio's Technical Oversight Committee for May 24th, 2021.

Topics:
- PSA on Test Stability
- 1.9 Upgrade Experience Discussion
- Alpha promotion of injected Gateways
A: 1.11 release managers. I think I remember that Iris wanted to be one of the release managers, yeah, and last time we were talking about someone from Solo, and whether anyone else wanted to be. So, do you have an update on this?
B: Yes, Ryan King. We are working with him to submit, hopefully, his first PR to Istio soon. I think that's the only requirement, right, to be a release manager?

C: We also have a nomination from Intel, Steve Zhang. He's down in the notes from last week, or maybe it's this week. If you look below, I've got his email address; this is it, is it?
A: All right, thanks. Thanks, Mitch. So I'll just spend another minute on this. I guess last time we were trying to figure out if we need someone more experienced, who has done this before?

A: What do you think, Sven and Louie? I don't see Josh in here.

A: Is Ryan based in Europe?

A: Awesome, thanks. Docs for offline review: Istio with MCS discovery. Nathan, do you have anything to discuss, or should we just...
F: Take it offline? Not a whole lot. It's already been through review in Environments and in Networking, and it's been approved there, so I'm actually already pretty close to the implementation. Just one quick note: we initially were hoping that we would be able to drive all of our service discovery through MCS.

F: In other words, Istio would no longer have to go to every individual cluster and just get endpoints; it would instead just use MCS to get all the endpoints it needed. Unfortunately, it's really not going to work out quite as well as we would have liked. So we've changed our approach a little bit, where MCS is really just serving as, more or less, a filter on discoverability throughout the mesh.

F: So we basically use service exports to determine whether or not a service is going to be effectively cluster-local or mesh-wide; we just use MCS to drive that decision. But the doc's there, and it's been approved. If you haven't seen it, this is an FYI: it's out there, it's going to land pretty soon, so give it a look over and let me know if you have any comments.
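For readers who haven't seen the doc, a minimal sketch of the "ServiceExport as a discoverability filter" idea described above could look like this. The service name and namespace are hypothetical; the API group is the `multicluster.x-k8s.io` v1alpha1 MCS API that was current at the time.

```yaml
# Hypothetical example: opting a service into mesh-wide discovery.
# Under the approach described above, a Service with no matching
# ServiceExport stays effectively cluster-local; creating a
# ServiceExport with the same name and namespace as the Service
# marks it as discoverable mesh-wide.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: reviews        # must match the Service name
  namespace: bookinfo  # must match the Service namespace
```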
F: So an MCS controller would be responsible for taking a ServiceExport and turning it into a ServiceImport and EndpointSlices throughout the mesh. Istio is just using ServiceExport right now to drive that discoverability filter in service discovery that I was talking about. However, there would be follow-on work to actually make Istio itself an MCS controller, such that raw Kubernetes services would also be impacted, because, you know, we would...
D: Yeah, about ServiceImports, I'm still completely confused about their purpose and what they do. Normally in Istio we have the concept of import, where the user is declaring "hey, I want to use this cluster," while in MCS it's automatically created; the user has no input. It's just some object, and it's not clear what it does, since we are watching the service exports.

D: The source of truth comes from the exports, so we probably need to provide some feedback and see how we can harmonize the concepts of import with MCS.
F: To be honest, I kind of look at MCS as not really an API we want our users to deal with. I think in Istio we're going to want a better discoverability policy, and have users work with that more abstract layer; and whether or not that means we are generating MCS resources under the covers...

F: That's fine, but basically, just use a better API, because I think there are going to be a lot more cases in terms of how we want to tweak the discoverability of a service, or its endpoints, throughout the mesh.
A: Yeah, so what I'm going to suggest: it looks like the first implementation, Nate, that you're doing does not have this policy anyway, so it doesn't hinge on agreeing on that.

A: But for now, other team members can look at the document and the implementation and explore concerns. That's it? All right, awesome. Let's move on. John, you have your one-minute thing.
H: Do you want to talk about it? Yeah, just a quick one while everyone is here. We have an issue with our tests. We don't know what it is, but basically almost all tests are failing at a much higher rate than usual. We're looking into it, but I just wanted to give some awareness about this. So if you see your test failing, it's possibly not your PR this time; it's actually just a test flake.

H: It's not too bad yet; we've been retesting a lot. Maybe two to three percent of tests randomly fail. So it's not that high, but it's high enough that you'll probably have to retest your PR at least once.

A: Okay, got it.
A: Yep, all right. Mitch, this is basically re-opening and talking about the upgrade experience survey that you had, right?

C: So the one follow-up that we discussed two weeks ago was that we'd like to announce support for skip-version upgrades (and we're not calling them that anymore), as long as testing is sufficient, from 1.8 to 1.10, so that users can upgrade directly. That was what we talked about two weeks ago. Sam and I have a blog that should be going out tomorrow or the next day, so watch for that; it explains to users how to do a direct upgrade from 1.8 to 1.10.

C: I think that was the only direct follow-up that we discussed two weeks ago. So if there are any other questions or any other follow-up, this is kind of the space for that.
A
So
one
question
from
me
mitch:
so
this
is:
is
this
going
to
be
a
norm?
That's
going
to
be
maintained
so,
with
our
support
window
of
n
minus
1,
we
are
going
to
allow
skip
work
or
whatever
the
new
name
is
upgrades
going
forward,
or
this
is
just
a
one-off.
I
hope
not.
C: No, this is not intended to be a one-off. The upgrade working group has done a ton to automate tests of this scenario. It is fairly limited this release: we're giving users a specific way to do the upgrade. It's not supported across all of our upgrade mechanisms just yet, but our intent is to build on this as we move forward.

C: I think our users will be happy to see it. One thing that could make it a little bit better: this time, to do a direct upgrade from 1.8 to 1.10, those releases did not overlap in their support windows at all, so you have to leave the support window to take advantage of upgrading only once every six months. It would be great if users could upgrade once every six months while staying in the support window for their releases.

C: So we did have a few proposals back in November around extending the Istio release cycle, or rather our support lifecycle.

C: We decided to defer those until it was time to deprecate 1.8, so that we could make a decision at the 1.8 end of life. Unfortunately, that conversation got delayed and 1.8 has already gone end-of-life, but now would be the time to talk about it.
I: Yeah, I mean, I guess back when we discussed whether we would extend the support window a little bit, the question was what version we think would be a good base for doing that on, and whether there are any technical impediments; that was kind of the question about 1.8.
D: Oh yeah, I was just going to ask John whether gateway injection is in 1.9, because I think that's pretty much a good point. So that's my answer, then: 1.9 is the best point to start long-term stable support, because gateway injection is making a lot of upgrade scenarios, and a lot of cross-version upgrades, much easier and cleaner.

D: So, ideally, 1.9 should become kind of a long-term supported version.
H: Yeah, I was going to say, there's not that much aside from what Costin mentioned on 1.8 versus 1.9, not as much as with previous versions; you know, we had mixer, istiod, and whatnot. But how long we extend it for makes a big difference. If we extend it for three weeks, to say "oh, we're giving you three weeks to upgrade and stay in the window," that actually means absolutely nothing, because we already know when our next security release is going to be, and we know it's after three weeks, so that's kind of meaningless. If we extend it for a long time, then that probably means we need to continue doing that, and we've gone to n minus two, perhaps kind of accidentally. So I think it's hard to answer without knowing the extension window.
D: Three releases is not a bad idea; I mean, it's a nice number. You have stable release 1.9, you have 1.10, which is, you know, relatively stable, and then you have 1.11, which is going to be a bit rough around the edges. So it's kind of a good compromise.
H: I don't think so. I spend a lot of time on backports. I mean, think about right now: if I want to make a fix, I have to go through three releases, right? I go master, 1.10, 1.9, and it's going to be 1.8 too, so you have to do four times the work, and it's not just slapping the cherry-pick label on there; it almost never backports cleanly.
D: I would love that. Google is, you know, doing a lot of backporting to 1.9, so it's a defensible piece.
C: Yeah, as I understand it, for most of our patches that are not supported in, say, 1.8 right now, we have a vendor who ends up doing the work, oftentimes in open source, so that they can provide support to their users. So this would just mean that, instead of one vendor doing each patch, it would be more of a community-oriented activity.
C: Costin introduced a concept that we should talk about as well, because he mentioned 1.9 being the first, I think you used the word, long-term release. The question would be: do we want every release to be supported, say, n minus two, and then, to John's point, we always have four concurrent branches that we're supporting as developers? Or do we want some of our releases supported for longer than others? That policy would require a bit more explaining, a little bit more detail for our users to understand it, but it could mean that our users have to operate less often while we still only have those four branches to support.
D: Sorry, I was going for the one like Ubuntu, where you have some LTS releases and some normal releases, so 1.9 would be supported for longer. I mean, it's the simplest evolution from what we have right now. We have 1.4, which is very, very long-term supported, even if it is not officially supported; and then we have 1.9; and then we have 1.12, maybe.
A
So
then,
what
coston
is
saying
does
become
important
right.
So
if
one
line
becomes
the
next
lts,
which
there's
nine
months
or
a
year,
then
the
cust,
then
the
user
will
go
to
112,
for
example,
and
we
support
skip
version
upgrades
we'll
have
to
put
whatever
we
have
to
put
in
one
nine
to
make
that
possible
right.
That's
what
he
was
saying
you
don't
have
to
put
in
features,
but
you
have
to
make
that
script.
Skip
version
upgrade
possible.
H: I don't want to extend this discussion too much, but it seems like every month we have this conversation and it takes up the full meeting. Maybe if someone's interested they could write a design doc, and then we can discuss it.
A: I hope so. So my suggestion here would be: folks who are interested in this topic, work on this doc, come add comments and add sections. Let's try to narrow this down by the next TOC meeting, or the one after that, and see what we can do. I agree with both Costin and John that we have been talking about it for a while and it takes up a lot of time, so I'm going to stop the conversation now, but we'll have to bring it up again after some discussions.
H: Yeah, I didn't want us to go too long on this, because I think we will if we don't have a cap. So, we've added a new feature, which is gateway injection. I think most of you, Neeraj, Lin, and Sven, have been actively discussing this, but for those who are not aware: it's basically doing the same injection that we do for sidecars, but with the gateway. What this gives us is the ability to more simply manage your gateways, and to update them.
H: So we started out with this promotion doc; we would like to get it to alpha, ideally in 1.10, as kind of a backport support version, because all the functionality is already there, but whatever we need to do is fine, I suppose. And it kind of led us to adding more documentation for the feature, and in the documentation we had a bit of controversy on how exactly we want to position this to users. So part of the discussion is: there are now quite a few different ways to deploy gateways. You can use Helm, istioctl, plain YAML, and injection or no injection, and we really don't want to give people six choices to make when they're first installing Istio, because that is way too much burden on the user, and most of them don't care. So, in the PR, which I should have linked here... oh, I didn't link it.
H: Okay. We kind of recommended using the plain YAML approach, because it offers the most flexibility in terms of upgrades, and it uses just a plain Kubernetes Service and Deployment, versus Helm and istioctl. But there was some discussion about whether that was a good idea or not, and someone suggested we bring it up to the TOC, so here I am. I think that's kind of a high-level summary, but feel free to ask if there's anything else I should explain.
H: Yeah, so the new Kubernetes Gateway API: I do not currently have any plans to make it change the deployment of the gateways at all. The API will configure the gateways as it does today, but the actual deployment of the Service and Deployment and, you know, HPA and whatever, is kind of orthogonal in the current state. There has been some discussion about making the Gateway API actually provision the Deployment and Service, but I'm a bit skeptical of it and haven't looked into it too much.
D: Yeah, but long term I think it's clear that we'll probably have some form of automatic controller. Yes, so...
A: So I think, for this PR, I don't think that topic matters that much, right? When the support and the maturity come for gateway classes, we can discuss again whether we want to have a controller for it or not, right?

A: No, I agree with you. And I think then... so, Louis, do you want to continue on this path, or can we move towards asking about or resolving some other concerns?
D: You know, it means that if I'm a developer and I want to deploy a gateway in my own namespace, or wherever I want, I have my Gateway object, and I have a Service object that I create, where I can put whatever load balancer or whatever setting I want, using the Kubernetes documentation; and then I just need to add a small Deployment to create the actual workload that goes with the gateway. So it means that it's not us, Istio, maintaining the gateway or the deployments; it's the user doing it.
H: Like, yes, we can expose a Helm chart, and what happened, and will continue to happen, with the Helm chart is that folks will ask for every single field in the Kubernetes API, which includes Service, Deployment, and everything else, to be exposed. Not just most of them; practically every single one. And what ends up happening is that our values.yaml becomes this mess of an API that's not consistent, not documented, and not very usable. So, for example, if I want to add a port to the service, I need to go figure out how we map that into some random API field. I don't know how to do that today; I'd have to look, and it's not documented, so I'd probably have to look directly at the Helm chart. With this model...
H: I think the problem is that if you want to expose a very small, opinionated layer on top of Kubernetes, then maybe a custom resource on top makes sense, but that's not what we want to do. Our users demand almost every single field in these APIs, and so we cannot possibly make a better API that offers all of the configuration of Service and Deployment than Service and Deployment already do, right? There's no better API for extending every field of those than the existing one.
H: The difference is that with istiod, we are shipping the istiod application, and the user doesn't really need to care about ports on istiod, right? There are like five ports, and they're kind of implementation details. With the gateway, it is their own application. It may be Envoy, and it may be configured by istiod, but it's effectively theirs, and they choose what ports they use, they choose what labels are on it, they choose everything. It's a user application, despite the fact that it's running our image.
D: Let me give you an example. If you create a gateway: we are telling people that in their charts, or their install, they put VirtualServices, Gateways, and everything else. If they want to add port 5000 in their Gateway object, normally you would expect that they just put port 5000 there and everything works. But that's not true, because they need to go back and reinstall Istio with an extra option to have port 5000 exposed on the Service, because now it is split.
H: Yeah, as part of this document, Neeraj, if you scroll down a bit, there's a kind of deployment topology as well, and we're trying to move people from thinking of the gateway as this one monolith (the istio-ingressgateway that has to have that name and has to be in istio-system) to being a bit more flexible: you can have multiple gateways in different namespaces, and they're kind of your own application to run.

H: I agree that, yes, a lot of people probably think of it that way today, but I don't think that's the way we want people to think of them.
A: I don't know if that's totally correct, just because I think different organizations have different thresholds for who can configure what. But I will let others also speak on this topic; Sven also had concerns, and I think Louis was also trying to speak in between.
E: I think my concerns were more about just what we tell the user is the default, not necessarily whether we support the simple YAML or not, because I agree with everything that John and Costin say. I think we need it for the users that want it; it's just...
H: My personal preference is the simple YAML, but I am slightly concerned that it is kind of the newer one; istioctl and Helm are already present. So I would be fine with us adding another one, as long as we do have one as the recommended option, because I don't want users to come to this page and go: "there are three options, I don't care, I just want to do one of them." So I think that's...
H: It's a bit of a mix, so there are kind of two parts. There are things that are injected, which is the minimum set of things that you need to install a gateway: so, you know, all the internal environment variables, readiness probes, that sort of thing. And then there are things that are completely optional, like configuring your own security context, your own resources, CPU, memory.

H: For those optional things, we already have an API to configure them in Helm, and so if you install with Helm or istioctl, they'll have the same output, since they're using the same charts.

H: You would basically get a Deployment that has the resources, the security context, all the optional fields, as part of the Deployment, and then when the pod spins up, the required fields, which include the image (most importantly, because that's what changes), are injected. With the simple YAML, all the optional fields are up to you entirely.

H: So you have basically a bare-bones minimum Deployment, one that you could even use with plain Kubernetes, and then we inject just the bare-bones minimal stuff to run a gateway; all the resources, security contexts, etc., are left up to you instead. Does that make sense?
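A rough sketch of the "bare-bones injected gateway" described above might look like the following. The names and namespace are hypothetical, and the labels and annotation follow the gateway-injection convention from the Istio gateway docs; treat the specifics as illustrative rather than authoritative.

```yaml
# Hypothetical minimal gateway via injection: the user owns the
# Service and a near-empty Deployment; the injector supplies the
# image, environment variables, readiness probes, etc. at pod
# creation time. Optional fields (resources, security context, HPA)
# are left entirely to the user.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress
  namespace: my-team          # any namespace, not just istio-system
spec:
  selector:
    matchLabels:
      istio: my-ingress
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway   # use the gateway inject template
      labels:
        istio: my-ingress
        sidecar.istio.io/inject: "true"      # opt in to injection
    spec:
      containers:
      - name: istio-proxy
        image: auto           # placeholder replaced by the injector
---
apiVersion: v1
kind: Service
metadata:
  name: my-ingress
  namespace: my-team
spec:
  type: LoadBalancer
  selector:
    istio: my-ingress
  ports:
  - name: http
    port: 80
    targetPort: 8080
```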
H: I would say one natural upside of the Helm or istioctl path is that when you do install it, you get basically what you get today: you get the full thing, where we've already configured sane resources, we've already configured HPA and PodDisruptionBudget, and you kind of get this all-included package. It just also happens to be using injection, which makes upgrades a bit easier. With the simple one you have more control, but you don't necessarily get all those things.

H: The biggest benefit of the simple one is that the Deployment and the Service are not necessarily coupled, and so, if you scroll all the way down to the bottom, there's an example of an upgrade method where you basically would spin up another Deployment, and you could slowly shift traffic over by scaling that Deployment up or down.
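The scaling-based upgrade John describes relies only on standard Kubernetes label selection, and might be sketched like this (names and versions are hypothetical):

```yaml
# Hypothetical canary upgrade: a second Deployment carries the same
# label that the existing Service selects on, so traffic splits across
# old and new pods roughly in proportion to replica counts. Scale this
# one up and the old Deployment down to shift traffic gradually; the
# Service itself never changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress-canary     # second Deployment, new Istio version
  namespace: my-team
spec:
  replicas: 1                 # increase while scaling down my-ingress
  selector:
    matchLabels:
      istio: my-ingress       # same label the existing Service selects
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        istio: my-ingress
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        image: auto           # injector supplies the new proxy image
```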
H: So this kind of depends on the simple YAML, because we need to preserve the same Service but have two different Deployments; if you have the Service and the Deployment coupled together, like in istioctl and Helm, it's quite challenging to do this. And to answer Rob's question, "is there a reason we don't use rolling upgrades for the deployments?": we do. There are two recommended upgrade paths. One is that you just do a rolling restart on the Deployment, and the other one...
H: ...is this. The issue with the rolling restart, which is probably fine for a large number of users, is that you don't have as much control: once a pod spins up, the other pods will spin down, and Kubernetes will keep doing that repeatedly. With this...
D: But you know, we have a whole API in Istio to support version-based upgrades, you know, traffic shifting and all this stuff. I mean, both traffic shifting and in-place upgrades; that's fine.

D: I have a question about this: are we saying that the APIs we support for version-based upgrades and traffic shifting are not used? Because that would simplify our code a lot, if we don't have to support this kind of thing. I mean, it's a core feature in networking. If we get feedback from users that they don't like doing version shifting for their own applications, yeah...
H: Right, so I want to...

H: ...this kind of in-place upgrade is not like the in-place upgrade of previous, non-injected versions; it's still fairly safe, especially now that I know you can pause the rollout. I had no idea you could do that; that's pretty cool. It's just slightly less controlled, and harder to roll back, et cetera.
A: Okay, let's see. So from my side of things, at least the way I'm looking at it, we can keep the three options here. I see the value in the simple approach, but again, from my experience, most of the people I have seen need some templating for these kinds of things. In fact, many of the customers that I deal with even have templating on top of Helm, because they have different environments that they roll these things out to for testing.
H: ...installing the control plane, and if we get feedback that people would like the simple YAML, then we can revisit in the future.
D: Okay, well, I have some concern with, again, the API used by the operator; it's more or less frequently getting out of date with the rest of the stuff. And we are trying to, you know, move to different languages, we are trying to do a lot of things to improve the operations of Istio, and that's not very friendly for operators.
D: How about this, for one thing: if the concern is safety, we just keep the current install that has worked for a long time without injection, and in parallel we provide maybe a simplified Helm chart that is just doing the injection. People will have the choice to use either the new one, especially if they install into a new namespace, or keep the existing stable one. So it's kind of the best of both worlds, instead of trying to combine them.
D: Well, I don't think we have that. We have the Helm chart where we enable the injection as an option, but we also have a hundred other options. I'm saying just a clean chart, separated from the other one. So it's Helm, it's injection-based, it has a pinned API, and it's completely separate. I mean, it's not even...
D: That's a problem we have for everything, I mean. Actually, we don't have to; it's a new chart, a new API. We don't have to make it consistent. It's a new chart where we tell people: hey, you install the chart in your own namespace, you follow best practices, you don't have to support all the mounted secrets, and everything can possibly default to SDS.
A: So I don't think we should try to decide on creating a new Helm chart in this meeting without an actual proposal. We have lots of issues when we try to change things at the infrastructure level, and people are certainly surprised by it. But I do think we can decide right now which of the three is the recommended approach for 1.10 and 1.11, and then, based on feedback, we can go towards Costin's approach as well. But, Costin...
D: For maintenance, we do have tests for what Helm and istioctl currently install; I mean, that's already there. So when you test with Helm, the installation with Helm installs the new, simplified Helm chart; if you install with istioctl, you use the istioctl one.
H: I don't understand how that could work, because the entire point of the simple YAML is that you directly modify the Kubernetes API. If we put it in a Helm chart, then we either have to, one, have no configuration, and then it's useless, because we want people to configure it; or two, we add our own API on top of Helm, at which point we'll slowly work our way back to the current one, and it will be painful and very confusing for users. So...
D: Don't forget that people don't always use Helm charts as they are; sometimes they just fork them. They take the Helm chart and they modify it. So a user can take our Helm chart and modify it for their own purpose and customize it in place; and there's no point in using Helm if they do that, they can just copy it.
D: I explained in the chat the problem with moving the Helm chart to beta, which we discussed in previous Environments meetings: the API surface is horrible and it cannot be supported as a beta API. In particular, for ingress, I mean, there are 2,000 options in the ingress chart, and most of them don't make sense and are deprecated. But this way you can have defensible APIs that we can support long term.

A: We are deprecating our beta installation methods, which doesn't sound right.
J: Yes, I think so as well, and if that's the case, then we should start working on, or we should continue working on, this kind of thread, without necessarily pinning down exactly what the plan is, and without thinking about what's for 1.10 or 1.11 necessarily; and then at some point, when it actually matures, we can talk about it.
A: All right, so is that fair enough? Can we give our decision to John and move on?
G: Okay, so a couple of points here. I know John has been using this term "simple YAML" for at least the two discussions I participated in. I just feel like, when you say "simple YAML," do you imply that there is a "complex YAML" or something else? So, I mean, if we can, can we just say "YAML" instead of "simple YAML"? That's point one I'm trying to make. Second is that if we do suggest, that we do recommend...
G: Also, I want to add that when I first started using Istio, I was looking for a YAML file similar to, let's say, the nginx ingress controller: if you want to install it, they provide a huge YAML file. You don't even look at the content; it's like, okay, great, I'm going to kubectl apply that YAML file, great, I see my stuff up and running in Kubernetes, and I'm done with it, right? I don't even care what's inside the YAML file, because I know it installs the nginx ingress controller.
G: So I like where we're moving, to just use a YAML file, and if that's the direction, I think we should have this capability. But if we say we're going to continue to just use the operator and use the Helm chart and all that stuff, then, you know... then soon...
A: Yeah, no, a really valid point. I think, on the first two issues, around whether it's a "simple YAML" versus a "complex YAML" and whether we should just call it "YAML," I will let John pitch in on that.
A: ...are the tradeoffs, yeah. And I do agree; I mean, people who are used to installing nginx ingress might have found the Istio ingress gateway a bit more herculean, so I guess we are moving in the right direction.
A: All right, I think we have an agreement here. Is this decision good for everyone? If so, we will move on... or, actually, we will end the meeting, with four minutes over.