From YouTube: Istio Environments Meeting 2021-09-01
A
Yeah, so I wanted to talk about something we brought up in previous environments meetings. It's the idea that the default revision should be able to win leader elections basically every time, and if you make a revision the default revision, it should become the leader.
A
So as some background: basically, we have a bunch of controllers that only one istiod revision, or actually not even revision, only one istiod replica should run. For example, when we patch webhooks, update ingress status, and other stuff, we actually only want one instance to be doing those things. So the idea is we just added this concept of the default revision, which is the user designating a revision as kind of the primary revision in use.
A
So it would make sense that that revision should be the one responsible for running all of these singleton controllers, the singleton logic. So that's kind of the background.
A
The design: John has actually added, I think in anticipation of needing something like this, some work into Kubernetes. Right now the leader election in Kubernetes wouldn't let us do this prioritized thing, where we need to kick out an old leader and, you know, usurp them based on our priority.
A
So there are some changes that John has opened against Kubernetes that make it so that different leader election clients can have different priorities, so they can boot out existing leaders. It's looking unlikely that that logic will get merged soon; I don't necessarily know when it would make it in. But the idea here is that we could just fork the altered leader election library into Istio temporarily. Yeah, so go ahead.
B
Sorry, why do we need this complicated change? I mean, we seem to have a much simpler solution: we watch the mutating webhook anyway, so we know when we are the default, and whenever we are not the default we relinquish mastership, no?
A
The issue is that we rely on the Kubernetes leader election logic, and the logic we need to change is actually the lock acquisition, and that doesn't exist in our code; that exists in library code. There's like...
D
Go ahead. Yeah, and also we don't want the logic to be as simple as "if I'm not the default one, don't take the leader election lock," because if you end up with no default, or all the default pods are down, then you may have an outage, right? I think we want the default to have priority, but not exclusively, if that makes sense.
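The acquisition rule being discussed ("default has priority, but not exclusively") can be sketched roughly as follows. This is a minimal illustration, not the actual Kubernetes or Istio leader election code; all names here are hypothetical.

```python
# Sketch of prioritized leader election: a default-revision istiod may usurp
# a non-default leader, but a non-default instance may still take an unheld
# lock so the singleton controllers never go leaderless.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    identity: str      # e.g. pod name
    is_default: bool   # whether this istiod belongs to the default revision

def can_acquire(candidate: Candidate, holder: Optional[Candidate]) -> bool:
    """Decide whether `candidate` may take the leader lock."""
    if holder is None:
        return True    # unheld lock: anyone may take it, avoiding an outage
    if candidate.identity == holder.identity:
        return True    # renewing our own lease
    # only a default-revision candidate may usurp a non-default leader
    return candidate.is_default and not holder.is_default

default = Candidate("istiod-default-abc", True)
canary = Candidate("istiod-canary-xyz", False)

assert can_acquire(canary, None)         # no default around: canary may lead
assert can_acquire(default, canary)      # default usurps the canary
assert not can_acquire(canary, default)  # canary cannot usurp the default
```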
B
And again, knowing if you are the default or not is important for other reasons. I mean, having the functionality to know whether you are the default, that's useful in general, and once you know, you know... And I think there is an API to stop being the master, I mean, to drop the mastership.
B
Then stop doing whatever we are doing. I mean, that's super clean and, you know, consistent with what behavior we want to have. I mean it's not... but anyway, yeah, if they change it under...
A
I mean, we do have some users who have multiple revisions, right? Not for upgrade purposes, but for a difference of configuration. So in those cases you might want just some revision handling these things. It's not like a total...
A
If the default revision is down, do we want to stop operation, or do we not care because operation is down anyway? That type of deal.
B
Why complicate things when we can just say you always have a default revision that is responsible for patching webhooks and doing status and all the other stuff? Because otherwise it's again very complicated to know which one it is. I mean, the one who wins the mastership election is the one that is becoming the default, because it's the one updating status and configs and other stuff. So what's...
B
What's the benefit of making this more complicated than just saying that there is always one default that is, you know, responsible for the controllers and is master, and that's it? I mean, it's no longer hard to debug, to figure out which one is the master now: revision one, revision two, revision three. Because then you have priorities, and you have a very complicated system in place, when you already have default, which solves this problem.
B
So, to be clear, my concern, the primary concern, is that the intent for default is to have, you know, almost all production workloads use default, because that's really what production is using, and for the other revisions to be used for upgrades or for, you know, experiments or whatever else. But we want users to have deterministic behavior, I mean, to always know that the stable version, the default one, is used consistently, in order to debug: if there is a problem, they know where to look.
A
Okay, no, that makes sense. I'll also look in, to make sure that what we're describing now, with only the default able to claim it, is possible with the current implementation. I think it's possible, but it might be cleaner, it'd pretty much definitely be cleaner in terms of how it's implemented, to fork that code over; but that obviously has a pretty tremendous cost. So, okay, I'll look back into that. Yeah, that's everything for that proposal.
A
Yeah, I'll come back to you all next week, because I just need to look into the implementation details to make sure this is viable without, yeah, forking.
B
Or we can separate the feature definition and the requirements, which are, you know, that you need to have prioritized leader election, from the implementation, which may change. It's not necessary that we have both, all the, you know, implementation details, in the same document. I mean, okay, the feature, which is leader election to be done by the default, but...
D
Yeah, I definitely agree. The high-level idea is something that we have to do. I'm fine with either implementation, as long as we can convince ourselves that we'll always have a default revision and we won't somehow end up with no leader or something; that would not be good. But if we can do that, then we should be fine.
D
I think it's the opposite issue, actually: you have one version controlling the alpha one, and one version is on alpha two, so whichever one gets picked, the other one will be broken. Yeah.
A
Yeah, I think one risk that maybe we haven't discussed is that this is pretty core logic, so it's really important that we have some revision doing it, and it requires us to have a really good way of knowing which revision is the default revision. Right now that's the webhook, and John brought this up as a comment in the doc. I do wonder if this makes it so that we need a better way, like a better way to canonicalize, like...
B
Oh yes, absolutely, and I have this problem myself in some other project. Relying on webhooks is a very messy business, because you may have clusters where you don't even have permissions to create webhooks, and you use, you know, manual injection or another mechanism. So I would really, really like to have a ConfigMap: you know, when we write the ConfigMap for tags, we can write a ConfigMap in istio-system that can have the default and the current status.
A
If we had a ConfigMap that held the mappings, and I'm a fan of that idea, is the idea then that we have a controller that generates all of the webhooks from that ConfigMap, or...? Yeah, I mean...
B
The logic that you are doing today, I mean, istioctl setting the default, will update the ConfigMap to set the default, and you will manipulate the ConfigMap. I mean, either istioctl set default or whatever other tools that may want to do the equivalent role.
A
Well, we did... I thought we made it so that... well, it hasn't been done, but we agreed that we need to be able to specify revision tags through the base chart.
A
That's a good point, that we could maybe merge two different ConfigMaps. It's complicated, but yeah, that could work. Okay. Actually, another thing I wanted to bring up... well, I can bring that up later, because there are other items on the agenda and I don't want to go too far off topic.
B
One more thought on this ConfigMap subject: since this is an internal ConfigMap, we don't need to have a wonderful API for it, but we should, you know, learn from what we did wrong with the old ConfigMaps and keep it flat from the start, basically, because patching ConfigMaps is difficult if it's not flat.
B
So, basically, something that is, you know, key-value pairs that can be loaded as environment variables, so very easy to parse, very easy to patch, and intended basically for tools that integrate with Istio. You know, if someone writes a UI to select "I want to deploy this Istio with this version" or whatever; you get what I mean, right? Yeah.
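The flat key-value shape being proposed might look like this. The keys, values, and helper names are illustrative only; no ConfigMap schema was agreed in the meeting.

```python
# Sketch of the flat tag ConfigMap idea: each key is a tag, each value the
# revision it points at, so tools can read or patch a single entry without
# parsing any nested structure.
from typing import Optional

def parse_tag_map(data: dict) -> dict:
    """ConfigMap .data is already flat; copy it as a tag -> revision map."""
    return dict(data)

def default_revision(tags: dict) -> Optional[str]:
    """The revision the 'default' tag points to, if any."""
    return tags.get("default")

config_map_data = {        # the .data field of a hypothetical ConfigMap
    "default": "1-11-0",
    "canary": "1-12-0-alpha",
}

tags = parse_tag_map(config_map_data)
assert default_revision(tags) == "1-11-0"
assert tags["canary"] == "1-12-0-alpha"
```

Because each entry is a single top-level key, a tool can update just the `default` entry with a strategic-merge patch instead of rewriting the whole map, which is the "easy to patch" property mentioned above.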
A
Okay, cool. John, would you like to share the doc, or is it all right if I just share it?
D
I think you can just share it, that's fine. Okay, cool. It's not very long, yeah. So every week I'm going to come back with a new proposal on how to install gateways; this is the deployment of the week, just kidding. Hopefully. So we've been talking a bit in the past about automatically deploying gateways based on Gateway resources, but we never really had a doc to discuss this.
D
So I kind of drafted something up real quick on what I think that would look like, and I wanted to get some early feedback and opinions on what people think of this. Yeah, Carson's got next week's idea, or this week's.
D
So basically, the high-level idea is that a user would create a Gateway like the one on the left, and this is a Kubernetes Gateway resource, just a standard one. In this case they're exposing port 1234 and port 80. Today they would then need to go create a Deployment and Service for the gateway, make sure that the Service has those same ports exposed, and, you know, if they ever try to add a port, they have to make sure it's kept in sync, et cetera.
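The port bookkeeping the controller would take over can be sketched like this: derive the generated Service's ports directly from the Gateway's listeners so they can never drift. The resource shapes here are simplified stand-ins for the Gateway API types, not exact field-for-field copies.

```python
# Sketch: keep the generated Service's ports in sync with the Gateway's
# listeners, which is the manual step users have to repeat today.
def service_ports(gateway: dict) -> list:
    return [
        {
            "name": listener["name"],
            "port": listener["port"],
            "protocol": "TCP",  # Service protocol is L4, unlike the listener protocol
        }
        for listener in gateway["spec"]["listeners"]
    ]

gateway = {
    "kind": "Gateway",
    "spec": {"listeners": [
        {"name": "http", "port": 80, "protocol": "HTTP"},
        {"name": "custom", "port": 1234, "protocol": "TCP"},
    ]},
}

assert [p["port"] for p in service_ports(gateway)] == [80, 1234]
```

Rerunning this on every reconcile means adding a listener to the Gateway automatically adds the port to the Service, the sync problem described above.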
D
That's pretty much the high level. I think it's pretty simple. Any questions so far?
D
Okay, so I think the biggest... so I... go ahead.
D
I think that's one option, yeah. I mean, we could bundle it in istiod or we could make a separate thing, I don't know. Okay, okay, yeah. So in terms of customization, I think that's usually where operators are... I mean, I usually hate operators, so for me to propose an operator...
D
We've all heard me rant about this before. Probably the reason why it doesn't suck for us, I think, is because we already have a mechanism to configure arbitrary pod things, which is the injection. And so in the above example I showed just, like, the bare-bones Deployment, which is like the minimum we need.
D
If the user wants to customize that, that's fine: they can just create a gateway template or a custom injection template with whatever fields they want, and then, you know, since we're injecting the gateway, they get any customization they want and we don't have to, like, create a separate API that has all these different fields.
D
The one thing that's left, then, is the Service. About half the Service is the ports, which we're managing for them; that's kind of one of the benefits. There are a few other customizations that people want in the Service. One is the type, and then there's, like, the load balancer IP; load balancer IP is already a field in Gateway, so we can keep that. And then the other thing is a bunch of annotations, because different implementations have different things like this.
D
This is not the full GKE one, because it's long, but there's, like, an annotation to make an internal load balancer, and there are like 30 AWS annotations; so a lot of times services are customized through that. I was thinking that we can just copy the metadata over from the Gateway onto all the objects we create.
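The "copy the metadata over" idea can be sketched as below: labels and annotations set on the Gateway are propagated to each generated object, so implementation-specific Service annotations (internal load balancer, AWS settings, and so on) keep working. The merge precedence chosen here, where the generated object's own keys win, is an assumption for illustration.

```python
# Sketch: propagate the Gateway's labels and annotations onto a generated
# object (Service, Deployment, ...) so cloud-specific annotations survive.
def propagate_metadata(gateway: dict, generated: dict) -> dict:
    meta = generated.setdefault("metadata", {})
    for field in ("labels", "annotations"):
        merged = dict(gateway.get("metadata", {}).get(field, {}))
        merged.update(meta.get(field, {}))  # keys set by the controller win
        if merged:
            meta[field] = merged
    return generated

gateway = {"metadata": {"annotations": {
    # real GKE annotation requesting an internal load balancer
    "networking.gke.io/load-balancer-type": "Internal",
}}}

service = propagate_metadata(gateway, {"kind": "Service", "metadata": {}})
assert service["metadata"]["annotations"]["networking.gke.io/load-balancer-type"] == "Internal"
```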
D
Yes, so right now the current design doesn't allow merging; no merging allowed. We could allow merging, and we'd have some sort of way to, like, declare one the parent and the other ones, like, the mergees; I don't know what we'd call it. I just, in the initial...
D
I don't have it in the doc, but I have thought about it a bit, yeah.
B
So a Gateway doesn't have a hostname, basically. Because the use case would be: you have five hostnames, it's a common use case. You have example.com and, for example, booking.example.com; you create a gateway for each, each gateway delegates to a different namespace, and it's a separate resource, but they're all port 80.
D
The reason why you would want to split the Gateway object, I believe, if you want to share the same IP and everything, is really just if management is easier: like, if you have a gigantic gateway with, like, 4,000 domains or something, it may be kind of a pain to manage in one object.
B
You know, but the delegation, all that permission stuff, where... okay, let me... I'll take a look.
D
Adam, can you scroll back down? I forget what else was on here... yeah, that's pretty much it, I think I mentioned everything. Oh yeah, the other thing would be, like, pod disruption budget and HPA.
D
We could add, like, an API on top and automatically create those, configured by annotations or something, but we could also just have the user create those if they want. Like, the naming of the Deployment is deterministic based on the Gateway, so they can always create those and attach them after the fact, and it would kind of compose well. The other two things are just upgrades: we kind of already have an established upgrade pattern of restarting the pod in place.
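The composition property mentioned here, that users can attach their own HPA or PDB, only works because the generated Deployment's name is a pure function of the Gateway. A toy sketch; the actual naming convention is not specified in the meeting, so the scheme below is purely illustrative.

```python
# Sketch: deterministic Deployment naming lets a user write an HPA that
# targets the generated Deployment before or after the controller runs.
def deployment_name(gateway: dict) -> str:
    # hypothetical 1:1 mapping from Gateway name to Deployment name
    return gateway["metadata"]["name"]

gateway = {"metadata": {"name": "ingress", "namespace": "default"}}

# scaleTargetRef of a user-authored HorizontalPodAutoscaler
hpa_target_ref = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "name": deployment_name(gateway),
}
assert hpa_target_ref["name"] == "ingress"
```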
D
We could make the controller smarter and do more advanced upgrades and safer upgrades; that's probably outside the MVP, though, but we could explore that in the future. And kind of going back to what Costin said in terms of where we put this: it's not like this would be that hard to implement, but it does have, like, very high RBAC privileges, creating Services and Deployments in any namespace, so we could consider making it a separate controller and/or putting it in istiod.
D
Perhaps both options, I don't know; either one is technically possible. We could explore which one is preferable to users, or, like, whether we want this to graduate through an ecosystem project versus being first class, for the long term or short term, or anything like that.
D
So that's pretty much it, because right now this is more an RFC than a thorough design doc. I just want to get people's thoughts on this.
B
Yeah, you know my thoughts on the idea, I'm 100% for it. On using a separate controller, I can see a lot of benefits, because istiod is becoming very complicated, and having smaller repositories that are focused on, you know, one task and can be deployed independently, and also eventually linked into istiod, will give us a bit more velocity and avoid polluting history with stuff that may or may not work, so we can get feedback. And so, yeah, it's not micro...
A
All right. So I don't have a whole lot of background with the service APIs. So this idea basically replaces having to, like, create a gateway manually that gets injected?
D
Yeah, my vision for how the user experience for install would be: you actually just do, like, helm install istiod, and then that's it; you never have to do anything else. I mean, you install the CRDs too, but then you can just create Gateways like you normally would. Like, if you're using a cloud ingress or a gateway provider, it's kind of the same thing: for GKE, I just create an Ingress and it goes and provisions some cloud load balancer for me. I never have to do any deployment steps beforehand.
B
And the nice thing is that once istiod is upgraded, it can, you know, check: hey, my version is 1.10.5, and the deployment for this gateway is the previous version, and start restarting it, basically. So when you upgrade istiod, it can take care of making sure the gateways are at the same version.
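The version check described above amounts to a simple reconciliation rule: compare istiod's own version with the one recorded on each generated gateway Deployment and restart the ones that lag. A rough sketch, assuming a hypothetical version label; the real labels and rollout mechanics are not specified here.

```python
# Sketch: decide whether a generated gateway Deployment lags behind istiod.
# The "istio.io/version" label name is an assumption for illustration.
def needs_rollout(istiod_version: str, deployment: dict) -> bool:
    recorded = (
        deployment.get("metadata", {}).get("labels", {}).get("istio.io/version")
    )
    return recorded != istiod_version

old = {"metadata": {"labels": {"istio.io/version": "1.10.4"}}}
new = {"metadata": {"labels": {"istio.io/version": "1.10.5"}}}

assert needs_rollout("1.10.5", old)      # previous version: restart it
assert not needs_rollout("1.10.5", new)  # already current: leave it alone
```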
D
Yeah. Oh, one thing I should mention is that if we have multi... or not multi-primary, the opposite one, where there's a shared control plane with a config cluster: we'll have to figure out, you know... the Gateway will live in one place, but the actual deployments may live in many clusters. So we'll have to figure that out as well.
B
Yes, absolutely, and also we have the option, in this case, to have the user designate where they want to keep the gateways. Because gateways... you're making workloads in the workload cluster, where you run all kinds of unprivileged stuff, but you may want to consolidate all the gateways in a, you know, super secure cluster where you have secrets, certificates, and other things, so you don't have the secrets leaking, basically. And that's an important use case for security-conscious users, basically.
D
No, I should update it. So if you added a new port, say 443, then it would go up to the Service.
A
Is the idea that the controller would generate the Service and Deployment from the existing gateway chart?
D
Maybe. I don't think so, actually. So, like, we have both...
D
I would generally say yes, but we need to be a bit careful, because I did do, like, a proof of concept of this, and I was making sure that the Service was exactly as we specified. But then there was some other controller that was always adding its own annotations to the Service, and so it was, like, in a loop, basically.
D
How
exactly
that
will
work?
I
think
we
probably
have
to
look
at
what
other
things
are
doing.
It
should
be
something
that's
generally
like.
This
is
not
the
first
thing,
that's
creating
services
and
appointments.
So
there's
probably
a
lot
of
prior
art
on
what
the
best
practices
are.
There
creative.
B
Yeah, yeah, people have a long practice of doing this. John, one comment I have about this pod injection and injection template, and it's probably a segue to what I want to discuss next week about injection-less: if you have this controller, and this controller can read ConfigMaps and everything else, there is no need to actually create the pod, have the pod call a webhook, and have the webhook patch the pod. Really, you can just, you know, have this controller read...
D
Yeah, it does. It changes how upgrades work, maybe in a good way, because if we do that, then it means that the gateway controller is in complete control of the upgrade, because, like, the pods will not upgrade until it decides that it's going to rerun the injection template and set the image to whatever the active one is. If we do it the way it's proposed here, then it's actually the injection template, which is largely driven by the control plane.
D
An admin, like the istiod admin, that does, like, a restart of a pod: they get the new image, right? Both have pros and cons, I think. I'm not sure that one's inherently better than the other one, but it is a bigger change than just saying that we're avoiding a round trip to the webhook, I think.
B
I agree with you: it's basically saying that the gateway controller, that is, you know, creating the pod, is responsible for the pods and the whole lifecycle of the pod. So if it thinks that the template has changed, you know, it will reapply the template and redeploy the pods in a controlled manner.
B
If it detects that the, you know, tag changed, it will redeploy them again, and you can do smart things; it's not like a rolling restart, I mean, it's a bit smarter.
B
Yeah, in particular if we go with the advanced solution, where you actually deploy the gateways in a separate cluster in the first place, because again you want to keep certificates separate and separation of whatever: then you cannot even do, like, rolling upgrades, because you do not have permissions in that cluster; it's also managed.
D
Right now it's only reading the Gateway API resources. In theory there's no reason I couldn't use the Istio Gateway ones, other than that I like the Gateway API ones more. No...
B
Yeah, istio-ecosystem, and we should think about whether we want to also... nah, let's not think about it: just the new API, the new Gateway.
E
Yeah, I mean, I'm looking over this. I need to take a closer look, but no, the Gateway APIs are where we need to be headed. So, I mean, this...
E
You know, other things in, in this year, or at least figure out a migration path or whatever. But no, I mean, at a cursory look it looks fine; it actually looks like a good proposal.
D
So then, who is our audience for using this automated thing? It seems like it should probably be the, hopefully, large percentage of not-super-bespoke custom users, and then, how big is that group, right? If that's 80 or 90 percent of users, that's awesome, and I think this makes a lot of sense. If only five percent of users think this is useful, then it's probably not really worth it, right?
B
John, I don't necessarily agree with this statement. I think this should be the default. This should be what we recommend to people, and if they want to do something different, they can do it by themselves. I mean, it's too difficult to support a Helm chart, istioctl install, and 20 other ways to do it.
B
Both... so I thought about our discussion, and, as I said, you were very convincing. No, the simplified gateway was intended, I think, for the current gateway and for current users that use these two APIs, and it's a better experience than what we have.
B
What
I'm
saying
here
is
that
for
the
new
kubernetes
api,
you
should
not
repeat
the
mistake
of
having
two
helm
charts
and
is
your
cattle
and
swells,
but
use
only
this.
This
new
model
that
doesn't
mean
we
should
not
have
the
simplified
gateway
for
the
existing
users
for
the
existing
histo
gateway.
It's
a
bit
more
refined.
D
Well, yeah, it could be, but what we'll have to think about is migration: we want to make the movement to the Gateway API as frictionless as possible.
D
And that means that we either need to support existing deployments, so they can just change the API and not the deployment, or it would need to, like, take over an existing Helm- or istioctl-controlled deployment, right? Because we can't tell people that in order to adopt a new API you have to go create a new Service and get a new IP; that's probably not going to fly.
B
No, but you cannot migrate the IP from istio-system to ingress; that's a problem. So, right, if you migrate, you need to keep it in istio-system. So we don't have the problem with migration if we put it into istio-system, because it cannot be migrated, as far as you know... well, it can, but it's complicated.
D
Yeah, okay, sounds like we have a lot of things to figure out here, but it seems like something that we should do.
E
Yeah, and as far as the MVP is concerned, though, I don't think you have to worry about migration. I think we should try and understand how people... how we wish to do migration, but I don't think that needs to be done as part of the MVP. I think this is more just, like, pushing new greenfield...
E
You know, and, like, large customers that are doing crazy things with ingress, we'll hopefully be able to solicit more; but at least for, like, new onboarding users of Istio, which, you know, there always are, this is something we should kind of strongly be pushing them towards. But yeah, let's just give some consideration at least to migration; I don't think it has to be implemented, at least not, you know, anytime soon.
D
Okay, I mean, I already have some code hacked together, so I can easily get that up in istio-ecosystem for now, and then we can refine it. The hard part will certainly be the productionization: you know, creating a Service and Deployments is not so hard, but what...
B
What is the process these days to get stuff into istio-ecosystem? You just push it there?
F
One thing to kind of caveat, to watch out for, that I've seen elsewhere, like in the operator project where we were modifying gateway services: server-side patch has some odd behaviors on the Service, and in the past we've seen cases where that effectively meant that changing anything about the, in this case it'd be the Gateway object, or the operator object with regard to the gateway, resulted in the gateway IP address being recalculated if it was not explicitly requested, which does cause problems for load balancers and the like.