From YouTube: Istio Environments Meeting 2021-09-22
A: All right, yeah, before we get to the agenda, just one quick process thing: the environments working group owns about five docs tests that haven't been automated. I think we went through and manually verified that these docs are working for the last release, but yeah, if we can get owners for some of these, that'd be great. We don't have to assign owners now, but just to make you all aware well in advance of the release.
A: And then, yeah, looks like John has a few topics.
B: Oh wow, we got a lot, yeah. I wanted to discuss... I forgot which one this one is. Oh yeah, so there's been a proposal to, as the description says, terminate Envoy when the connections become zero. The idea is a more graceful shutdown: right now we just always wait five seconds, which is either too long or too short.
B: The open question is what knobs we expose to configure it, because we need a minimum amount of time to wait; otherwise there's a race condition where, the very second that the kubelet kills a pod, someone sends a request or something. And we already have an existing configuration flag, so what do we do with that one? If you scroll down a bit, there's a long comment that I had about various different options.
B: Well, that's the thing: we don't detect the application being alive, we detect whether there are open connections, so it may not even be a good idea. I think it's certainly not 100% reliable, if you have an application that, for example, on shutdown does a bunch of work and then tries to write into some database at the last second, because there wouldn't be an open connection.
B
However
long
they
they're
setting
it
today
like
they
would
already
have
to
set
that
today
to
wait
a
long
time
to
stay
alive.
D: Are we lame-ducking? I think yeah, so immediately we go.
B: Yeah, I mean, the way we get it in the PR is somewhat hacky; it's using the stats, which is a bit suspicious, but...
C
With
with
the
injection,
unless
I
mean
having
the
starter
in
the
application,
it's
very
easy
because
application,
we
start
the
application.
I
don't
know
if
we
can
inject
some.
C
I
think
we
discussed
in
the
past
by
adding
some
some
binary
to
the
container,
either
through
to
unique
container
to
mount
volume
or
something
yeah,
and
that
will
be
something
that
will
solve
all
our
problems.
Basically,
because
it
will
launch
the
application
it
will
detect
when
it
dies
it,
it
can
yeah.
C
And
it's
consistent
with
what
we
are
doing
with
with
docker
vms
and
other
places
where
we
have
full
control
and
where
we
don't
have
this
problem
in
the
first
place,
we.
C
To
start
is
another
thing
that
is
perfect,
because
then
you
can
you
can.
That
would
be
a
clean
implementation
of
way
to
start
actually
because
yeah.
B
B
B
B
C
And
it
will
perfectly
consistent
with
with
the
injection
less
thingy,
but
I
mean
if
you
build
a
docker
image,
you
probably
have
the
binary
already
you
don't
have
any
problem.
If
not
it's.
I
love
this
idea
because
it's
it's.
C
A: Sorry, can we go back for some context on the existing options and the option this PR adds? Sorry, I just don't have all the context here.
B: Yeah, the existing option is basically this environment variable called TERMINATION_DRAIN_DURATION, which defaults to five seconds; we just wait that amount of time and then we exit. There's also a Kubernetes setting named terminationGracePeriodSeconds, which is basically: when the pod exits, Kubernetes will send a SIGTERM to the pod and then, after the grace period, it will send a SIGKILL. So if you don't gracefully shut down within 30 seconds, as a default, then it will forcefully kill you.
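(For reference, a rough sketch of how those two knobs sit side by side on a pod; the field names are as described above, while the values and workload names are illustrative only:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # illustrative workload
  annotations:
    # Istio's knob: how long the agent drains before Envoy exits (default 5s).
    proxy.istio.io/config: |
      terminationDrainDuration: 5s
spec:
  # Kubernetes' knob: SIGTERM on pod deletion, SIGKILL after this (default 30s).
  terminationGracePeriodSeconds: 30
  containers:
    - name: app
      image: my-app:latest           # illustrative image
```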
B: Yeah, but it's always either too slow or too fast, right? It's never just right. But the way this is implemented, it's actually a minimum of five seconds and could be longer, so it doesn't actually help the shut-down-quickly case; otherwise we're at risk of a last-second request, where we see there are no connections and we close too early.
G: If the application doesn't close all of the connections, does it hang, or is there, like, a force kill?
C: So if it doesn't exit on SIGTERM, then it's going to hang for 30 seconds... but we'll die anyway after five seconds. Yeah.
C: The expectation of the application developer is that they will get 30 seconds to clean up and do whatever. So they expect that, without Istio, they'll have 30 seconds where they can still keep the connections alive, and for long-lived connections sometimes it's useful to have this.
B: Well, I'm just concerned about modifying their container. We're not modifying the Docker image, but we're modifying, like, the container.
B: Oh, I think it's definitely worth checking out, and maybe we find out it's great and we should just turn it on for everyone, or at the very least make it an option. Right now the wait-until-start is an option, and there are downsides, but there are also plenty of upsides.
C: So effectively we are talking about pilot-agent here. I mean, we are going to put pilot-agent into some volume, mount it into the application container, and run the application under pilot-agent, which I think someone already implemented as an option in pilot-agent. So it's kind of like we'd signal...
B: Signal to the container: we mount some shared UDS socket or something, and we send some signal, like "I've started up" and then "I'm shutting down."
C: Okay, and that happens to be the same thing we are doing for injectionless for Docker, meaning that probably this binary will also do the proxyless gRPC bootstrap, and so we don't have to run a proxy container.
G: Yeah, I mean, this sounds great. The only thing I guess I would say is: where should the user be able to configure this, right? Would this be a candidate for, like, ProxyConfig, part of the CRD that you guys are implementing, or is this going to be a global thing, when the user does configure it? I think this is a good step forward.
G: Honestly, the only downside I guess I would see is that, after working with a lot of people, people don't do graceful shutdowns very well in their environments. They're not listening for SIGTERM, and a lot of users have poorly written applications that basically have to be force-killed. I don't know; we can't worry about those kinds of things. If people are doing bad practices like that, they're just going to get force-killed.
H: Okay, so that means I no longer need to configure this at stop time, because now we have this intelligence to terminate Envoy when there are no active connections. Especially for [inaudible], our users are struggling with how to configure this for their service, right? So we do 45 seconds. But if the user doesn't do anything, this is pretty transparent to them, right, if I understand correctly?
C: Is there any reason not to have it on by default at some point, I mean, after we test it, including startup?
H: Yeah, it would be really nice if this were stable and we enabled it by default. So this triggers a related question: you know how we have, like, hold the application until your proxy is ready, right? So we talked about the need to hold the... I guess that one is hold the application, and this one is, I think, hold the proxy from stopping until the application stops.
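(For context, the start-side knob H is referring to is the existing mesh-wide holdApplicationUntilProxyStarts flag; a minimal sketch, with the stop-side counterpart being what is proposed here:)

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      # Start side: app containers are held until the sidecar is ready.
      holdApplicationUntilProxyStarts: true
```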
C: It will replace it, and it's much better, because the starter has far more control. I mean, you can communicate with the proxy through some socket or some other mechanism, and you don't have to have the wait loops that sleep 100 milliseconds and retry.
G: Oh yeah, so my question was: what are we going to use to decide to turn this from false to true, right? Whether it's an environment variable, or we're just going to give it some time to incubate and get some feedback from some users. How do we intend to measure that, so that we know? I don't know, because if it were up to me, in my world I would just turn this on.
C: So John, I'll try with my starter; that's the one I'm experimenting with, and I'll let you know if it works or it doesn't work, and then we can sync up and figure out how to move forward. Okay.
A: Well, because we have a lot of stuff, I'll move it to the bottom, okay. What is the case today where the proxy exits before the application, so that this would fix that case as well?
B: No, I mean, this one is just real quick: we publish the Helm charts now. There are a lot of Istio charts that people were unofficially publishing, so it'd be nice if people went and, like, starred ours or something, so it will show up first. And use them, of course.
A
That's
it
cool.
Do
we
want
to
do
an
announcement
or
put
it
somewhere,
where
I
think
a
lot
of
community
people
would
probably
start
anyway.
G: So, kind of along similar lines: lately we've had people request annotations and labels and other things that aren't visible in Helm. My feeling is that once we actually go live with this, there are going to be more requests for other things that Istio does support but our Helm charts don't expose. Should we come up with some kind of a policy or a mechanism, or are we just going to...
G
You
know
just
approve
a
lot
of
these
like
configurable
items
that
people
are
going
to
be
writing
issues
come
on
because,
right
now,
what
I'm
thinking
happens
is
people
put
the
charts
into
their
own
cacd
and
then
from
there
they're
able
to
kind
of
modify
them.
You
know
as
needed
versus
if
they're,
using
an
official
home
repo,
I
think
there's
going
to
be
a
desire
for
them
to
kind
of
not
kind
of
work
in
that
old
workflow.
C
You
know,
I
think
I
think
we
discussed,
I
don't
know
if
you
missed,
I
think
few
meetings
back.
We
discussed
this
same
subject
and
someone
was
sending
a
pr
or
something
my
opinion
at
that
time,
and
I
don't
think
we
anyone
disagreed
strongly
was
that
we
should
support
everything
that
the
normal
health
charge.
I
mean
if
it's
a
standard
pattern
like
annotations
labels
that
is
commonly
used
in
the
helm
world.
We
definitely
should
support
it
and
annotation
labels
are
fit
in
this
category.
C
So
if
it's
part
of
I
think
john
did
the
experiment
with
creating
a
helm
chart
from
helm,
create
init
or
whatever
yeah,
and
there
are
a
bunch
of
things
there
that
are
presumably
common.
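(For reference, the stock `helm create` scaffold exposes a handful of these conventional knobs in its values.yaml; abridged:)

```yaml
# Abridged from the default values.yaml generated by `helm create`
replicaCount: 1
imagePullSecrets: []
podAnnotations: {}
nodeSelector: {}
tolerations: []
affinity: {}
```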
G: Exactly, yeah. So we should probably just, like, put something in about how to customize; we should probably include that as part of the Helm documentation. I can take that action item.
A: Yeah, so I just wanted to bring to the working group's attention: it seems like there are a few pretty severe issues with the revision-based upgrade using the in-cluster operator, and I don't necessarily know who should own these issues right now or who has expertise in this area, so I just wanted to get some visibility on it. I looked into these issues when they were reported, and it seems like the wrong revision of the in-cluster operator is reconciling revisioned IstioOperator resources.
A: So it's pretty broken, and I guess, overall, maybe to raise some discussion on whether we want to continue to support the in-cluster operator. That's basically it, because I don't know who owns this feature right now.
B: I'm a bit skeptical of saying it's deprecated, just because of the churn, but I do kind of think that we should go and say on the doc that this is not the recommended approach, that we strongly recommend using the other ones. Because right now, if you go look, it's the second one on the list and it doesn't really have anything telling you not to use it, so people will just use it. I think we should disincentivize people from using it, because there's a ton of bugs; it's not just this one.
A
Yeah,
I
think
the
community
has
kind
of
come
to
understand
that
it's
not
as
supported
or
at
least
from
the
issues
and
slack
threads,
I'm
reading
it's
kind
of
people
warning
each
other,
but
not
not
using
the
end
cluster
operator,
but
yeah.
B
But
it's
a
bad
experience
to
find
that
out
by
wasting
eight
hours,
trying
to
get
it
to
work
and
I
think
it'll
be
way
less
useful.
Once
we
have
the
gateway
stuff,
because,
although
we'll
install
is
easter,
od
and
if
you've
already
installed
used
to
od
or
you've
already
installed
the
operator,
you
could
have
just
installed
eastwood
directly,
so
there's
kind
of
minimal
benefit
to
use
it.
Now
that
we'll
have
one
deployment
to
install,
whereas
when
we
first
made
it,
we
had
like
10.
B
C
B
E: Yeah, and I mean, this is notwithstanding what we designed with operator; if there's something that's obviously broken, we should fix it.
C: So I think that will also reduce the number of bugs, if we kind of de-emphasize it in the documentation.
E: Yes, so Helm, I guess, is not really straightforward; there's not really an explicit path. I think we could work on finding something for operator to istioctl; that's a very straightforward move.
E: I think that's largely true. The only possible wrinkle could be if somebody started using the non-Helm fields, in which case there's some manual migration involved, but it's not anything terribly difficult. Okay.
E: I don't have any concerns; I think it sounds like that's the consensus. I think it makes sense to have as few installation paths as possible, if there's really no longer anybody that absolutely needs operator. It used to be that some folks were using it as a kind of basis for a service offering.
C: Okay, so we're left with two installations: there is the istioctl method and... yeah, that's good enough.
C: So we continue to support them. I mean, deprecation scares people; it means that we are, you know, removing support. There is not a huge cost to continue to support it at the level it is today. So if it works today, it can work five releases from now, and it's less scary for users, yeah.
I: Yes, yeah, and I just wanted to echo what Martin was saying: the fact that we have these issues opened by a customer means that people are still using it, so we should fix them. I have no issue taking on those issues; assign them to me and I'll try to fix them up.
C: Speaking of deprecation, one thing that we do need to deprecate, probably, I don't know if it's official or part of the istioctl operator API: we have the gateways, we have all kinds of options that are probably not doing anything anymore, and I don't think we have any doc or any indication that those are just empty strings, basically; I mean, they do nothing.
A: So I do have a quick question on how operator in-cluster upgrade is supposed to work if you have revisions, because I noticed when you're doing an upgrade, it'll just reconcile older and newer CRDs over top of each other; it'll just kind of flip-flop. I don't know how that's supposed to work.
E: Well, yeah, correct. So canary was never... I mean, it's supported in theory, but in practice it's very tricky to set it up so that it works correctly, which is why it was never really advertised as something that we recommend operator for. But the short answer is that you can install operator so that it only watches its own revision, and you need multiple copies of operator to have canary work correctly.
A: Okay, yeah, right. What I'm saying is, even if you have multiple revisions of the in-cluster operator, when they reconcile, my understanding is they'll overwrite the base of the other.
C: Let's not get too far; I mean, we want just to fix bugs and try to reduce. So if it flip-flops, we can just not touch them, you know, not update it. The idea is to minimize the amount of changes and support we need to do for the operator if it's no longer recommended.
E
Yeah,
so
just
just
on
that,
specifically,
if
I
remember
it's
been
a
while
since
since
I
actually
set
this
up,
but
I
I
think
the
way
we
dealt
with
that
is
is
just
not
to
install
base
in
one
of
the
operators.
H
Yeah
yeah,
I
think
what
sam
is
on
to
something,
though,
because
if
the
in-cluster
operator
has
a
big
limitation,
you
know
we
should
highlight,
and
it
would
help
people
make
decisions
too
like
if
the
same
you
know
we
don't
have
enough
people
to
support
and
we're
not
recommended-
and
this
could
be
one
of
the
critical
reason
why.
E: Yeah, I think that makes sense, given that canary is in wide use now, and I think we should probably explicitly say that it's not recommended for canary upgrade. Yeah, makes sense.
C: And also we should advertise and make clear on the website... I don't know how we can explain to the users that we are committed to supporting all the features that we launched, but some of them will be in maintenance mode, meaning they will work for a very, very long time, but we still do not recommend people use them; we recommend the new APIs instead.
H
Yeah
we
need
to
craft
their
messages
pretty
clearly
and
also
making
sure
user
knows
you
know
there
are
alternatives,
hopefully,
are
way
better
out
there,
which,
in
the
case
of
the
networking
api,
I'm
not
sure
you
know,
you
know
the
kubernetes
gateway
api.
You
know
user
would
be
willing
to
switch
at
the
moment.
C: We may make some changes, for example: for the base, we can say that operator only operates on istiod itself, but you will need to do a helm install or istioctl for the base, so we have full control over the base. And that will solve some of the current problems, because the canary will install it with Helm or istioctl. Okay, let's fine-tune it; I think we've spent enough time on this.
A: I can, I can. Mode change.
C: I did want to sneak it in for gateways: we will recommend the new way, I mean standalone, user-operated independently, or automated when it's available, and we are going to dep... well, I don't know, deprecate or maintain. I mean, I don't know if we would still want to support the operator managing gateways; that's my concern, when we have both Helm charts and automatic mode.
A: Yeah, let me present this next one; hopefully this is pretty quick. I just wanted to get some opinions from Environments. So basically there was a problem in our webhook patching, where we removed leader election from the validating webhook, and we never had it, I don't think, for the mutating one. I could be wrong.
A: We might have removed it like a year ago. But basically this is a problem, because when you have multiple replicas of istiod, they can fight each other when they're updating their respective webhook. So basically the thought was: we don't need leader election, because only one revision should be patching. And then the problem there is: okay, well, you can have multiple replicas in a revision that you do need to leader-elect among. So this PR basically adds a new leader election type that says: okay...
C: I mean, for example, for objects that are shared it would be catastrophic, because we already had the bug where two revisions were fighting with each other... yeah. Go ahead. You're also working on having the default revision, so the default, to win and to become the validator. Shouldn't the default revision also be the one that is patching and doing all the other stuff, instead of a random red, green, whatever?
A
Not
exactly
so
for
web
hook,
patching
only
the
given
revision
knows
the
cert
to
patch
in
and
each
revision
handles
its
own
web
hook.
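(A sketch of what "each revision handles its own webhook" looks like; the revision name and exact fields here are illustrative, but the shape follows the standard admissionregistration API:)

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector-canary    # one config per revision ("canary" is illustrative)
webhooks:
  - name: rev.namespace.sidecar-injector.istio.io
    admissionReviewVersions: ["v1beta1", "v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        name: istiod-canary               # that revision's own istiod
        namespace: istio-system
        path: /inject
      # caBundle is the field the replicas race to patch; only this revision's
      # istiods know the right cert, hence leader election per revision.
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
```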
C: Necessary, because otherwise they cannot talk to each other. I mean, the whole premise of revisions is that, at the end, all workloads can communicate with each other. So the...
C: Typically not, but I don't know, because the downside of this is that it would be very desirable to have a default or some dedicated type, because that's the dangerous part; that's the most dangerous part in Istio.
C: Let's table this; I have another topic which is very closely related to this, so I can discuss the parameters there. So we'll go ahead and do the other stuff.
A: I mean, it's fine in that case, because the old revision is broken anyway; the old revision's replicas are going to be fighting each other for the old webhooks, like before, but at least the new revision is fixed. So it doesn't make things any worse; I think it makes things strictly better for the new revisions.
C: There is another alternative, really, which is... I mean, are we going to install the east-west gateway or some similar gateway by default? We discussed it in the past; I know you didn't necessarily agree that we should have a gateway and have the webhooks and everything behind the gateway, which is actually dispatching. So, instead of having...
C: I had a PR at some point to have the default revision also, you know, have istiod deploy a gateway. So basically there would not be one different webhook per revision; there would be only one webhook, the default, and the default can dispatch through a gateway route to different revisions.
C: That has other benefits. I mean, you can put a real certificate there, so you don't have to patch anything. You can support things better, like externalizing your DNS, and because external istiod is really a gateway; I mean, istiod is behind the gateway. Should we explore this again? Should we discuss...
C: It is not hard; I mean, we had it before in 1.0, I don't know, three or four releases back, istiod had the sidecar, which acted as a gateway, or can act as a gateway.
C: It also means that we can implement percentage switching between canaries, because right now...
C: We have very strict routing between revisions and istiods, but with a gateway, I mean having a default gateway that terminates xDS and all requests: basically on the default we can have, you know, 20 percent go to a canary, and then if it's down, go back to the primary. So it's pretty much what we are doing for regular applications when we do traffic shifting and subsets.
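(The app-side analogue C is describing is ordinary weighted routing; a minimal sketch, with hypothetical istiod service names, of what an 80/20 split would look like if control-plane traffic went through such a gateway:)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-canary-shift                                    # hypothetical
  namespace: istio-system
spec:
  hosts:
    - istiod.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: istiod.istio-system.svc.cluster.local        # primary
          weight: 80
        - destination:
            host: istiod-canary.istio-system.svc.cluster.local # canary revision
          weight: 20
```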
C: Let's think about it. I don't know, it's probably more work, but it's going to solve many problems and create more opportunities, especially with multi-cluster; you know, fallbacks and a lot of other things will be possible with that.
A: Okay, cool. That makes sense to me. If we could review this PR, though, and either say it's maybe too dangerous or move it forward, just because it's a 1.11 bug. So yeah, thank you; that's everything on that topic.
A: I think it results in a lot of error logs in istiod, because it'll try to update the webhook and then get a conflict. I don't think it's a release-blocker type thing, it's not a huge issue, but it's...
C: It is a huge issue if you have two istiods that use different certificates, because then one of them will be broken.
A: So then this is not a problem; in that case, this is just for the same revision.
C
Too
much
about
it
it's,
but
we
still
should
still
look
at
if,
if
using
a
gate,
we
wouldn't
improve
other
things,
but
not
in
this
context,
not
for
this
bug,
but.
A: Okay, can we think of any other cases where we want a single replica from each revision to win a leader election? If not, I'm happy to close this as won't-fix.
A: Okay, so maybe this is not a big issue. Okay, cool.
A: And yeah, Constantine and John, you can get yours in ahead on this one.
C: Are we on validation, or are you still discussing validation? So, the validation URL: there are a few things about validation that I actually have questions about, for people more familiar with validation, if they can answer. My understanding has been that validation is not actually doing any lookup in the Kubernetes cluster; it is more or less stateless. I mean, you can have a validation webhook...
C: That is, you know... it's version-specific but not revision-specific, and it's not dependent on what else you have in the cluster. So someone could have a single, you know, istio.io validator that all validations in the world point to, and that would validate revisions without any... well, there are some privacy implications.
C
You
don't
want
to
have
as
this
ui
or
do
world
validation,
but
it
could
be
something
provided
by
the
net
is
provided
by
someone
or
someone
else,
not
necessarily
that
we
is
planning
to
have
such
a
central
validator
for
gateway.
C: But other vendors can probably do the same thing. So basically you don't expect NGINX to install a gateway validator, Istio to install a gateway validator, and, you know, Linkerd to install a Linkerd validator; there's one per Kubernetes install. The question is whether we can do something similar for Istio, where we move validation, you know, maybe even out of istiod, or into an istiod that is dedicated to validation, or, you know, a centralized istiod or an external istiod.
B: One thing we could also do is move probably eighty percent of the logic to the OpenAPI schema, so Kubernetes will do it for us. Although if we don't move 100 percent, then it doesn't really help us that much, because we still have the webhook.
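(A sketch of the kind of check that can move into a CRD's openAPIV3Schema, so the API server enforces it before any webhook is consulted; abridged and illustrative:)

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualservices.networking.istio.io
spec:
  group: networking.istio.io
  names:
    kind: VirtualService
    plural: virtualservices
    singular: virtualservice
  scope: Namespaced
  versions:
    - name: v1alpha3
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                hosts:
                  type: array
                  minItems: 1      # structural validation, no webhook needed
                  items:
                    type: string
```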
C: And the other question is whether we can make it backward compatible: so if you have 1.12, whether it is capable of validating for 1.11. Because we had the discussion about which validator you use if you have 1.11 and 1.12 at the same time in the cluster. Obviously you cannot create 1.12 resources, because 1.11 will be confused, but if we use the 1.12 validator, 1.12 can validate 1.11.
C: Okay, let's explore it; let's spend some time on it, because validation hasn't changed in a very, very long time, and with OpenAPI validation improving, it's not even clear how much it's...
C: Probably aligned with Kubernetes: when we move to Kubernetes Gateway, six months from now or one year from now, whenever, since Kubernetes will take over Gateway, probably that'll be a good time to schedule the validation changes to evolve, and keep it stable until then. Because then we'll no longer have to... at least the Istio Gateway, and probably other APIs, will migrate upstream and become... yeah.
J: What's the proposal here? Sorry, I might have missed it.

C: The proposal is to figure out if we can improve validation and turn it into, you know, a revision-independent service. In a multi-cluster environment you'd have only one validator running somewhere, and you don't have to worry about whether validation is revision A or revision B.
C: Another benefit of a central, you know, istio.io-as-a-service validator is that in a CI/CD system you could just call it with a proper certificate, and you can validate your own configs without actually going to a specific cluster.
C: Okay, no proposal, actually; it's just opening it up for discussion and ideas, I mean, if we are interested.
B: That's pretty much it; I think we should do this. Are there any concerns or comments? Just review the PR. One thing we kind of talked about before is where it will run, and the conclusion that I personally came to was that running it in istiod is kind of the only thing that really makes a lot of sense, because the whole point is that there's zero friction to getting started, and if you have to go install a gateway operator to use the gateway, then we've kind of defeated a lot of the purpose. So with this, the user will just do helm install istiod, create a five-line Gateway YAML, and everything will just automatically work out of the box. So it's a pretty nice experience.
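(The "five-line Gateway YAML" would be something like the following Kubernetes Gateway API resource; the API version and names here are illustrative and may differ by release, with Istio's controller picking it up via the gatewayClassName:)

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: my-gateway                 # illustrative
spec:
  gatewayClassName: istio          # handled by istiod's gateway controller
  listeners:
    - name: http
      port: 80
      protocol: HTTP
```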
C: No, it should be included as an option in istiod, but not as part of the istiod binary; a separate controller, if we can do it. So if I have Istio 1.10 and I want to use the gateway controller, I can take the gateway controller from 1.12 and put it in a different class, and then it's, you know, faster iteration. Basically we can iterate on Gateway much faster, especially if you start generating, materializing...
C: ...the Istio resources. Right now it's in memory, but if you have a controller, you can actually create the Gateways, the VirtualServices, and everything that is currently created in memory, and then we can add support for Gateway much faster and without...
B: Yeah, so that's pretty much it. The main controversy is where it lives and the permissions, but if we're okay with that, then it's pretty straightforward. There is some not-invented-here, but I have a long comment explaining why I did that, and I think it's for the best. It's actually extremely simple, like just creating these three resources, but it's very hard to write a controller in Kubernetes that does the right thing, apparently, as I've learned.
C: So are we going to document this? It doesn't work for... for each... okay, no.
C: I was wondering: we have three install mechanisms today for the Istio gateway, and should we have a fourth one that says, hey, create a Kubernetes Gateway and then use VirtualService and use your gateway in addition to that? But instead of, you know, using helm install for the gateway, or whatever, istioctl, operator, say that I create a Kubernetes Gateway resource and that will automatically create the in...