From YouTube: Combined WG Meeting 2022/03/02
D
Yeah, let me present.
D
Yeah, so this is an RFC that we are presenting. It's based on a reference design that Aspen Mesh has done for Istio 1.9 and Istio 1.11. We previously received feedback from John (this was, I think, July of last year) just to see if the overall direction was something that he would approve of in the future. I don't want to speak for him.
D
So, for a little bit of background: this is now a stable feature in Kubernetes. There are a couple of configurable things that people have to do to opt in to get dual-stack behavior, but it's no longer behind a feature gate, and at least more of our customers are asking about support for dual-stack networking.
D
We
have
ipv4
only
support
and
there's
ipv6
only
support,
so
so
yeah.
So
as
an
effort
that
we've
undertaken,
we've
got
this
working,
the
big
call
outs.
So
this
is
a
lengthy
document
and
it
would,
it
could
consume
the
entire
time
of
this
meeting
right.
I'm
not
going
to
I'm
not
going
to
really
ask
people
to
review
this
as
part
of
this
meeting.
This
is
more
of
a
what
are
some
of
the
high
level
changes
for
this
as
well.
D
As you know, please review this offline and then we'll talk about it at a future meeting if there are any other comments. Hopefully everything can be addressed as comments come in, rather than taking up meeting time, right? So, we give a little bit of the behavior.
D
As far as what the current status of Istio is: if you try to install it on a dual-stack cluster, this isn't supported, so obviously things are going to be broken. But the big call-out that I want to make is that if we are supporting dual-stack services... and let me zoom in here.
D
The big call-out here is that, in order to get this working, at least as we see fit: if a client is actually making a request over IPv4, they get the virtual IP of the service as an IPv4 address, or they get an IPv6 address, and that family is preserved as the request flows from the client into the actual server application.
D
Product page does resolve the IP from here; it makes an IPv6 request. Let's just say it goes over the IPv6 virtual outbound listener that we have created in addition to the IPv4 one. From there it gets routed to the appropriate listener; it does a match, it goes through routes, and it goes through a new cluster, the outbound6 cluster, which is actually associated with the pod endpoints of that particular service.
D
So
I
just
wanted
to
make
this
clear
so
that,
as
as,
as
we
start
putting
up
the
prs
for
this,
that
people
are
aware
of
of
like
why
there
has
been
at
least
a
doubling
of
this
john.
D
Maybe Yingchun has a better answer for this, but the routing is set up completely independently, as far as I recall. I think kube-proxy sets up ip6tables rules for that. So yeah, if you make a call over IPv6, it gets routed through IPv6 internally.
D
So
so
it's
going
to
depend
on
how
your
networking
is
set
up.
So
you,
when
you
create
a
a
service
in
a
dual
stack
cluster,
you
don't
have
to
actually
specify
these
new
fields,
ip
families
and
ip
family
policy.
These
are
new
fields
and
actually
there's
there's
issues
with
with
the
new
fields
too.
So,
if
you
want
to
change,
say
http
bin
into
being
an
ipv6
or
sorry
a
dual
stack
service,
you
actually
have
to
delete
the
service
like
it's.
It's
considered
immutable,
but.
G
John, are you asking about the behavior of the current implementation of Kubernetes with a specific CNI plug-in, or about the specification? I mean, I don't think they have any specification that says what you must do, or that it will only go to IPv4 addresses for one cluster.
E
Yeah, I would say I'm looking for a specification if it exists, and if not, then what the prior art is, and whether they did the right thing.
G
I mean, we are at a release where we support VMs; we support not only Kubernetes. We are not bound by Kubernetes limitations or rules, because in reality you can have a VM. Once you have a VM, it implies that the VM can have an IPv4 address and an IPv6 address, and we should be able to load balance across them. So I think we should have a very strong stance that when you call a service, you will load balance across both IPv4 and IPv6, because otherwise it will break a lot of other things.
G
What it does and what Istio's behavior will be: because we are a mesh, we overlay on top of the existing network. I've seen a lot of mesh networks that just assign one IPv6 to everyone and use IPv6 as the overlay IP, and then that can be mapped to IPv4, IPv6, or some tunneling protocols in between them. That's something we need to consider, also to avoid doubling the stack, because we have DNS interception.
G
We have a lot of other features. We could say that services resolve to a single address, which can be IPv6, and then we have a single one; we don't have to copy it twice.
D
Yeah, John, to get back to your question about pod IPs: the pod actually does get both an IPv4 and an IPv6, like always. It's only that the routing with service IPs is actually changed when the service definition is changed, right? ("But the proxy then listens on both IPs, right?") That is correct. So you can still have an IPv4-only service or an IPv6-only service in your environment, even though you have both pod IPs.
G
And health checks would probably use the address itself, so that's a different problem. But you are talking about a single cluster; if you have a mesh with multiple clusters, some clusters are v4 only, some are v6 only, and some clusters are dual stack.
D
Yeah, so we have documentation ourselves for how our customers actually need to do this, and so, as part of this effort, yes, we would have to document it. With Kubernetes I don't believe you can do it in place; with OpenShift it's just a few commands, with OpenShift 4.9 and higher. But as far as Istio, I think you have to do a rolling Kubernetes restart; it's a little bit more drastic.
D
If
people
are
trying
to
go
from
an
ipv4
only
cluster,
I
mean
at
least
from
what
I've
seen
you
know,
at
least
with
kubernetes,
not
openshift.
It
may
want
to
be
like
a
hopefully
you've
treated
your
cluster
as
a
cattle.
You
know
and
can
kind
of
just
kind
of
rebuild
that
easily.
H
Okay. I have seen that there are considerable differences between the dual-stack implementation in Kubernetes version 1.18 and 1.20. So that's why I asked.
D
Yeah, and that's actually why up here we call out a couple of things. It went stable in 1.23, so in my opinion either we support 1.23 or higher, or 1.22 and higher, but probably not anything further back than that. It actually goes back all the way to 1.16, but that was considered alpha, and then from alpha I think they went to beta, in 1.18 or 1.19, I can't remember exactly when, and then it went from beta in that time period to stable.
D
So given that this is going to be, at least, what we're calling an experimental feature that the user is going to have to opt in to, it would be nice to be able to build off of stable networking and CNI behavior within Kubernetes.
G
So I want to reach a conclusion on the endpoints being a mix of IPv4 and IPv6. I don't see any alternative to allowing that, and I would really like to have a clear statement that this will be the case, so we are not confused later, and also to separate the two problems, I mean.
G
The problem of having a mix of IPv4, IPv6, and dual in the endpoint pool should be the P0, and the thing that we need to deliver at stable as soon as possible, because, again, forget about Kubernetes: you have a VM that is IPv6 only, you have a VM that is dual; that needs to be supported. We claim to support VMs, so having mixed endpoints, even with services being IPv4 only, needs to work, and I want to decouple that from the problem of supporting dual stack for the service IPs, the VIPs.
D
Okay, I mean, yeah, we'll have to talk more about that. Please add some more discussion on what the priorities are, because we're also working with Envoy to get this kind of better support for multiple addresses as well, so that we don't have to do this doubling, right? So, at least in the first phase, we can wait.
G
I mean, we can support the typical service VIPs as IPv4 only, and since we control DNS, we can make sure that it resolves to IPv4 always, until Envoy supports better xDS, not bloated xDS, and then we can start support for it. It doesn't have to be everything at once. We can easily add IPv6 support on endpoints today, without changing the xDS, without changing almost anything else, and get it to stable, basically in terms of testing and everything else, because it's a relatively small problem.
F
So the problem for that is basically that I mix the IP families. If the endpoints can come back with any of the IP families, there is a mix: basically, a request can go to an IPv4 or an IPv6 address. Yes, it would balance in between.
F
So initially, when we designed this, we talked about the possibility for that, but the decision we made is that we want to strictly send IPv4 to IPv4 traffic and IPv6 to IPv6 traffic, just for security reasons, instead of balancing between the two. If one of them doesn't work, then we would lose half of the traffic.
G
I'm not sure I understand the concerns here. I mean, that's reality: you have users that have clusters that are IPv4 only, they have VMs, and environments that are IPv6 that we cannot control, and we cannot dictate that it must be IPv6 and IPv4. And you don't want to lose half of the capacity for the service just because... I mean, what would be the alternative?
F
It will be better to make this more clear when we do a demo for that.
D
Okay, yeah. I think there's still some more discussion to be had there; let's take this up again, as we're already 24 minutes deep, right? But you are right: you can't do IPv4-mapped IPv6 addresses; that is something that is done. We'll have to clarify some of the reasons why, probably in the doc, so hopefully we can make the case for what it is that we're trying to do.
D
But at the same time, I do know that there are concerns, so let's try to resolve this within the document after this meeting, or in side discussions with us, Costin. I do understand your concerns, though. Anyway, so, for environments.
D
Sorry, the thing for environments is that we're proposing an environment-variable kind of opt-in. I know that we don't want to use environment variables long-term; the reason being that this would be considered experimental, and then probably in the next release, or the release after, it depends on how the integration testing is coming along.
D
Yes, we actually want to auto-detect it, right? And so we want to be able to actually do some more determination and say: okay, yes, we're in a dual-stack environment, we have dual-stack services, we have a mix of services, et cetera, and actually just have it work similar to what IPv6-only does right now, right? So, yeah.
D
So that's kind of why we are not proposing a change to the API, but actually, long term, a kind of auto-detection. And that's another thing for environments. Again, I go through here and list all the changes down here; you can actually see an example as well.
D
I go through: here's a test setup, and then I provide a config dump with at least a working implementation of this, and then some of the limitations, and then other things that we're working on within Envoy, which he actually did address, right? There are memory consumption and usage issues, at least with this approach; there are also load-balancing and weighting issues, and other concerns.
D
Especially as you get into more advanced use cases, like VMs and multi-cluster, that all kind of needs to be addressed. But before I do that: Yingchun, did you just want to give a quick demo of what it is that we do have?
F
Okay, so I'll just show what I created. There are three namespaces. One is IPv6 only for the services, and you can see my endpoint is IPv6 only. Then I have the IPv4 one, where the service is only IPv4, and I have a dual-stack one, which has both an IPv4 and an IPv6 service and has different endpoints for that.
F
So I can do a check; I can send traffic through each one of them. As I said, in this case IPv4 goes to an IPv4 address and IPv6 goes to IPv6; we don't mix the traffic between them. So this is the case where I go to the dual-stack cluster, and I can use the IPv4 one or I can use the IPv6 one. The service we created ourselves can print out...
F
What's
the
remote
that
dressy
is
so
you
can
see
what
he
is,
and
also
you
can
see
from
here
is
what's
addressed,
try
to
reach
to
so
if
I'm
doing
the
ipv6
one
you
can
see
the
ipv6
addresses
try
to
connect
here
and
the
remote
address
is
from
the
ipv6
as
well
similar
thing
I
can
go
into
any
of
the
ipv4
or
ipv6
address.
So
if
I'm
going
to
just
ipv6
one,
that's
the
part,
I
only
have
the
ipv6
one,
so
it
will
go
into
ipv6
address
for
that.
F
So
this
is
basically
the
traffic,
it's
a
betrayal
mixed,
ipv4,
ipv6
or
dual.
It's
also
work
with
the
way
it
was
set
up
for
that
I
can
do
a
quick
for
that.
Let's
quickly
take
a
look
at
the
four
different
components
for
that.
If
I'm
just
grab
the
cluster
for
yet-
and
you
can
see,
I
have
oba
inbound,
because
this
is
my
dual
stack
and
I
have
my
inbound
has
two
of
them.
I
have
inbound
for
normal
ipv41
and
ipv6
one.
We
do
that
inbound
six
for
that
and
then
you
can
see.
F
I
have
difference
the
cluster
going
to
different
places
for
in
only
in
the
case,
if
the
service
has
dual
stack,
I
we
have
another.
One
is
called
the
outbound
six
going
to
ipv6
address
in
the
case
of
the
ipv4,
only
or
ipv6
only
service,
the
cluster
is
only
has
one
outbound
services,
so
this
is
still
using
the
original.
F
The
traffic,
the
names
name
conventions,
just
outbound,
something
with
the
for
the
cluster,
and
if
I
could,
I
can
just
take
a
look
at
the
similar
thing.
I
can
take
a
look
at
the
listener.
F
And
you
can
see,
I
have
the
inbound
cluster
with
this,
and
for
the
ipv60
is
the
inbound
six,
for
that
route
is
a
little
bit
different
in
a
way
that
route
is
because
it
is
a
dual
stack
cluster,
which
means
I
have
each
of
the
parts
I
have
all
the
ipv4
and
ipv6
address.
So
in
the
even
in
the
case
of
I
have
ipv4
the
v6
only
service,
I
the
route.
F
I
still
use
that
I
append
in
the
ipv6
for
that,
because
just
indicate
it's
the
ipv4,
a
v6
address
for
the
routes,
for
it.
D
Yeah, I'm going to go back to this really quickly, Costin, as far as things like ipv4_compat, which I know that John addressed. Envoy has had numerous issues with ipv4_compat, and some of the behavior was kind of tricky when we were testing things, because we did look at :: with ipv4_compat set to true, and what exactly happens.
G
Just an example; I mean, I'm not saying we should use it. I'm sure it should eventually be fixed. For the long term, I think the goal should be, by default, to support arbitrary endpoints with whatever family it is, assuming it ticks all the boxes. I mean, I'm not proposing the solution now.
D
Oh yeah, no, absolutely. I think the reason why we're bringing this here is that we've got this to work with a single cluster, right? That was the requirement that was given to us by our customer. But long-term speaking, yes, there are actually a lot of concerns around this. I'm going to have Kenan talk a little bit more about this, but the first phase would just be:
D
Can
we
actually
get
this
working
within
a
you
know,
a
single
cluster
right
and
then
just
say
hey.
This
is
experimental.
It's
not
supported.
You
know,
and
you
know
these
like
hybrid.
You
know
cluster
organizations
and
and
and
then
actually
like
what
is
the
step
for
us
to
actually
start
figuring
out.
D
What
should
the
routing
be
for
for
for
for
alpha
for
beta
for
vms,
for
multi-clusters,
right,
like
I
mean
multi-cluster
and
vms,
are
gonna
present,
so
many
other
issues
and
as
far
as
like
yes,
can
this
actually
work
well
with
all
of
the
other
features
that
seo
does
have?
And
sorry,
let
me
look
at
your
comment.
D
Yeah, that's something that we can address here, hopefully in the RFC, and come to a better agreement. The one thing is: Kenan, are you there? Yeah, so, Kenan, do you want to just speak a little bit to the integration test suite stuff that you're going to be working on?
J
So yeah, we're basically going to try to integrate, or pull over, so that we can have the initial steps of supporting dual stack, and then go into introducing test suites for IPv6 only, IPv4 only, dual stack with preference for IPv6, and dual stack with preference for IPv4. And then, basically, once we have the RFC approval, or if we get the approval, we'll start introducing validation tests for listeners and routes.
D
Okay. And Iris and Steve Zhang are our colleagues at Intel who are working on the single-cluster work; we also have somebody from their team who is looking at Envoy changes. Again, this is still an RFC; we're still in the initial inception of what will be done for open source. And yes, if this diverges from what Aspen Mesh has done for 1.9 and 1.11, I'm hoping we can come to an agreement...
D
You
know
fairly
soon
here,
obviously,
there's
still
a
lot
of
things.
This
is
a
big
document.
You
know,
please
take
a
look
at
this.
I'm
sure
there's
going
to
be
many
concerns,
but
but
yeah,
hopefully
we
can
get
to
it
and
address
them
soon.
So,
thank
you
guys
for
your
time.
You
guys
have
any
questions,
though,
before
we
kind
of
hand
it
off
to
somebody
else,
because
I
know
there's
a
lot
more
on
the
agenda.
E
That's all right; I think I'm up next. ("Yeah, go ahead, John.") One thing, while I share this: we should probably not categorize the agenda by working group, because then networking is at the top, so we get all the topics first. Anyhow. What I want to talk about is some changes to how we do outbound traffic matching.
E
This
probably
has
some
overlap
with
ibv6,
to
be
honest,
but
the
kind
of
the
motivation
is
that
we-
probably
everyone
here,
knows
that
you
can't
really
just
drop
east
geo
in
place
in
a
big
cluster
and
expect
everything
to
work.
There's
a
lot
of
different
reasons
for
that.
Some
of
them
are
things
like
life
cycle
issues.
Where
you
know
we
like
breaking
nick
containers,
we
do
all
sorts
of
funky
things.
Let's
start
up
a
shutdown
that
I
think
we're
working
on
in
other
places.
E
The
stock
doesn't
help
with
that,
but
the
other
case
is
kind
of
things
we
do
with
traffic
that
are
not
expected.
So
this
is
things
like
you
know
my
traffic.
E
When
used
to
go
here-
and
now
it
goes
here
instead
with
estio
or
it
used
to
work
and
now
at
404s
things
like
that,
so
some
of
this
is
intentional,
like
we
do
http
load
balancing,
which
is
just
what
easter
does
and
you
know
that's
a
feature
and
not
a
bug,
but
some
of
it,
I
think,
is
unintentional
because
of
decisions
we've
made
in
the
past
that
kind
of
diverge
from
the
behavior
you
get
without
istio,
without
adding
value
and
just
adding
you
know,
issues
for
people
adopting.
E
So this is pretty common in some applications; Prometheus would be a good example. Currently that's pretty much broken in Istio today, and along the way there are also a bunch of other edge cases that I'll get into. I won't get into it too deep, because I want to leave enough time.
E
I
could
probably
talk
about
this
for
an
hour,
but
I'll
go
over
kind
of
a
high
level
overview
so
on
here
I
I
won't
get
into
this,
but
I
kind
of
mapped
out
the
logic
that
we
have
today,
which,
as
you
can
see,
it's
quite
complicated
but
I'll
skip
over
that
for
now.
Basically,
the
main
issues
like
I
mentioned
you
know
pod
ip
doesn't
work.
We
get
kind
of
inconsistent
behavior.
E
If
you
do
something
like
this,
this
is
obviously
the
request
when
it
really
looked
like
this
in
most
applications,
but
similar
occurrences
do
happen
because
we
route
based
on
host
header,
we
kind
of
get
a
bunch
of
weird
behavior
that
doesn't
align
with
kubernetes,
and
I
also
noticed
it.
Doesn't
it's
not
what
linkerd
does
either
just
as
another
data
point.
E
You
know
we
have
some
other
issues
like
we
do
for
headless
services
like
we
need
to
support
potter
piece,
otherwise,
they're
completely
broken,
but
the
way
we
implemented
it
is
so
expensive,
like
I,
we
saw
one
user
that
just
had
one
daemon
set
and
it
caused
10
megabytes
of
xds
config,
which
is
obviously
just
not
scalable,
so
yeah
there's
some
more.
I
can
go
into
depth
on
these,
but
I
just
I
don't
want
to
spend
too
much
time
here.
So
I'll
talk
about
kind
of
the
the
changes,
I
guess.
G
We are also still working on the HBONE proposals, or better transport security, that were presented one or two years ago, and we are making slow progress, because there is a lot of testing and a lot of changes. When this lands, it will also solve a lot of those problems, and kind of impact this, and we may want to make a choice about which problems we solve...
G
We
prioritize
from
this
list,
because
not
all
of
them
need
to
be
done
at
once,
I
suspect,
and
which
ones
we
just
you
know,
focus
more
on
on
the
h1
and
and
address
them
with
h1
solution.
Because
that's
that's
that's
a
point
where
we
can
make
some
breaking
changes
and
we
can
change
a
bit
behavior
in
a
kind
of
expected
way
like
understood,
2.0
or
something
and
and
and
you
know
it
may
be
an
easier
and
safer
migration
when
you
migrate
to
h1.
G
So
we
need
to
be
careful
with
the
trade-offs.
Basically,
that's
what
I'm
saying
yeah.
E
I
totally
agree
like
I
think
this
will
be
one
of
the
hardest
migration
problems
that
we'll
have
to
solve
that
we've
done.
I
think
it's
worth
it,
and
so
far
my
goal
has
been
to
kind
of
define
here.
What
I
think
is
the
ideal
behavior
and
then
once
we
do
that
we
can
figure
out
how
we
can
actually
implement
this
in
envoy,
which
I
don't
think
is
trivial
today.
I
think
it
will
require
on
what
changes
and
then
how
we
can
actually
migrate
there,
and
that
may
mean
cutting
some
of
these
things.
G
You
know
what
I
meant
maybe
do
this
when
we
in
the
age
implementation,
so
we
will
have
at
some
point
a
switch
where
we
migrate
to
h1,
that's
unavoidable,
so
maybe
implement
the
new
features
in
the
hdbond
logic
start
pushing
some
of
the
h1
changes,
so
so,
with
with
a
new
behavior
and
people
will
opt
in
or
opt
out
of
those
features
by
switching
to
h1.
So
when
they
move
to
h-bone,
they
get
the
new
behavior
the
correct
behavior.
If
they
want
the
backward
compatibility,
they
stay
with
the
existing
implementation.
That's
what
that's
my
proposal.
E
Yeah
one
thing
I
would
note
is
that
this,
like
we
can't
couple
it
just
as
a
means
of
you
know
two
breaking
changes
coupled
together,
but
there
are
technically
unrelated.
E
What I'm after, because I can't go too in depth in this amount of time, is for people to look at and give feedback on the routing logic, the new routing logic, and see if that is actually the ideal Istio behavior. And I guess the second thing is, you know, whether this is even something we should do: is it worth the kind of migration pains, and that sort of thing? Once I get some feedback on that, I'll start thinking about how we can actually implement this and what the migration path will be, et cetera.
E
Yeah, my next two I also have on the list. I think they're the same kind of thing; it would just be good to get some more eyes on these changes. I think one of them I need to call out a lot more info on, so I don't know, maybe it was premature putting it there. But there's another PR, that's not from me, that's making a fairly substantial change. I think it's good, but it's kind of a very risky change.
E
So
I
want
basically
everyone
to
look
at
it
if
possible,
especially
the
dual
stack
folks,
since
it
may
impact
you
so
yeah.
Please
take
a
look
at
those
and
whoever's
next
on.
The
list
feel
free
to
go.
E
We,
I
don't
think,
there's
much
to
talk
about,
but
especially
the
second
one
would
be
very
good
to
get
other
eyes
on
it,
especially
people
that
are
working
on
dual
stack
and
that
are.
A
Yeah, I don't know what the GitHub ID is for the next user.
K
No, everything was working before. Okay, all right, sorry about that. Okay. The first issue is actually about the document; we talked about this in the last environments working group meeting, and I found that we actually have this document talking about upgrade and downgrade and all that kind of stuff. But last time we said the downgrade, or rollback, is actually not safe, and I also found that there are actually a lot of issues opened because people followed the document and used the upgrade command.
K
I
wonder
if
we
should
revisit
this
section
and
not
to
kind
of
say,
hey,
do
the
downgrade
this
way
or
do
it
operate
this
way
we
should
just
say:
okay
go
ahead
and
use
the
I
still
cuddle
install
for
your
upgrade
and
for
downgrade.
You
know
we're
not
really
supporting
that.
G
Excuse me, so rollback is different from downgrade. Rollback means that I have 1.11, I upgrade to 1.12, something is wrong, and I return immediately to 1.11. Rolling back means that, in between, the users do not use any of the 1.12 features; it's just, you know, starting the process, it finds something wrong, and it reverts to the previous level.
C
All right, so the way the docs are right now, we have two docs: an in-place one and a canary one. The in-place one, which covers the upgrade and downgrade, does call out that we recommend using canary, which has the rollback in it. So my first quick thought is that we could update the in-place doc to add something saying that we discourage the downgrade.
E
Yeah, I think in-place downgrade, or rollback, whatever you want to call it, is a very bad idea, and also the doc is wrong. That's not a supported way to roll back; it's never been designed by the Istio part of the project, rather than the docs part. We've always expected the control plane to be equal to or newer than the proxies, but the rollback doc is not maintaining that, so it's almost guaranteed to not work.
G
Even with canary it's guaranteed: any downgrade between minor versions is guaranteed to break something if people started to use it. So if I start using a feature from 1.12 that only exists in 1.12 and you downgrade to 1.11, it's guaranteed that it will break, that it will not work, because...
G
Yeah, you have downtime, but eventually it recovers. So if you're doing an in-place upgrade, you expect some downtime; that's kind of part of the contract. In-place doesn't guarantee that you don't have some downtime, and then you can roll back and you may have another downtime, but at the end you are in a stable situation.
K
Right, Costin, it's okay. I mean, however we define rollback or downgrade, users cannot tell the difference, and they follow the document and they run into trouble. So that's what I'm trying to address here. So if we have a clear passage: okay, don't ever do a downgrade. Don't ever try to, you know, say: okay, you already moved up to 1.12...
K
Right, it's okay; I'll create something that asks you to make some changes, so we can fix this doc issue at least, yeah.
K
Right
because
I
found
a
lot
of
those
issues
all
related
to
okay,
I
didn't
upgrade
broke
too
many
webhooks
right,
okay,
so,
okay,
all
right,
I
think
we
have
a
conclusion
here
now,
the
next
one
that
was
the
environment
working
group
asked
me
to
report
back
on
this
particular
issue
that
see
this
is
also
related
to
the
upgrade.
K
The
problem
is
that
when
user
have
something
running
for
a
while,
then
you
do
up,
they
do
upgrade
start
at
1.12
and
then
we
actually
check
if
the
web
hooks.
Actually,
if
the
system
has
duplicate
web
hooks
in
terms
of
namespace
and
object,
selector
and
obviously
when
we
check
that,
if
that
check
fails
and
the
process
failed
right,
so
that's
really
the
cost.
K
But
this
kind
of
problem
was
not
really
that
obvious.
If
you
use
the
I
still
operator,
because
the
error
actually
is
part
of
the
operator
lock
when
you
run
a
command
and
say
okay,
go
ahead,
upgrade
my
operator
to
the
newer
version
from
the
console.
Everything's
great
there's
nothing
wrong,
but
actually
your
stuff
will
not
be
upgraded
because
the
operator
actually
fails.
They
find
okay.
You
have
a
duplicate.
Webhooks.
K
That's right, which I know. And to make this matter worse, here's what I find users do.
K
Basically,
when
they
install
the
when
they
did
upgrade,
they
actually
used
the
the
generated
either
used
home
or
or
other
ways
generated
the
yama
file
to
do
the
upgrade,
so
they
didn't
use
operator
or
use
the
the
I
still
cuddle.
So
just
okay.
I
did
this
last
time
now.
Let
me
change
this
version.
Number
of
the
image
now
apply
this
yamo
and
whole
things
go,
but
since
it
doesn't
go
as
expected,.
K
Now, in the system they just have multiple webhooks which have the same namespace and object selector, so according to the webhook analyzer that's considered to be an overlap. This happens actually even today: let's say you have multiple Istio instances installed, that will happen. Hey, John, go ahead.
E
What you're saying is that with a correctly installed Istio, at least per the normal requirement, you have to have a revision label, and the webhooks are selecting on it, so you cannot really end up in that situation.
K
All right, okay! Well, let me try just the upgrade and see if the webhooks actually get correctly merged. If they don't, then we fix that.
K
No,
no,
I
mean
revision
that
that's
fine,
it
say.
Okay,
if
you
didn't
do
revision,
you
know
you
just
trying
to
to
to
to
have
multiple
instances.
We
don't.
Okay,
sorry.
What
I'm
saying
is:
that's
that
just
really
really
makes
sure
that
when
we
do
a
right
upgrade
if
the
web
hook
actually
correctly
merged
into
you
know
not
overlap
points.
Let's
just
try
that
I
I
haven't
tried
that,
but.
G
Very much right, and for no revisions, that's life. I mean, we said this from the beginning: if you are doing an in-place upgrade, we cannot guarantee that you'll not have downtime, so you can just delete the old one and create a new one. I mean, that's its nature. That's why we spent all the time doing revision upgrades, because we cannot really solve it otherwise. I mean, that's why we introduced revisions; that's why, you know, Google and everyone is doing revisions.
G
We should make it clear in the docs that revision-based upgrade doesn't have this problem, and in-place upgrade does have this problem, and other problems that are known and that we don't have a fix for, because the fix is revision-based upgrade, okay, right? Yes, that's the proper fix for the bug: the way to fix the bug and the problems is to switch to revision-based upgrade.
K
Okay, all right. So the last one is really just a question. For this one, I will add a little bit more to the issue as a comment, so people won't really bother us more. The last one is really more of a documentation question as well.
K
Do
we
have
a
list
of
those
environment
variables
that
we
can
set?
The
user
can
take
a
look
at
it.
I
I
haven't
find
one.
G
Oh, we do, but it's a very delicate issue here, because all the environment variables are used for experimental stuff and are not stable, and users should be strongly discouraged from relying on them, except for experiments. So they should wait, you know, for them to become proper APIs or defaults.