From YouTube: Gateway API GAMMA Meeting for 20230103
A: As always, the code of conduct rules are in effect, so please act respectfully, and we have an open agenda. Meeting notes are linked in the invitation for this event, so please feel free to take a look and add any topics that you'd like to discuss, and also add yourself to the attendance list so we can keep an idea of who's able to make it to these. All right — with that, Keith, do you want to start with Gateway API mesh conformance tests?
C: Sure. So I was hoping John would be here to be able to give more context, but he apparently can't make it. The last action item that we had a few meetings ago — I think a couple of months ago, before the holidays got crazy — was that we moved to have Gateway API utilize the backbone of the Istio end-to-end test framework in order to do mesh conformance testing. They have specialized images to do different kinds of requests and things like that.
C: That would be helpful down the road. There was somebody at Google, or on the Istio project, who was taking care of that; they've since moved on to a different project and are no longer doing the conformance tests, so John's trying to find somebody to work on that internally. In the meantime, I've set out to go spelunking through a new code base and try to pull some stuff together, so I wanted to let everybody know that I'm doing that. It's fairly complex, and I'm doing the exercise of trying to make it work while not adding a bunch of complexity into the Gateway API testing code base.
C: So it'll take me maybe a little bit, but we are moving forward with that. There should be a PR coming from me somewhere in the next couple of weeks, assuming the sky doesn't fall and more things have to happen. So — any questions or comments on mesh conformance testing in general?
A: We should probably firm that up as we approach the Gateway API 0.7 release, where I think we'll actually be officially part of it, right? We had the API review in 0.6, and that was basically informational only, so this would basically be in the experimental channel, and —
A: Conformance levels — I think this kind of came up with the topic I want to get to next, the client-side routing — but basically differentiating between what is core GAMMA functionality and what is potentially extended functionality, which should behave according to that part of the spec if an implementation chooses to support it, knowing that not all implementations may support it. And then obviously custom is not covered by tests. But the gap between core and experimental is something that we definitely want to — yeah, let's figure that out a little bit.
C: I agree. This feels like a conversation for the main Gateway API meeting, because I almost feel like when it comes to GAMMA — like you said, there is some gray area — we might end up splitting off and having maybe a separate category, or another category within Gateway API.
C: For these kinds of GAMMA work streams, the vocabulary is lacking for a lot of what we're trying to do. But at minimum, I think we need to be able to specify, with whatever levels we need, the expected level of support within GAMMA and the expected level of conformance for implementations — from both the consumer side and the implementation side. And if we can do that with what exists in Gateway API —
C: — then great. I don't know that we're able to at the moment, so maybe I'll take an action item to add an agenda item to next month's — I mean next week's — Gateway API meeting on Monday, and we can discuss it there if there aren't any more pressing topics. I'll take that action.
A: Thank you. All right, so we had a brief discussion of this in one of the last meetings we had last year. This topic — pull request number 1493 — is basically expanding the scope of the existing GEP to include client-side routing. The legacy GEP that I wrote focuses on the service owner / producer side, so any incoming connection to a service by a specific identifier should be routed according to the service owner / producer rules.
A: This pull request expands the scope of that to allow client-side rules to apply. There are two things: I think the initial, immediate proposal is that they should supersede, and the parent rule should not apply. There was interest in eventually figuring out some kind of merge logic, but that kind of stalled at the last minute — I didn't really get much of a chance to follow up on this in the last year.
A: It sounded like this would be prohibitively difficult, or performance-expensive, for Istio, and they may not be interested in implementing this in its current form anyway.
A: If folks have thoughts on the status of this — whether this is something we should still try to move forward, whether we should put it in a nice box to revisit later, or whatever your general thoughts are... I was hoping to hear from somebody from Istio to get a little more context on what they're looking at, but I'd like to hear other implementations' thoughts as well.
E: Yeah, so basically I think John definitely knows way more on this topic than me, but the community is going down the path of producer-only waypoints. What does that mean? That means we would recommend users just have the waypoint proxy on the producer side, and we're likely not going to have any waypoint proxy on the consumer side. So that means, if you need specific routing behavior for your particular client that's different from any other client —
E: — you may not be able to do it with the waypoint proxy, so you may need to put a sidecar on your client side. That's the thinking we have at the moment. I'm just reading through the PR and trying to understand it; it sounds like the PR could have specific behavior, particularly related to client-side routing, which wouldn't be possible with a producer-only waypoint proxy.
D: To pick on a specific use case here: if you had a case where a producer created a service that said it had a five-second timeout, and then a particular client wanted to be sure that they never went over a two-second timeout for that particular service — am I understanding correctly that there's just no way to do that with Istio and the waypoint?
E: Sidecarless is the mode with the waypoint proxy, but ambient also supports sidecars, and also supports interop between sidecar and sidecarless. So you can still do it with the sidecar approach as part of ambient, if that makes sense.
A: I guess my theoretical understanding would be that if this merges as part of the spec, and a user attempts to define this for a service which has not been configured with a sidecar — and which is set up to use the Istio ambient routing with waypoints — because we're not, at this point, attempting to do any of that injection or configuration for users, we're just applying a model to the existing configuration, I would expect that a consumer-defined route would get an Accepted: False status when it attempted to attach to a service that did not support it.
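For reference, the producer/consumer split being debated can be sketched as two HTTPRoutes that both use a Service as their parentRef (the GAMMA convention), with the consumer's route living in the client's namespace. All names and namespaces below are hypothetical, and the status stanza shows the kind of feedback being proposed, not confirmed API behavior:

```yaml
# Producer route: lives in the service owner's namespace.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: payments-producer
  namespace: payments          # same namespace as the Service
spec:
  parentRefs:
  - group: ""                  # GAMMA: a Service as the parent
    kind: Service
    name: payments
  rules:
  - backendRefs:
    - name: payments
      port: 8080
---
# Consumer route: lives in the client's namespace and, under the
# proposal discussed here, would supersede the producer route for
# traffic originating from that namespace.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: payments-consumer
  namespace: checkout
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: payments
    namespace: payments
  rules:
  - backendRefs:
    - name: payments
      namespace: payments
      port: 8080
# A mesh that cannot honor the consumer route (e.g. a producer-only
# waypoint deployment) would be expected to surface something like:
#   status:
#     parents:
#     - conditions:
#       - type: Accepted
#         status: "False"
```

The key point from the discussion is that last stanza: rather than silently ignoring an unsupported consumer route, the implementation reports non-acceptance through route status.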
D: On the Linkerd side, we don't have that particular situation, but I'm fairly confident that there are ways you can set up Linkerd where it wouldn't do the right thing. I need to think about that a little bit more — but same sort of thing, right?
D: I would expect that, as long as we get the feedback saying that the consumer route was not accepted, that's okay, and we should go ahead and merge. But obviously I am not associated with Istio, and the Istio folks might feel strongly and differently about that than I do, right?
C: I see Mike Beaumont here from Kuma. I believe I remember seeing some comments in the PR — I'm interested to see where Kuma lands on the client-side routing question.
F: It's something that is definitely implementable by Kuma. I mean, I mentioned my concerns, but that's more about what we do for our next step. So yeah.
E: I'd like to have John weigh in too, but I do think this is possible to implement in Istio, because the sidecar approach is going to be continually supported in Istio for a very, very long time as well.
C: Yeah — I want to ask a different question now: are there any implementations who feel that they need this PR? Because I think there's also potentially value in coming back to the question of client-side routing when we have more actionable use cases.
D: Not actionable in the sense of "oh my God, this has to land tomorrow," but in the sense that this is a thing Linkerd can do, and it's very important to us to make sure that GAMMA can support it — this idea where you can have a producer route and also override it from the client side — because that's extremely common in Linkerd's world.
C: That's right — that's what I wanted to confirm, because it was John who was kind of pushing this initially, and I just wanted to make sure. So, cool. In that case, it makes sense to me, as others were saying before, to see where Istio stands on this just to get some clarification, but to continue to move forward with the PR and get something in there — and we can always re-examine it later.
F: Yeah, I have a question, maybe for Flynn. Mate posted something in the thread that I opened about some sort of new resource.
D: Mate and I, and one of the other Linkerd folks, are supposed to be talking a little bit more about this later today, and I'm kind of expecting to toss something onto the agenda for the next meeting or so, just to bring up some of that stuff. Because, yeah, as we kick around some of these ideas, there are things on the Linkerd side where — so the last meeting we had was the 13th, right? Yeah, that's right.
D: There was a lot of discussion in that meeting to the tune of several of us going, "yeah, we should probably go ahead and merge this one, and play with it, and start working with it." We've been doing some playing with it and, unsurprisingly, have been coming up with places where we think things are going to need to change as we go forward. I'll be trying to throw something onto the agenda for the next week or two to talk about some of those things too.
C: I'll throw a note in here: when we were initially writing the HTTPRoute GEP, we had several alternatives that we considered, and a couple of those —
C: Right — thanks for the clarification. We had some alternatives we played with, and a couple of those actually sought to make the producer/consumer relationship pretty explicit. So as that playing is happening and people are thinking about other ways of representing this idea, maybe take a look at some of the drawbacks, or the potential concerns folks had. If we could start there, I think that will help conversations when we get to that point.
C: I posted the link to the alternatives in the chat there. It's on the website.
D: I think it's probably also worth pointing out that Linkerd — I have to figure out the right way to phrase this one — Linkerd does not have anything but the sidecar model, and Linkerd correspondingly does have a fair amount of ability to go through and tweak things so that the consumers get to choose things, and so that you can actually do things like have multiple consumers in the same namespace —
D: — that do things differently. There's some fairly interesting stuff in there, and it is interesting to try to think about that from the perspective of Istio having the sidecar mode but also the sidecarless mode. I think it's going to be very important, as we go forward, to make sure that the Istio folks stay involved.
A: And likewise, I think our unique complexity is that there are multi-cluster type use cases — crossing network boundaries and other things — that are maybe more of a pressing concern to us than to some other meshes, in terms of where routing doesn't apply versus where it can be applied.
D: Yeah, I think the multi-cluster stuff is going to get interesting for Linkerd pretty quickly, but probably not instantly. Obviously, in terms of those unique complexities, y'all should just do it Linkerd's way, so that then we all have the same complexities and we don't have to worry about it, right?
A
Yeah
right
now,
but
you
know
like
Park
County,
though
I
think
that
we
would
have
similar
issues
with
how
the
ambient,
like
the
tunnel
thing,
is
happening
across
cluster
boundaries,
because
we
don't
want
to
like
populate
the
list
of
endpoints
available
between
clusters.
Things
like
that.
So
that's.
D
Yeah,
it
makes
a
fair
amount
of
things
much
much
easier
to
reason
about,
oh,
but
it
does
have
its
downsides.
A
Thanks
for
the
discussion
on
this,
it
feels
like
we've,
been
able
to
like
explore
this
a
bit
more
in
depth
and
hopefully
hear
any
more
detail
from
John
and,
let's
see
a
folks,
soon,
yeah
I
guess
kind
of
just
wanted
to
unless
anybody
else
has
other
items
to
add
to
the
agenda
kind
of
wanted
to
wrap
this
up
on
a
like
open-ended
rules
for
the
new
year.
A
What
are
people
interested
in
focusing
on
and
trying
to
accomplish
with
gamma
stuff,
whether
that
be
just
like
shipping
it
in
a
zero
film
release
and
like
getting
it
out
there
or
implementation
goals
for
your
perspective
meshes
whatever?
Maybe
foreign.
D
I
think
it's
pretty
safe
to
say
that
linkerty
would
like
to
be
able
to
be
actually
using
this
stuff
in
2023
in
a
a
more
cold-hearted
practical
way.
The
thing
that
absolutely
needs
to
happen
is
for
us
to
decide
that
we
will
be
able
to
use
it
or
that
we
won't
be
able
to
use
it,
and
I
personally
would
prefer
that
that
decision
be
made
in
the
will
Direction.
D
Actually
everybody
working
with
Liberty
would
prefer
that
decision
to
be
made
in
the
will.
Direction,
and
you
know
we
there's
a
there's-
a
certain
amount
of
optimism
there,
but
that's
the
I
think
that's
the
immediate
perspective.
From
our
point
of
view.
Is
that
yeah
we
there's
stuff
that
Lincoln
needs
to
learn
how
to
do
and
needs
to
be
able
to
do,
and
we
would
rather
use
Gateway
API
for
that,
as
opposed
to
creating
entirely
new
stuff.
C
I,
for
one
would
love
to
see
policy
become
a
a
thing
that
we
have
some
solid
Direction
on
in
the
new
year.
You
know,
Gateway
API
has
the
whole
policy
attachments
Paradigm.
C
It
was
towards
the
end
of
the
year,
there's
a
get
to
firm
up
the
whole
meta
resource,
interaction
and
Define
that
but
I
I
think
that
once
you
get
past
based
networking,
the
the
secret,
the
rub
of
service
mesh
of
of
networking
is
to
be
able
to
execute
policy
on
for
arbitrary
workloads
and
for
it
to
be
intuitive
across
personas
within
an
organization.
C
That's
the
the
beautiful
thing
about
the
way
Gateway
API
is
currently
designed
and
I
I
love
to
try
to
use
use
gamma
to
actually
start
getting
some
examples
out.
There
start
playing
with
it,
seeing
how
it's
going
to
work,
make
any
adjustments
that
need
to
be
made
and
start
getting
in
the
hands
of
users,
because
implementations
are
are
looking
for
ways
to
express
their
funky
specific
things.
C
And
you
know
my
implementation
included,
and
it
would
be
great
for
for
us
to
at
least
I'm
not
saying
to
to
have
it
all
done
by
the
end
of
the
year,
because
we
do
have
other
things,
but
be
nice
to
at
least
have
Direction
there
and
make
it
comfortable
enough
for
people
to
be
able
to
play
with
it.
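For readers unfamiliar with the policy attachment paradigm mentioned here (GEP-713), the idea is a standalone "metaresource" that points at an existing object through a `targetRef`. A minimal sketch — the policy kind, group, and payload fields below are entirely made up for illustration; only the `targetRef` shape follows the GEP pattern:

```yaml
apiVersion: example.mesh.io/v1alpha1   # hypothetical policy CRD
kind: RetryPolicy
metadata:
  name: payments-retries
  namespace: payments
spec:
  # GEP-713-style targetRef: attach this policy to an existing
  # resource without modifying that resource itself.
  targetRef:
    group: ""
    kind: Service
    name: payments
  # The policy payload itself is implementation-specific.
  retries: 3
  perTryTimeout: 250ms
```

The appeal for GAMMA is that the same attachment mechanism works whether the target is a Gateway, a Route, or (for mesh traffic) a Service, so different personas can layer policy without editing each other's objects.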
D: Honestly, just nailing down that policy attachment metaresource thing by the end of the year — I would consider that an accomplishment.
A: Definitely, yeah. I guess a personal goal for our implementation: I would love to see Consul — Consul has a fair bit of work to do to be able to implement the routing model — but I would love to see us be able to offer this in an experimental capacity at some point, probably closer towards the end of the year. That's something I'm hoping to be able to drive internally: to get us to actually implement GAMMA.
D: I have a certain amount of optimism that if we get GAMMA routing and policy working, then it will just work with Linkerd multi-cluster. I'm not 100% optimistic on that — I'm only partly optimistic, because of the way Linkerd does multi-cluster under the hood. In terms of multi-cluster where we're talking about interop across vendor multi-clusters — yeah, I'm with Mike on that one; I think that would be cool.
A: I definitely would not promise multi-vendor interop, but the way that Consul is going towards multi-cluster relationships is having more independent peers, rather than full knowledge of peer endpoints across them. So I think my hope is different from yours in terms of multi-cluster: I don't think it'll just work, but I think that if we designed something that could work for Consul-to-Consul multi-cluster with GAMMA, that may be a step on the path towards something generally applicable. I think that's maybe step two.
A
After
like
licorice
model,
it
might
just
work
with
full
knowledge,
but
yeah
like
it
I'm
adding
that
boundary
and
that
kind
of,
like
lack
of
knowledge,
I,
think
it
is
a
like
step,
one
of
that
abstraction,
even
with
just
a
single
vendor,
to
start
yeah.
C: An optimistic stretch goal for this year, when it comes to multi-cluster — some of y'all might already be doing this, but I want to start getting a bit more aligned with the MCS, or just the SIG Multicluster work in general, because there are some interesting things happening there: the Work API on the scheduling side, and for cluster info there's the cluster properties resource — I forget what it's called.
C
There's
resources
there
describing
things
about
your
cluster
that
you
know
multiple
vendors
are
using
to
create
multi-cluster
experiences
and
things
like
that
and
I
think
to
Flynn's
point
about
things
like
trust
domain
things
like
cluster
having
unique
names
for
clusters.
We
already,
you
know
the
service
import
service.
Export
flow
is
already
fairly
well
defined.
C: If we see a resource there that has potential for work we want to do in the next year for multi-cluster, we should already be aware of those conversations — not necessarily involved, but at least aware of them, with them aware that we're aware — and we should get pulled into the appropriate rooms. Yeah, I think that's my stretch goal.
F: I want to say, for Kuma — I think we're in a similar position to Linkerd at the moment, where we're ready to actually start. We've been implementing our own versions of these things, and it would be great to do them in a Gateway API way, and really great to hopefully get some experience implementing it and give feedback back to the GAMMA group about it.
G: I have one comment on the multi-cluster piece. How do I phrase this — I'm seeing a design pattern which is more scalable for multi-cluster, where there is a config cluster which takes care of all the plumbing, you know, using Gateway API, for all the worker clusters. In that sense, I'm not sure how close this model is to the MCS construct.
G
So
it's
more
like
you
define
your
networking
power.
You
know
topology
things
in
config
cluster,
your
you
know
your
mesh,
the
HTTP
route
when,
where
the
worker
cluster
just
inherited
and
the
worker
clustered,
the
responsibility
of
worker
cluster
is
basically
just
handle
the
local
endpoint
and
with
the
individual
cluster
and
between
cluster
there's
really
between
worker
cluster.
There's
no
communication
with
each
other
I
find
that
patent
seems
more
scalable
for
multi-cluster,
because
if
you
start
importing
endpoints
for
every
single
cluster
I
see
that
maybe
a
scalability
issue
that
yeah.
G: I'm not saying it's new — I've seen people already doing this. On Amazon, we also have this; I think, you know, before Gateway API, we had this design pattern as well.
G: So the idea of what we see is: the config cluster programs all the things — whatever data plane is necessary, the VPC Lattice — and the responsibility of a worker cluster is just to report endpoints to the VPC Lattice data plane, to say: hey, here's the endpoint for this service; here's another endpoint for this service. So in that particular case, there's no import.
G
No
point
of
peer-to-peer
kind
of
like
you
know,
control
control,
message
exchange,
you
say:
hey
I!
Have
these
2000
then
point
I?
Have
these
10
000
endpoint
between
cluster,
they
just
directly.
You
know
they
will
basically
the
control
plane
on
each
cluster.
They
just
reconcile
on
the
endpoint
on
this
cluster
itself
locally
and
program.
The
VPC
lattice
fabric.
G
So
that's
what
I
see
you
know
when
when
Keys
you
brought
up
the
MCS
architecture,
peer-to-peer
model
I
feel
this.
Maybe
there's
some
friction
over
there
versus
we're
more
like
a
hubspoke
model
and
and
where
the
config
cluster
has
more.
You
know
you
know
the
the
operating
config
after
the
programming,
the
you
know,
multi-cluster-wide
resources,
where
each
operator
in
each
class
to
just
configure
the
class
local
related
resources
like
endpoint
and.
C: Yeah, that's a good point — that's something I've seen as well, the hub-and-spoke model. On Azure, our Azure Fleet product uses the hub-and-spoke model, but it still looks extremely similar to multi-cluster Services: you use ServiceImport and ServiceExport, but both of those configs are pushed to the worker clusters as well.
C
So
it
is
going
to
be
interesting.
I
think
I
I
agree
to
to
see
if
that
I
think
the
model's
already
prevalent.
But
it's
going
to
be
interesting
to
see
if
Sig
multi-cluster
does
any
pivoting,
because
you
know
several
of
us
were
there
at
kubecon
for
the
signal
or
it
was
contribs
on
it
for
the
sick,
multi-cluster
meeting
I.
C
Think
I
was
there
at
Mac
and
a
couple
others
and
they're
looking
to
other
like
problems
like
scheduling
and
things
like
that
and
I
there's
a
several
questions
about:
okay,
namespace
sataness
is
great,
but
what
about
the
non-in
space
same
this
clusters
and
that
may
there
may
be
a
change
in
the
in
the
next
couple
of
years.
I
think
I
I
have
a
feeling
to
fall
more
along
with
our
customers,
actually
want
to
use
their
clusters,
as
opposed
to
the
abstractions
that
that
MCS
assumes
so.
A: We do like the kind of more independent, peer-to-peer model for data plane concerns, but I think there was some sort of role for a hub-and-spoke type pattern — maybe not specifically a mesh control plane, but a management plane of some sort — to be able to configure things across clusters, whether that be establishing which clusters are members of an MCS ClusterSet, which would imply namespace sameness across them, and being able to coordinate extra-cluster concerns like that.
A
So
so,
but
definitely
like
wanting
to
keep
that
management
type
like
Hub
concerns
out
of
the
data
plane
out
of
the
kind
of
like
data
pass
so
having
direct
service
kind
of
company
between
those
corporate
clusters.
It's
not
like
where
we
see
it
going,
but
definitely
I
think
there's
a
lot
here
to
explore
and
yeah
looking
forward
to
kind
of
like
digging
into
this,
as
we
keep
moving
foreign.
E: On the ServiceExport API status — I think it's still alpha at the moment, and over the last year the commits have pretty much just come from one person, I think. So I'm not sure where it's going. The reason I'm asking is that in Istio it was implemented a long time ago, and it was still a dark —
E
Launch
and
I
haven't
heard
much
people
talking
about
it
and
then
to
your
point
lee
when
I
think,
regardless
what
design
pattern
you
are
using
right
if
you're
using
like
the
config
cluster
and
then
not
expose
all
the
endpoints
I,
think
that's
an
independent
form,
the
multi-cluster
API
or
whatever
API
that
people
are
going
to
use
to
to
config
how
their
services
are
going
to
be
exposed
across
different
clusters.
So
you
could
use
either
design
patterns,
regardless,
whichever
design
pattern
for
the
same
API.
E: So, for your VPC Lattice — how should we think about it? As an API gateway in your cloud? It's —
G
It's
now
really
it's
more
like
underlying
VPC
fabric,
so
any
VPC
can
talk
to
any
VPC.
Any
account
can
talk
to
any
account.
So
so
it's
a
generic
fabric
allow
application
to
talk
to
each
other
and
what
I
like
to
do
to
expose
it.
How
to
provision
this
VPC
lattice
is
so
Gateway
API
today
I
mean
potentially
SEO
as
well.
It's
just
a
data
plane
fabric.
So
today
will
how
we
use
it.
G
Is
we
just
use
gateway
to
basic
group,
a
bunch
of
HTTP
routes
together,
a
Gateway
is
anything
below
the
Gateway
can
talk
to
each
other.
G
A
VPC
can
associate
to
one
Gateway
and
today,
and
so
anything
in
this
VPC
can
talk
to
anything
under.
You
know
think
about
VPC,
Gateway,
HTTP
route
and
point
like
a
tree.
The
Gateway
can
attach
to
a
VPC,
and
so
anything
within
this
VPC
can
reach
anything
below
the
Gateway.
That's
today's
model,
and
if
you
know,
can
you
imagine
if
we
want
to
provision
using
sdo
as
a
control,
plane,
provisioner
VPC
ladders?
That
would
be
ideal,
but
I
mean
just
the
thinking.
You
know
we
haven't
done
anything
on
that
part.
G
It's
basically
a
new
data
plane
service
in
AWS
VPC,
but.
A
It
sounds
to
me,
and
please
correct
me
wrong.
It
sounds
kind
of
similar
to
how
something
like
Submariner
might
be
used
to
kind
of,
like
abstract,
away,
Network
different
networks
between
between
clusters,
which
is
a
way
that
istio
has
been
used
with
MCS
API,
with
Submariner
kind
of
like
give
that
flat,
Network
space
and
reachability
across
them.
So
that's
what
we've
been
looking
at
for
console
is
basically
like
environments
where
that
is
not
possible
or
desirable
for
different
reasons,
so
that
yeah
I
I
think
they're.
D
It's
there's
also
to
my
way
of
thinking.
The
four-person
startup
case
comes
into
this
a
little
bit
too,
where
it's
entirely
possible
to
get
to
a
place
with
a
small
team
where
you
go
oh
holy
crap.
We
should
actually
be
running
two
clusters
here,
but
instantly
having
to
go
from
one
to
three
can
be
a
little
challenging,
even
though
it
might
be
something
that
you
very
much
realize
you
need
to
do
later,
so
yeah
yeah,
having
some
sense
of
both
of
those
and
and
not
ruling
out
either
one
I
think
is
probably
a
good
idea.
A: All right — oh, one last comment on MCS being in kind of a frozen state in alpha. My understanding is that the intended scope is basically done currently, and it's really just pending adoption — that's what's needed to get it to beta — not any substantial change to the code base or any additional work.
A
We
did
the
intent
I
think
the
hope
is
to
get
a
graduated
debata
with
minimal
changes
currently
and
from
talking
to
folks
inside
multi-cluster
they're,
currently
looking
for,
like
the
kind
of
like
next
wave
of
projects
after
MCS.
A
So
one
of
the
things
that
I
have
discussed
with
them
briefly
is
kind
of
like
this,
not
sameness,
so
obviously
that's
much
more
complex
when
you
kind
of
remove
the
Assumption
of
a
thing
by
this
name
in
this
name,
it
says
that
one
cluster
is
the
same
logical
thing
in
another
cluster,
but
that
seems
like
something
that
they're
open
to
exploring
at
some
point
and
yeah.
Maybe
a
worthwhile
topic
for
us
to
engage
with
now
on
the.
D
The
reason
that
I
found
it
interesting
to
look
at
the
GitHub
repo
just
now
was
that
it
hasn't
gone
to
Beta,
but
it
also
hasn't
been
changed
in
two
years,
so
that
kind
of
makes
me
wonder.
Okay,
so
what's
you
know
what's
going
on
in
here,
and
the
answer
could
very
well
be
that
just
taking
it
to
Beta
hasn't
been
anybody's
priority
but
yeah.
It
makes
me
a
little
curious.
A: Oh, I just think it's kind of early. The spec happened before a lot of meshes were ready to adopt it, before we were ready to implement it — we were still working on —
A
Level
concerns
and
I
think
it's
just
now
we're
starting
to
see
customers
asking
us.
How
do
I
do
this
and
actually
starting
to
see
that
demand
build
up
to
the
point
where
it
makes
sense
for
us
to
start
prioritizing
this
in
the
new
year.
E
Yeah
I
was
just
going
to
say:
I
mean
I've,
noticed
something
odd
with
this
spec
as
well,
I
think,
what's
implementing
is
still
almost
two
years
ago
now
and
it
was
stock
launch.
It
was
marked
as
like
an
experimental
feature
and
I.
Think
Google
was
the
initial
contributor
contribute
this,
but
after
that
was
the
initial
Implement.
No
one
was
using
it.
There
was
no
promotion
regarding
it.
There
was
no,
you
know,
nobody
was
talking
about.
E
This
featuring
is
still
basically
so
it
could
be
even
removing
it
still
because
in
HCL
we
don't
like
to
keep
experimental
feature
for
a
long
time
without
people
willing
to
be
able
to
like
stand
up,
promote
it
to
the
next
phase
and
I
know.
The
people
who
implement
this
featuring
is
still
actually
left
Google,
so
yeah
I'm,
just
not
sure.
What's
even
going
to
be,
you
know
in
history
for
a
long
time.
C
Can
you
see
yeah
I'll
I'll
say
for
one
of
the
I
think
one
of
the
things
that
makes
this
the
multi-cluster
service
API
a
little
bit
different?
Is
that
count
to
Max
point?
We
we
weren't
as
a
mesh
Community,
we
weren't
ready
to
like
consume
sort
of
import
service,
export
information
about
it,
but
also
there
was
there's
a
sense
in
that
nobody
wanted
to
be
an
MCS
controller
and
nobody
wanted
to
be
the
thing
to
go
and
actually
write
the
service
input
service
exports.
C
People
were
waiting
for
vendors
and
Cloud
providers
to
build
services
that
did
that
and
so
I
think
part
of
the
delay
in
in
moving
it
to
Beta.
Moving
the
MCS
API
to
Beta
is
that
there's
kind
of
a
a
need
to
wait
for
the
large
Cloud
providers
to
make
services
that
implemented
this
API
that
built
the
MCS
controller
I'll,
say
on
the
osm
side
and
trying
to
you
know
build
our
multi-cluster
service
functionality.
C
Just
getting
into
an
environment
set
up
to
test
it
is
is,
can
be
fairly
difficult
because
you've
got
Submariner
or
you've
got
karmada
and
not
all
of
those
are.
You
know
super
easy
to
use
in
like
a
CI
context,
depending
on
on
your
I'm
sure.
Everybody
here
has
got
some
different
experiences
with
that.
C
Just
just
a
bit,
so
my
point
is,
you
know,
there's
I
I.
Think
I
certainly
would
like
to
check
in
with
Sig
multi-cluster
and
see
what
their
POV
is
on
the
status
of
the
MCS
API,
but
I
do
think
there
is
an
element
of
waiting
for
very
you
know
in
general,
very
large
companies
to
make
pretty
big
changes
to
how
they
reason
about
multi-clusters
muscle
cluster
Services.
C
Some
companies
may
have
even
had
things
that
were
in
process
in
motion
when
this
came
out
and
had
to
Pivot
and
we're
just
seeing
like
azure's
Fleet,
just
released
I
think
mid
last
year,
I
mean
we're
getting
feedback,
so
the
feedback
loop
is
a
bit
longer
because
you've
gotta,
you
have
to
wait
for
vendors.
Vendors
have
to
wait
for
customers,
get
feedback,
vendor
text
feedback
to
to
MCS
and
then
there's
the
whole
mesh
component
in
there
as
well.
A: Yeah — and definitely, from Consul's perspective, we're looking into the possibility of implementing a multi-cloud-capable MCS controller at some point. But that kind of thing is a substantial undertaking, with the understanding that this is something that's mostly been done by managed service providers — so, while it isn't in Istio directly, GKE provides an MCS controller.
A
I
think
AWS
all
launched
one
over
sometime
in
the
past
year
and
yeah
hearing
about
the
Azure
Fleet
Venture,
which
I
haven't
heard
much
about
yet,
but
is
that
also
an
MCS
controller.
A: This is definitely a space that we're starting to explore too, and maybe just getting together some of the folks who have started working on this — to figure out what's working well and what they see as the path forward for it — would be good, as would getting those folks into SIG Multicluster and talking to folks there —
A
If
there
is
a
desire
to
advance
and
promote
the
stack
and
try
to
drive
adoption
of
it
or
or
if
there's
like
gaps
or
changes
that
can
happen
to
it.
Foreign.
G: Today we do borrow — we do use the MCS ServiceExport object, instead of defining a new CRD; we leverage the MCS ServiceExport. But here, all it does is tell the VPC Lattice fabric: hey, here's a Kubernetes Service, which you can attach as the backendRef of an HTTPRoute in some other cluster.
So
let's
say
you
imagine
you're
in
cluster
one,
you
define
HTTP
route
and
you
can
reference
a
kubernetes
service
in
a
different
cluster.
G
The
way
it
app
references
is
just
the
other
cluster
exported
and
the
first
cluster
you
don't
have
to
import
because
we
don't
care
about
endpoint,
the
first
class,
let's
just
say,
hey
for
Go
slash,
pass
one
go
to
this
service
one
but
service.
One
is
the
some
service
in
some
different
cluster
which
exported
to
VPC
lattice
fabric.
G
So
that's
how
we
use
it
today.
So
it's
a
little
diverged
from
the
MCS,
but
we
do
borrow
the
MCs
crd
expert
for
that
purpose.
Interesting.
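A sketch of the flow described above, assuming the borrowed upstream MCS `ServiceExport` CRD together with a `ServiceImport`-kind backendRef, as used by the AWS Gateway API controller for VPC Lattice. All names are illustrative, and exact groups, versions, and how the cross-cluster reference is materialized may differ from what's shown:

```yaml
# Cluster two: export the Service to the VPC Lattice fabric.
# (The upstream MCS CRD is borrowed; no matching ServiceImport
# object needs to be created in the consuming cluster.)
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: service-one
  namespace: apps
---
# Cluster one: the route simply references the exported service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: path-routing
  namespace: apps
spec:
  parentRefs:
  - name: my-lattice-gateway     # Gateway grouping routes on the fabric
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /path1
    backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport        # cross-cluster reference kind
      name: service-one
```

This matches the "no endpoint import" point from the discussion: cluster one never learns cluster two's endpoints; each cluster reports its own endpoints to the Lattice fabric, which does the stitching.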
A: Yeah. As far as scalability concerns — Consul uses a bespoke thing that is similar, for our cluster peering model, but what we're doing is trying to have a bit more granular control: being able to export a service only to a specific cluster, rather than to all clusters in the fabric. It sounds like VPC Lattice behaves more like universal connectivity, versus specific cluster-to-cluster connections.
G
Right
so
basically,
trdr
is
the
cluster.
Will
export
endpoints
to
VPC
lattice
fabric,
so
you
think
about
bpc
a
lot
of
fabric,
it's
everywhere
within
the
AWS
Cloud.
So
without
a
Google
cluster,
you
define
your
endpoint
your
services,
you
just
report
so
loud
BBC.
A
lot
is
aware
of
all
these
kubernetes
servers
in
the
cloud
within
a
cloud
cross
account
and
and
then
then
they
can
hook
to
each
other
find
out.
Okay,
I.
G
If
I
won't
go
down
this
path,
I
go
down
to
this
kubernetes
service,
which
can
be
in
this
class
or
different
classroom.
Doesn't
matter
yeah.
A: All right — well, we're out of time. Thanks, everyone, for the great discussion today, and hopefully we'll follow up on some of these items offline. Even though folks seem a little pessimistic about multi-cluster, there seem to be a lot of thoughts about it.
A
Looking
forward
to
a
more
productive
discussion
on
that
continue,
so
yeah
thanks
again
everybody
and
take
care,
have
the
rest
of
the
day.
Thanks.