Description
Gateway API is an open source project managed by the SIG-NETWORK community. It comprises resources for modeling service networking in Kubernetes. These resources evolve Kubernetes service networking through expressive, extensible, and role-oriented interfaces that are implemented by many vendors and have broad industry support. OpenShift 4.8 will add Gateway API support (tech preview). Join Red Hat's Daneyon Hanson to learn more about this exciting new feature.
Note: This project was previously named "Service APIs" until being renamed to "Gateway API" in February 2021.
A
All right, everybody, welcome to another OpenShift Commons briefing. Today, as we like to do on Mondays, we're covering tech previews and upstream projects, and we're really thrilled to have Daneyon Hanson back. He's now a principal software engineer at OpenShift, and he's going to talk about a feature that's coming in OpenShift 4.8, the Gateway API, with a little bit of background, I think, on Contour as well, the project that it springs from or is related to. So I'm going to let Daneyon introduce himself, walk us through this, maybe do a demo or two, and then we'll have a live conversation and Q&A at the end. So ask your questions in the chat and we'll get rocking and rolling. Take it away, Daneyon.
B
Yeah, thanks, Diane. As Diane mentioned, my name is Daneyon Hanson. I'm a principal software engineer with OpenShift, and I'm going to spend some time today going through a dev preview feature that's coming in OpenShift 4.8, called Gateway API. A little background on Gateway API: if we look at what we have today, we have the Ingress resource for Kubernetes. The Ingress resource has been around for a while, and actually recently went GA, but there have been challenges with it.
B
If you're familiar with OpenShift, the Route resource is primarily used. We support both Ingress and Route, but what we actually do is translate Ingress to a Route resource. The Route resource was actually created even before the Ingress resource, because OpenShift needed a way to express how to route traffic into the cluster, and so we created the Route resource. The Ingress resource came along later as a simple way to provide ingress, and as it evolved, what we started to see was that it just wasn't expressive enough to meet complex routing use cases. What started to happen was that implementers began exposing additional configuration through annotations, and that's become pretty difficult to manage. So that's kind of where we're at with the Ingress resource. Then we also look at the Service resource, and it's kind of become a dumping ground for all sorts of Kubernetes service modeling, so it's becoming quite bloated.
B
And if we look here, this is actually the picture from when we first got together. It looks pretty strange these days, or maybe not so much anymore, depending on where you're from, but with the pandemic I was like, wow. This was, I think, about four or five months before the pandemic, in November of 2019, when we got together at KubeCon North America in San Diego to really start talking through this and then formalizing a group.
B
We created a working group to come up with a solution, and what we called it at the time was Service APIs. That name stuck around until, I think, about two or three months ago, when we renamed the project Gateway API. After the group was formed, it took us about a year to really get to the point where we felt comfortable cutting a release.
B
We cut the v1alpha1 release back in November of 2020, and through this process we at Red Hat made a decision that we were going to implement Gateway API in Contour, as opposed to OpenShift router. There were things happening in the industry, with the uptake of Envoy, as well as Contour now being a CNCF project.
B
OpenShift router has been good to us, but we wanted to try and move forward with an implementation that had a diverse community that was established upstream, in the CNCF community. Those were big drivers for us, and they ultimately led us to using Contour to implement Gateway API.
B
So let's talk a little bit about the API itself. One of the first areas to really point out is that Gateway API is a collection of resources, and these resources are modeled off of how clusters are managed and operated. You have these different groups: a group that provides the infrastructure, a group that operates the infrastructure, and then you have the users. In our case those users are developers that want to expose their applications, and so on the left-hand side of the diagram
B
there you see those different personas and how they align to the resources that make up Gateway API. We have a GatewayClass, which, if you're familiar with storage classes or ingress class, is just a way to define a set of configuration or capabilities. In a Gateway API sense, those capabilities are around expressing a gateway. A simple example that may help with establishing a mental model is if we have two different gateway classes.
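As a sketch of that mental model (the class names and controller string below are hypothetical, not from the demo), two gateway classes might look like:

```yaml
# Hypothetical example: two GatewayClasses that classify gateways, e.g. by
# load-balancer scope. Names and controller string are illustrative only.
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: external
spec:
  controller: example.com/gateway-controller  # implementation managing this class
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: internal
spec:
  controller: example.com/gateway-controller
```

A gateway of class `external` might then be published on an internet-facing load balancer while `internal` gateways stay cluster-local; the class is how the infrastructure provider advertises a set of capabilities.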
B
Those are two very simple use cases, but hopefully that helps you understand what we're trying to accomplish with classifying gateways using the GatewayClass resource. Then we have the Gateway resource, and the Gateway resource instantiates the infrastructure. The gateway class, which isn't represented in this diagram, will typically reference some kind of custom resource, and when we get to the demo I'll show you that in more detail, but typically the gateway class is going to reference some kind of custom resource that expresses all of the detailed configuration.
B
That's what that custom resource is used for, and it allows Gateway API to be portable: we're not actually putting implementation-specific configuration parameters in the GatewayClass itself. Think of that custom resource, as well as the gateway class, as configuration snippets that live in the cluster. Nothing really happens, no infrastructure provisioning or anything like that, until a Gateway is instantiated. Typically a cluster operator is going to create the Gateway, and that will cause the controller, the implementation, to take action on it. It's going to see that, hey, the cluster operator wants to create a gateway.
B
Let me validate it. All right, it's valid. Let me start acting on it, looking at the gateway class and this custom CRD, and start creating the infrastructure that's being requested. And further down this chain you have HTTPRoute; there are different route types that are specced by Gateway API.
B
They're protocol-specific: there's TLSRoute, there's HTTPRoute, and then there are even layer-4 abstractions, a TCPRoute and UDPRoute. The route types are where our developers are going to be interfacing, and so we have these developers, each of whom created an HTTPRoute to expose one of their application services that reside on the cluster.
B
So beyond just the Gateway API model and how it's designed around these roles, it's also designed to be extensible. I talked a little bit in the previous slide about this CRD that gets referenced by the gateway class.
B
Well, that's not the only reference that's provided as a way to expose implementation-specific configuration throughout the API.
B
We as a maintainer team, along with others, really tried to think through different use cases, from the simple to the very complex, and figure out where in these resources is the best place to expose additional customization while keeping the core of the protocol 100 percent portable. So there are these different layers of functionality, from core to extended to custom.
B
And the key is that the core is a hundred percent portable. We can create a Gateway and GatewayClass using a hundred percent of the core API features, and you can go between providers and implementations and everything's all good. More than likely, you're going to get to a point where you want to dip into some of the extended or custom features of that particular implementation. You just need to be mindful of what extended and custom features you're using
B
if you do decide to move these resources around between implementations. The key point here also on this slide is the gravitational pull towards core, because all these different implementations, from proxies to load balancers, have so many different capabilities.
B
I talked at the beginning of the presentation about Ingress and the challenges of Ingress being very simple, where the way we express additional functionality is through annotations, and that becomes very challenging to support over a long period of time. That is one of the areas we tackled with Gateway API: again, finding that balance between making the core features portable while also making Gateway API extensible, so that we can be expressive without having to use annotations. And here is just a simple example of traffic splitting based on weights. Whether it's traffic splitting, mirroring, or routing to different types of resources, not necessarily a Service resource: it could be some kind of custom resource, an S3 bucket, any kind of resource.
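A minimal sketch of that weighted-split idea, written against the v1alpha1 HTTPRoute schema current at the time of this talk (service names, ports, and weights are made up for illustration):

```yaml
# Hypothetical weighted traffic split between two versions of an app.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: split-example
spec:
  rules:
  - forwardTo:
    - serviceName: app-v1   # hypothetical Service
      port: 8080
      weight: 90            # roughly 90% of matching traffic
    - serviceName: app-v2   # hypothetical canary Service
      port: 8080
      weight: 10            # roughly 10% canary traffic
```

In v1alpha1, `forwardTo` could also carry a `backendRef` to a non-Service resource, which is the mechanism behind the "not necessarily a Service" point above.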
B
We can support that, so we're not really locking the design into a specific type of resource for the back end, for example. And then I talked about portability; here are the current implementations. These implementations are either in the works or at an alpha feature level, but I'm pretty impressed with the diversity of the community, especially for an alpha implementation: again, v1alpha1 was cut just in November.
B
So, where are we at today? We as Red Hat established maintainership in the Gateway API and Contour communities. That was really important for us, to make sure that we're invested in these communities, not only for ourselves but for OpenShift and for our customers.
B
We developed an upstream operator for managing Contour. As I'm sure everyone here knows, functionality within OpenShift typically needs to be managed by an operator, so we went ahead and worked with the Contour community to establish an operator, and we're very happy with the progress of that project. The operator is released in a synchronized release with Contour, and I think this is now the third or fourth release that we've done, and we've got a roadmap.
B
Things are working really well with having that operator, and not only having the operator, but having it living upstream with Contour is very important to us.
B
We added Gateway API support in Contour and the Contour operator in v1.13, which was just about six weeks ago. We improved that support in Contour 1.14, which was released just about a week and a half ago, and we're working hard within that community to continue improving the support.
B
We still have a ways to go, but we're very happy with where we're at, and again, we're working hard to keep moving that Gateway API support in a positive direction. For OpenShift, kind of the heart of what we're talking about here today is that we're actually providing a dev preview of Gateway API, Contour, and the Contour operator in 4.8. We're really excited to provide this dev preview to anyone interested. Again, keep in mind it is a dev preview, but we're really hoping to have users kick the tires on the solution, give us feedback, and work with us to help make the feature the very best it can be moving forward. To do this, we really want to have that partnership with our customers, and we're looking forward to hearing from users.
B
Diane, I threw a link to this guide in Slack; if you don't mind posting it to the chat window here, I'd appreciate it.
B
What I'm sharing with you here is documentation that I put together on running Gateway API on OpenShift.
B
You see the version that I've tested on, which is a 4.8 nightly build, using upstream 1.14.0 of Contour and the Contour operator. And again, just to stress, there's no OpenShift-specific integration here; we're not forking anything from upstream. The dev preview basically says: hey, here's how you take this upstream project and run it on OpenShift. That's what we're using as a baseline, which is also very good in the sense that we're going to start this feature using upstream, not a fork.
B
We're going to start using the upstream operator, upstream Contour, and upstream Gateway API, and then evolve the support from that, but that will always be our baseline, to make sure that we're in lockstep with upstream. That's why we felt it was critical that all the work we've done up until this point is really about getting it upstream first, so it can then be in OpenShift. So take a look at this documentation.
B
This will be used for the official product documentation, along with some other documentation that we'll develop, but it's essentially just a quick start: how to get Gateway API up and running in my OpenShift cluster. So let's walk through it here. The first thing that we're going to do is run the Contour operator.
B
Let me jump over to my terminal here. I have an OpenShift 4 cluster running, I've configured my oc client to talk to the cluster, and you see that all my cluster operators are reporting the expected status conditions, so everything's looking good. Let's go ahead and provision the Contour operator.
B
Other CRDs are from the Contour operator; for example, the operator watches Contour custom resources and then performs some kind of action based on them. And some of the CRDs are for Contour itself: HTTPProxies, TLS certificate delegations, and such. We set up all the RBAC needed for the operator and Contour.
B
So let's see what the status of the deployment is for the Contour operator. All right, it's available. I'm going to go ahead and tail the logs too.
B
And you see that the operator is available and running, and it tells us what image of Contour it will use, along with what image of Envoy proxy it will be using. It starts the metrics server, creates a metrics endpoint, and starts the controllers for the different resources that it's going to manage: the gateway controller, the Contour controller, the gateway class controller. You'll see there are no HTTPRoute, UDPRoute, or TLSRoute controllers.
B
But I'm just going to go back here and talk for a second about Contour. Contour is a control plane for Envoy. Whether for Gateway API or just using Contour itself, Contour is a control plane for managing Envoy proxies, and the Envoy proxies are the data planes. So you create a Gateway, an Ingress, or an HTTPProxy; HTTPProxy is the custom resource that the Contour community created to get around the Ingress resource limitations that I talked about at the beginning of our presentation. Contour is going to watch any of those resources, and then it's going to instantiate and manage your Envoy proxy fleet. Essentially, it takes those resource configurations, translates them into an Envoy configuration, and then Envoy handles the proxying.
B
All right, so like I said, we now have the operator up and running and we're on the logs; we'll keep that there. Let's go down through here. I mentioned that for dev preview we don't really have any OpenShift-specific integration at this point.
B
Take a look at issue 112 on the operator repo, where we'd like to create an abstraction that allows Contour and the Contour operator to perform management for certain platforms. I won't dive down into too much of the detail, but look at issue 112 if you'd like to know more. What we need to do here is create, or establish, or associate the contour and contour-certgen service accounts with the nonroot SCC.
B
So let me go ahead and do that for the contour service account, and let's also do it for certgen. The key point here is the schema for this command. Come on. All right, there we go: system:serviceaccount:projectcontour:contour-certgen, and then system:serviceaccount:projectcontour:contour. You see the schema, and in that schema this portion is the namespace, and then this is the name of the service account.
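The two commands being run here follow this shape (a sketch; the `projectcontour` namespace matches where the gateway lands in this demo, and the service-account schema is `system:serviceaccount:<namespace>:<name>`):

```shell
# Associate the contour and contour-certgen service accounts with the
# nonroot SCC so Contour's pods can run on OpenShift.
oc adm policy add-scc-to-user nonroot \
  system:serviceaccount:projectcontour:contour
oc adm policy add-scc-to-user nonroot \
  system:serviceaccount:projectcontour:contour-certgen
```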
B
So keep that in mind: if you create your own gateway and you put it in namespace foo, make sure that when you add the contour and contour-certgen service accounts to the nonroot SCC, you are specifying the namespace of the gateway there. So that's good to go. Let's go ahead and provision our gateway.
B
Now, I say this is a gateway, but this is actually going to be multiple resources. Let's take a look at what they are; give it a second while we create. This one is unchanged because we already have the contour-operator namespace, but we create the projectcontour namespace; remember, right up here, our service accounts match this namespace. So we create the namespace that allows us to create these resources in it, and the Contour itself will be created in the operator's namespace.
B
But let's take a look here. The first thing we're going to look at is this custom resource called a Contour. Our Contour is named contour-gateway-sample, and again, it's been created in the contour-operator namespace, the same namespace as the operator, which is required. We see that it's ready and that it's been admitted by the gateway class.
B
Let's take a little closer look and dive into some of the details. You see that it references that sample gateway class. There's a bidirectional binding that occurs between this Contour custom resource and the gateway class that it's bound to, because the gateway class, as we'll see in a second, actually references this Contour resource. So there's a bidirectional binding between the two resources.
B
This field is actually ignored when the gateway class reference is specified, so we can skip that. But you see here a lot of details that are not meant to be expressed through Gateway API, because, again, different implementations will have different configuration settings and so forth. The networkPublishing field in the Contour custom resource allows us to specify the container ports and port numbers that Envoy will use, and the type of load balancer, so we're going to create an AWS external load balancer, and then the number of replicas that we're going to create for the Contour control plane. It also gives us some status here, to let us know how many of the Contours and Envoys are available, along with some status conditions. So all this looks really good, and it gives you a little more background on what configuration we're expressing through the Contour custom resource.
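Pieced together from the description above, the Contour custom resource looks roughly like this. Treat the field names as an approximation of the contour-operator v1alpha1 API, not an authoritative manifest:

```yaml
# Approximate sketch of the demo's Contour custom resource.
apiVersion: operator.projectcontour.io/v1alpha1
kind: Contour
metadata:
  name: contour-gateway-sample
  namespace: contour-operator    # must live in the operator's namespace
spec:
  replicas: 2                    # Contour control-plane replicas
  namespace:
    name: projectcontour         # where the Contour/Envoy workloads land
  networkPublishing:
    envoy:
      type: LoadBalancerService  # provisions an external load balancer (AWS here)
```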
B
Let's take a look at this gateway class now and dive into its details. Remember, the Contour references this gateway class, and we're going to see now that this gateway class references the Contour we just looked at, because this is where we do that: in the gateway class, using the parametersRef field. For this gateway class, going back to the earlier example, this could be our external gateway class. Instead of the name I gave it, contour-gateway-class-sample, or sample gateway class, based on the configuration we saw on the Contour, this could very well be named our external gateway class, because any gateways of this class will create an external AWS load balancer. So that's kind of the workflow and the linkage between these different resources.
B
The other key piece is the controller field. The Contour operator is going to be looking at gateway classes, and one of the first things it does is say: I see a gateway class; let me see if it specifies the controller string that's required for me to manage this gateway class. We use this string to tell Contour to manage gateway classes.
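Putting those two pieces together, the sample gateway class ties the controller string and the Contour reference like this. The controller value is my best recollection of the operator's string; verify it against the quick-start doc:

```yaml
# Sketch of the demo's GatewayClass (v1alpha1 schema).
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: sample-gatewayclass
spec:
  controller: projectcontour.io/contour-operator  # operator only manages classes with its string
  parametersRef:                                  # one half of the bidirectional binding
    group: operator.projectcontour.io
    kind: Contour
    name: contour-gateway-sample
```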
B
This allows clusters, similar to ingress controllers, to mix implementations: you could have a cluster with different ingress controllers, and the same thing applies to gateway classes and gateways. We may have multiple gateways, but we may want those to have different implementations, and the controller field is what's used for that. We see that this gateway class is admitted and that it's owned by the Contour operator. So, looking good so far. Let's take a look at the gateway here.
B
We use the gatewayClassName field to tell this gateway which gateway class it is part of, and then the gateway has multiple listeners. These are like the network endpoints that the gateway will be listening on, so it specifies what protocol it'll be listening on and the ports, and then we get into this routes field. This is something to really understand, because, along the same lines of the linkage that we're seeing throughout the APIs, this routes field is what allows us to link routes.
B
One of the next areas of the API that we'll dive into is the actual routes: the routing logic of, now that traffic is hitting a gateway, how do we actually route that traffic to the back-end resources, like a Service resource, that we want? This routes field is going to express what type of routes the gateway should bind to.
B
And what namespaces: do we only want to bind routes that are in the same namespace as the gateway? Do we want to allow routes across all namespaces? Or we can use selectors to be very specific about which routes we're binding to. So we've got a lot of flexibility there to create that binding with routes, and the same logic applies for our HTTPS listener as well. And then we have our status conditions.
B
So we have status conditions; you see that the gateway is ready to serve routes, and it's ready to go.
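The gateway just described can be sketched as follows, using the v1alpha1 shape of the listener `routes` stanza (name and ports are illustrative):

```yaml
# Sketch of the demo's Gateway; only the HTTP listener is shown.
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: sample-gatewayclass  # which class this gateway belongs to
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      namespaces:
        from: Same   # only bind routes in the gateway's own namespace
```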
B
The next step is to actually create a route, so let's go ahead and do that. Optionally, too, let's take a look at the infrastructure that was created by the gateway. When we instantiated this gateway, the operator took action on it and did a bunch of stuff for us: it created a deployment to manage our control plane.
B
It created a daemon set to establish our data plane, and it did a bunch of other stuff: config maps, service accounts, all this kind of thing to make Contour and Envoy work in harmony.
B
But let's go ahead and create a sample workload. We're going to use kuard, and you can go get a little background on the kuard app if you'd like, but we'll go ahead and provision kuard, creating a deployment and the service. And then you see that this is our HTTPRoute, the route that the gateway is going to be binding to.
B
Things are running and looking good. And what I wanted to do as well, that I didn't show you before, is the logs; there's a reason why I had the logs for the operator up. As I mentioned, when the operator sees the Contour custom resource and a gateway class resource, it's going to reconcile those, making sure that they're valid, that they're referencing one another, all that good stuff. And then, again, the key resource is that gateway resource, and when it sees a gateway resource, it starts doing a whole bunch of work for us: the RBAC for Contour, the config map, the daemon set to manage the Envoy proxy fleet, all this stuff. I just wanted to show you that really quickly. But back to our example workload: we see the pods are running, we've fronted those pods with the ClusterIP service, and we've created an HTTPRoute. Let's go back to our documentation and actually take a look at what this route looks like.
B
So again, with that theme of bidirectional relationships: we've specified for our gateway that we're going to allow the same namespace. If our gateway was not in the namespace projectcontour, which is the namespace of this HTTPRoute, we would not bind to the gateway, even if the gateway had a different configuration, because both need to agree on the same configuration. But fortunately we do, because of the gateway previously. Let me see, it's back here still.
B
Here's the gateway configuration that says, hey, we're going to allow routes from the same namespace, and you see here again, on the route side, we're saying: allow gateways from the same namespace.
B
And so with the route we say: okay, what hostname are we going to use for routing this traffic? This is going to align with the Host header. Basically, any requests that hit the gateway with a Host header of local.projectcontour.io will match this route. So the gateway knows to select this route not just because of its gateway policy here, but because the request is coming in with the appropriate hostname header. And then I've got some rules.
B
I'm going to forward this request to endpoints associated with the service named kuard on this particular port, and we have a match here: before we do the forward, we're going to match not only the hostname but also the path. So any requests that hit this hostname on the root path, or really any subpath, get forwarded to endpoints of the service named kuard. And then we also have status conditions, so we can see what gateways this route is bound to.
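Assembled from the walkthrough above, the kuard HTTPRoute is roughly (v1alpha1 schema; port is assumed):

```yaml
# Sketch of the demo's HTTPRoute for the kuard workload.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: kuard
  namespace: projectcontour   # same namespace as the gateway, so the binding succeeds
spec:
  gateways:
    allow: SameNamespace      # the route's half of the bidirectional agreement
  hostnames:
  - local.projectcontour.io   # must match the request's Host header
  rules:
  - matches:
    - path:
        type: Prefix
        value: /              # root path and any subpath
    forwardTo:
    - serviceName: kuard
      port: 80
```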
B
This is the gateway that we created and went through the details on, and this route is admitted because it's valid: it's passed validation, it's associated to a gateway, and it's admitted. Admitted true is the key status condition we're looking for, so everything's looking good so far. Let's go back here to our documentation and go ahead and test connectivity through our gateway.
B
And you see, because I don't have a DNS name created for this, I have to go ahead and supply this Host header here. I get the gateway IP from the hostname, or, depending on your cloud provider, as I specify here in these directions, you may need to swap out hostname for IP. This cluster is running on AWS, which uses hostnames for load balancer ingress.
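The connectivity test amounts to the following sketch. The Envoy service name and namespace are assumptions based on the demo's layout, and on AWS the load balancer exposes a hostname rather than an IP:

```shell
# Look up the external address of the Envoy service fronting the gateway,
# then send a request carrying the Host header the HTTPRoute expects.
GATEWAY=$(oc -n projectcontour get svc envoy \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -H "Host: local.projectcontour.io" "http://${GATEWAY}/"
```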
B
So we've established infrastructure using the Gateway APIs; the implementation is Contour, and Contour manages Envoy.
B
This is all upstream, running on OpenShift, and we have verified connectivity from my client here, running on my laptop, all the way through my OpenShift cluster running on AWS, to the kuard application. What we can do is verify that those requests did go through our Envoy proxy fleet, so let's look at the logs of the Envoy daemon set. You see here that it found three pods and it's using one of them. So, since we have three, let's see here.
B
So the request did not go through this particular pod in the daemon set. Let's go ahead and check the logs of the others.
B
And last but not least, let's actually take a look at the deployment. Maybe we'll have to do the same thing: get into a particular pod of the kuard application, and we can see the requests actually hitting. We see here no requests coming in, so we probably have to do the same thing here with our kuard endpoints, of which we have these three. Okay, logs, and let's see.
B
All right, the first one. Which one is this that we looked at? So, three pods. Let's try this one.
B
That's going to be the IP of the Envoy proxy that serviced the request. So we see that it came in on .223; there's .223. And when we verified, we looked at those different Envoy proxy logs; it should be the pod ending in 75 where we saw the request come in.
B
So let's go ahead; we've got some time here. I'll go ahead and stop and hand it back to Diane and others that may have any questions.
A
Well, I think that was really a great introduction to the Gateway API, so thanks very much. Cool and collected, with all of your resource links there, so I'm thrilled with that; anyone should be able to follow along with the quick start. There were a couple of questions in the chat, and Mark Curry has joined us as well.
A
He is the PM for some of this, and there are a few other folks here that are working with Mika and others; I'll meet you in the chat. I think you answered a lot of them. One of them, early on, was: what are the alternatives to Contour? I pointed people to Contour's FAQ, and you did cover that, so I think we're good with that one. And then Noelle was asking:
A
Could you explain how all this fits in with service mesh, and is Contour intended to replace Envoy? I think you covered that one; maybe Mark is there if he wants to go a little bit deeper on it.
B
Yeah, before Mark jumps in, let me just talk about the service mesh. In one of the slides I shared, you see the different implementations, one of those being Istio. Whether it's Istio, Knative, or OpenShift with our Route resource, these are the issues that Gateway API is trying to tackle. Most of these projects started off their initiative with Ingress; I remember early on with Istio.
B
It started off using standard Kubernetes Ingress, and then, like pretty much all these projects, they get to a point where it's like: you know what, Ingress is just not expressive enough to meet my needs; I need to go create a custom resource. So Istio goes out and creates their own Gateway and VirtualService, and each project goes and does this, even Contour with the HTTPProxy resource.
B
So what Gateway API is trying to do is create this common abstraction, this common API that Knative, Istio, any implementation can use. The positive thing about that is we reached out to these different community and project leaders, so we've worked very closely with Istio, which is why they're on that list of supported implementations, and the same with Knative.
B
You know, I talked about the route resource; that's why we're here at the table. And so, you know, it's really meant to be kind of that — you know, I like what Mark has shared in the past: it unifies ingress. So it's like, hey, I don't care. If I want to provide ingress into my cluster for a standard Kubernetes service, right, typically that would be done using ingress, right? Or for OpenShift.
B
It would be the route, right? Or if it's Istio, okay, I'm going to create a gateway and a virtual service. Or if it's Knative... because, you know, it doesn't matter. The cluster operator doesn't need to know 20 different APIs to basically provide ingress.
B
They can now start to say, hey, the path forward for most of these projects, if not all of them, is: let's start converging on gateway API, so we have this unified way of expressing ingress no matter, you know, what the back end is — if it's, again, a standard, you know, Kubernetes service, some custom backend, serverless, service mesh.
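The unified model being described can be sketched in two resources: a Gateway owned by the cluster operator, and an HTTPRoute owned by the application team. This is only an illustrative sketch, not anything shipped in the 4.8 dev preview — the names (`example-gateway`, `example-class`, `example-service`) are hypothetical, and the field names follow the later `gateway.networking.k8s.io/v1` schema, whereas the API group at the time of this talk was still the alpha `networking.x-k8s.io`:

```yaml
# Illustrative sketch only: one Gateway plus one HTTPRoute expressing
# ingress to a plain Kubernetes Service. Names are hypothetical.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  # Selects which implementation (Contour, Istio, GKE, ...) handles this Gateway.
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway   # attach this route to the Gateway above
  rules:
  - backendRefs:
    - name: example-service # a standard Kubernetes Service
      port: 8080
```

The same pair of resources works unchanged regardless of which implementation backs the GatewayClass, which is the unification being discussed here.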
C
No, I think they covered it very well. So we definitely want to unify, as Daneyon was saying, across some of the different layered products of OpenShift. So we have service mesh, we have OpenShift virtualization, the 3scale products. We want to provide a mechanism that's going to work equally well across all of those, to simplify the decision and configuration process, as well as unifying efforts. So this, as Daneyon was saying, represents a unifying effort. Ultimately, that's probably the biggest outcome.
A
And it seems like a lot of the questions that are coming in are "Contour instead of this" or "Contour with this or that." It's really about getting alignment across all of the gateways and the APIs, which I think is an amazing effort, and it is something that's been worked through the SIG in CNCF as well. So this has been a pretty interesting collaboration.
C
And in 4.8 we're targeting a developer preview. So, as Daneyon was saying earlier, this is something the users can kick the tires on, so to speak, and they can get a heads-up of the direction that we're going.
A
And if people want to get involved in this project, whether it's Contour or the gateway API, where's the best place for them to connect with you guys?
B
You know, I'd say on the Slack channel. So, of course, you know, Contour is a CNCF project. Let me actually just pop a link to the community here in the chat window, one second.
B
So this is a really good resource to reference, and interestingly enough, gateway API has the same resource as well. So those are the community links for both projects. And yeah, we'd appreciate, you know, more new happy faces coming in, even if it's just asking questions — or, you know, we're always looking for use cases, right? I mean, for gateway API we've got maintainers from Red Hat, myself, from Project Contour, from Kong.
B
You know, all these different implementations — of course, Google as well. And so, yeah, I mean, we would appreciate, again, even if it's just, you know, questions or use cases. You know, looking forward to getting more people involved.
A
B
Yeah, Diane, thanks for bringing that up, I forgot to mention that. So yeah, at the upcoming KubeCon we've got presentations and live meetings for both projects. So look for the SIG Network meeting, along with presentations for gateway API, and that's going to be really interesting, because Rob Scott from Google is going to do a few different demonstrations to really show the different implementations. So he's going to highlight Contour with some of the other implementations, and also demonstrate an advanced use case. You know, we worked with SIG Multicluster pretty early on, so don't think about gateway API just in the context of a single cluster being able to, like, do traffic splitting across two different service backends.
B
Take that same idea, but bring it up a level and say, well, what if I had multiple clusters and I wanted to load-share requests coming in across those clusters? So he's going to be demonstrating some of the advanced features there with the multi-cluster traffic splitting. And then, yeah, for Project Contour, we recorded a briefing, and then we have a couple of, you know, meet-the-maintainer live sessions, and I'm definitely looking forward to having a nice open discussion with people who are interested.
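The single-cluster traffic splitting he mentions looks roughly like this in HTTPRoute terms. Again a hedged sketch with hypothetical resource names, using the later `gateway.networking.k8s.io/v1` field names rather than the alpha API of this era; the multi-cluster demo he describes layers SIG Multicluster's service APIs on top of the same idea:

```yaml
# Illustrative sketch: weighted split of requests across two backends.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: split-route
spec:
  parentRefs:
  - name: example-gateway  # hypothetical Gateway this route attaches to
  rules:
  - backendRefs:
    - name: service-v1     # hypothetical backend receiving ~90% of traffic
      port: 8080
      weight: 90
    - name: service-v2     # hypothetical backend receiving ~10% of traffic
      port: 8080
      weight: 10
```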
A
So I think that's probably, besides the SIG Network meetings and the community meetings, the next big event for probably everybody who's listening to this. Besides the Red Hat Summit April event and the 4.8 release cycle, KubeCon EU is probably the next juncture where, I'm thinking, any new features or use cases or anything are going to get showcased. And please, if you're listening to this before or after, reach out to Daneyon, to the folks — Mark and others. We'd love to get your feedback on using it in the developer preview for OpenShift, and to hear what you're doing and how you're using it. So definitely reach out to us, and we look forward to getting an update post-KubeCon on what new features and functionality get added in as the project matures. So, well done.
A
And a lot of work went on in the background, a lot of collaboration across communities, across upstream projects. So this is really, you know, a really nice way to showcase all the amazing cross-community collaboration that goes on in the background on some of these CNCF projects — you know, really a nice showcase.
A
So thank you very much for today, and we'll just do this again, hopefully in another couple of months, and see where we're at then and get your feedback. Especially if someone's using this in production, rolls it out in 4.8 and wants to talk about their experience — I'd love to hear that too. So that would be awesome. So thanks, Daneyon, it's always a pleasure, and Mark, awesome work shepherding this all through. Not seeing any other questions in the chat, so I think you all get four minutes back into your day.
A
So go grab a cup of coffee and enjoy the rest of your week.