A: This is an official live stream of the CNCF and, as such, it's subject to the CNCF code of conduct. That basically means be nice to everyone: be respectful to the presenters, to the other people in chat, and to me too, please. For friends who are joining us live, please do say hello in the chat; I love how global our audience is. Tell us where you're tuning in from, and as always, if you have questions as we go, please put those in chat too.
B: We make an enterprise version of Argo that we've shipped to our users, and I'm also an Argo maintainer. It's in that capacity that I'm coming today to talk about some really cool work that has been done by our team here at Codefresh to create a rollouts extension that lets you take advantage of everything in Gateway API. This basically means that you can define your progressive delivery once and it will work with any Gateway API provider, which means any ingress provider.
B: So, to help me do that, I asked Rob if he would join; he's coming to us from Google and he works on that project. Rob, do you want to introduce yourself?
C: Sure, yeah. I'm Rob Scott, a software engineer at Google. I've been working on Gateway API since, well, the very beginning; this project started at KubeCon San Diego, which is nearly four years ago now, and it's been quite a process. I'm really excited about Gateway API. I like to call it the most collaborative API in Kubernetes history: we have something like 150 different people who have contributed to it, and it really is the next generation of many APIs. We'll get into that shortly.
B: That's perfect. And if you don't follow Rob on Twitter, I think he is... Rob Scott? Is that right, Rob?
C: Robert J Scott. I wish; my name is way too common, I wish I got it. You know, that's my GitHub handle, though, so halfway.
B: Rob Scott on GitHub, okay, and Robert J Scott on Twitter. You can follow me on Twitter at @todaywasawesome. And as we go along, if you have questions or comments, throw them in the chat. If we don't see them immediately, our host will either help out and interrupt us to let us know about them, or we'll get to them at the end.
B: We'll also have time for questions, and we also have a giveaway, I think, at the end, so stick around for that and you'll have a chance to get some good stuff. With that, the plan today is pretty simple: we're going to introduce Gateway API, we'll talk about what it is and how exactly it works, then we'll talk about using it with Argo Rollouts, we'll give a little demo, and we'll have time for questions and break for high fives and lunch.
B: So with that, let's pass it over to Rob. Why don't you introduce Gateway API?
C: Awesome, well, thanks for the introduction. So yeah, Gateway API is, as I mentioned, a project that's been going on for quite some time now behind the scenes, but we're really getting to the point where the momentum has just gotten incredible. I think we have more implementations of Gateway API today than we ever had of Ingress. We are about to go GA in a few months, and this project has really taken off.
C: It would be easy to have gotten this far and maybe not noticed everything that's happening with Gateway API, because we've been doing a lot in the past few years. So let me just take a few minutes to explain what I think of when I think of Gateway API.
C: This is a bit of a jumble of words, but I think of Gateway API as a single, unified, extensible, role-oriented API for Kubernetes service networking. We'll dive into some of these words in a bit more detail, because it's a lot. But first, let's talk about the scope. The scope of this is really large: it includes all Kubernetes services, both at L4, which would be Service type LoadBalancer, and at L7, which would be Ingress and service mesh.
C: So we're trying to capture a lot within the scope of this API. Then, extensibility: this is such a key part of this API. Those of you who have used the Ingress API before may have noticed that it really went for the idea of the lowest common denominator. With Ingress, we tried to have an API that worked for every possible implementation, which meant that we only covered the common features, which was a very small set.
C: We didn't provide a good way for extensibility either, so that just meant annotations everywhere. It was implementations working with what they had, but it was kind of a mess. So with Gateway API we wanted to do two things: one, develop a much larger base of functionality defined in the core API itself; and two, provide extensibility, lots of ways to plug in implementation-specific extensions on top of the API, so implementations wouldn't be stuck with more annotations.
C: So there's lots of work going on there, but I'm excited about the features that we already have and the ways we have to build on top of this API that are not just annotations. Then the role-oriented part is so key to how we designed and developed this API. The Ingress API, when you think about it, is really just one key resource.
C: You have an Ingress resource and, more recently, an IngressClass resource as well. With Gateway API, before we defined the resources, we wanted to define the roles we were seeing in so many organizations, because Kubernetes authorization via RBAC is very much dependent on the actual resource types. You could grant someone write access to an Ingress, but you couldn't say "I want this person to be able to control just TLS", for example, or "I want this person to be able to control just routing configuration".
C: With Gateway API we tried to bucket those resources in sensible ways, so that larger organizations that may have different people fulfilling these roles would have that kind of access and would be able to say: okay, this specific role, say an infrastructure provider or a cluster operator, is going to have this level of access to this set of resources. That's been defined well throughout the API, and we'll get into it in just a little bit. And then, finally, this is a Kubernetes API.
C: Although this API is built with CRDs, it's Kubernetes-native: it's an official Kubernetes API, it's open source, it's portable, and we've invested a lot in conformance testing. All of this means that you're going to have a consistent experience across any of these implementations. We've got over 20 implementations of the API today, and it just keeps on growing. Also, because these are CRDs, you can install the latest version of this API on any Kubernetes cluster you're running; you don't have to wait for Kubernetes 1.28 or 1.29.
A: There's a comment that I want to acknowledge, just about vocabulary. There's some confusion around "API gateway" and "Gateway API". Can you help untangle that, please?
C: Yeah, that's a really good question. We're working on improving our docs to clarify that a bit more, but I think of "API gateway" as a fairly broad category of products, and I think of Gateway API as, well, an API that enables some subset of API gateway functionality. API gateway products are usually broader in scope than just Gateway API, and Gateway API also covers some things, like mesh, that aren't usually in scope for an API gateway. So I know the name is confusing, but that's probably the best way of differentiating the two.
C: Cool, good question, great question. All right, so let's dive into the resource model. When you're talking about Gateway API, there are really three key kinds of resources. First, we've got a GatewayClass, which defines the type of infrastructure that a Gateway would be provisioned with. Then you have a Gateway, where you say what you want.
C: We'll use GKE as an example, because that's the product I work on on top of Gateway API. In GKE we provision a few different GatewayClasses, one for an external load balancer and one for an internal load balancer, as an example. So you might create a new Gateway of class gke-xlb, and that's basically saying: hey, I want an external load balancer provisioned by GKE. That's what the Gateway represents here.
C: Finally, the next step is the routing layer, and the routes are the real power of this API. In this example I have an HTTPRoute and a TCPRoute. We have routes for lots of different protocols, including gRPC, UDP, and TLS, not all shown here, and I expect there will be more as time goes on. The routes are very, very full of capabilities.
C: As an example, here we're showing that an HTTPRoute can do traffic splitting between service A and service B; you could also use a TCPRoute to do traffic splitting. There's a long list of features that we'll get into in a little bit, but you can't look at the resource model without also thinking about the roles we had in mind here.
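The three tiers being described (GatewayClass, Gateway, route) can be sketched as Kubernetes manifests. This is a minimal illustration, not from the talk's slides; the class, controller, gateway, and service names are all invented:

```yaml
# GatewayClass: the type of infrastructure, named by its controller
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
---
# Gateway: an instance of that infrastructure, with a listener
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute: routing logic, splitting traffic between two Services
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - backendRefs:
    - name: service-a
      port: 80
      weight: 90
    - name: service-b
      port: 80
      weight: 10
```

The `weight` fields on the backendRefs are what express the traffic split between service A and service B mentioned above.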
C: First up, we have infrastructure providers. In the case of a cloud provider like GCP, clusters that you create will come pre-provisioned with a set of GatewayClasses that we provide for you. In some other organizations, you may instead choose to provide your own set of GatewayClasses for each cluster. We're thinking of that as the infrastructure-provider level.
C: Next, we have Gateways, which we expect are going to be owned largely by cluster operators. In that case, you would have a cluster operator who might set up an external production load balancer, an internal production load balancer, maybe a test LB somewhere, and then you can define the set of namespaces that can expose their applications through that load balancer, for example.
C: That's a really high-level idea of one way that Gateways could be managed. Then, finally, I think one of the most consistent things we see is that application developers are who we expect to be managing all of the routing logic across your applications. So although the actual load balancers, the GatewayClasses, and so on are things that we expect will be managed exclusively by people above that, like cluster operators or infrastructure providers, I would expect application developers to be managing most of the routing logic here.
C: All right, keeping moving here: there are a lot of features this unlocks. First up, let's look at what Gateway API enables. There's TLS config; there's HTTP matching, with a lot of new things: header, method, query param. We're enabling the ability to cross namespace boundaries, which probably seems rather scary at first, but it's super powerful. We've made sure there's a two-way handshake, so both sides of the connection are agreeing to that.
C: What that means is, you can say: I want a Gateway to be defined in my infrastructure namespace, and I want it to be able to connect to routes in these four application namespaces, for example. Similarly, cross-namespace forwarding: you can connect routes to services in different namespaces as well. There are a lot of filters in HTTPRoute, like header modification, request mirroring, request redirects, and URL rewrites; there's a lot here.
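The two-way handshake for crossing namespace boundaries is expressed with a ReferenceGrant created in the target namespace. A minimal sketch, with all names illustrative:

```yaml
# Allows HTTPRoutes in namespace "infra" to forward to Services in
# namespace "app-1"; the grant lives in the target namespace, so the
# owner of app-1 is explicitly agreeing to the reference.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-infra-routes
  namespace: app-1
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: infra
  to:
  - group: ""          # core API group (Service)
    kind: Service
```

Without this grant, an HTTPRoute in `infra` that points at a Service in `app-1` is rejected; both sides have to agree.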
C: Now, some of you may say: well, Ingress did have some of these things. And you're right, but I'll just highlight what the Ingress API included, and it's a very, very small subset of what Gateway API already has. The difference is that the Ingress API is basically frozen in time. It will continue to be supported, but it's not going to grow; it is what it is.
C: Gateway API, by contrast, is continuing to grow rapidly, and I expect the feature set will only continue to expand as we go. All right, we talked about the huge set of features these APIs have, but maybe we should take a step back and focus on the similarities. I have a simple Ingress and HTTPRoute example here, and I want to show you the similarities, so you can see that this API might actually feel pretty familiar to you.
C: In Ingress, if you wanted to say your Ingress was implemented by, say, nginx, you'd set ingressClassName: nginx. In an HTTPRoute, what you'd do is say: hey, I want this route to be attached to my nginx Gateway, and you do that via parentRefs. At the same time, you could add another parent to the same routing configuration and have it implemented by more than one Gateway.
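The comparison being described might look roughly like this; the hostnames, service name, and gateway name are made up for illustration:

```yaml
# Ingress: the implementation is chosen via ingressClassName
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
---
# HTTPRoute: attached to one (or more) Gateways via parentRefs
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: demo
spec:
  parentRefs:
  - name: nginx-gateway
  hostnames:
  - demo.example.com
  rules:
  - backendRefs:
    - name: demo
      port: 80
```

Because parentRefs is a list, the same HTTPRoute can be attached to more than one Gateway, which is the multi-parent point made above.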
C: All right, next up, let's dive into Gateways themselves, because we just saw that HTTPRoute and Ingress are awfully similar, but Gateway is this new concept in the hierarchy, and you may be wondering: why exactly do we need a Gateway? In Ingress we had Ingress and IngressClass, and there was some significant variation in implementations here. For some implementations of the Ingress API, all Ingresses were merged together behind one load balancer, and for other implementations,
C: every Ingress resource was mapped to a different load balancer behind the scenes. With Gateway, we've introduced a resource to actually represent that level of the hierarchy, instead of just having implementation-specific behavior there.
C: In this case we have a Gateway, foo-lb, and you can attach as many routes as you want to that Gateway, or you can segment out Gateways depending on different groups of infrastructure, different groups of applications, environments, whatever it might be. Gateways really represent an instance of a load balancer or a proxy. They define listeners; in this example we've got a listener on HTTPS 443 with some basic TLS configuration associated with it, and you can attach routes to these Gateways.
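A Gateway like the foo-lb example, with an HTTPS listener on 443 and basic TLS config, might be sketched as follows; the class name and certificate Secret name are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: foo-lb
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: foo-cert        # Secret holding the TLS certificate
    allowedRoutes:
      namespaces:
        from: Same            # which namespaces may attach routes
```

The allowedRoutes stanza is where a cluster operator would widen attachment to selected application namespaces, as described earlier.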
C: First, we've got in-cluster gateways. In-cluster gateways will be really familiar from a lot of Ingress implementations today: when you deploy a Gateway resource, you get a group of pods created in your cluster, often backed by something like a Service of type LoadBalancer, and these become your data plane, your L7 load-balancing infrastructure for that Gateway. The beauty of these kinds of implementations is that they behave the same way on any Kubernetes cluster.
C: On the other side, you've got cloud provider implementations. In the GKE implementation that I work on, when you deploy a Gateway, it maps one-to-one with a cloud load balancer. So a Gateway results in spinning up a cloud load balancer, and it enables you to load balance directly from the cloud load balancer to pods without any kind of intermediate hop.
C: v0.8.0 is just on the cusp of releasing; we're just starting the API review process now. GAMMA, which is our mesh standard, is about to go experimental, which means it's going to become part of a release and something that is formally supported. Believe it or not, I haven't had enough time to talk about the mesh side of things here, but we have three meshes that are already fully conformant using this API for mesh: that includes Istio, Linkerd, and Kuma.
C: I am really excited to see portability across service mesh implementations: the same API you can configure load balancers with, you can now also configure meshes with. Then routability is a new concept coming in 0.8 that allows you to define, in a standard and portable way, where a Gateway can be reached; so if you want a public, private, or cluster-local Gateway, that's a new concept coming in this release of Gateway API. And then 1.0 is a release we're targeting for October, and that's a huge milestone for us.
C: We're anticipating that Gateway, GatewayClass, and HTTPRoute are all going to go to GA, and along with that we're putting a lot of effort into conformance, to ensure it's fully covered across the full feature set, so that we can provide every user of every implementation of this API a consistent experience.
C: With that, I think that's all I've got for Gateway API. We'll have plenty of time at the end to answer any questions, unless I missed some that came in here.
B: There is a question about the guiding philosophy for Gateway API; let's actually save that one till the end, because I think that's a really good discussion, and it'll take just a minute. In the meantime, let's kick off and start talking about how this is going to tie in with Argo Rollouts. Sound good?
B: For those of you who aren't familiar with Argo Rollouts, we'll do just a very brief introduction; for those of you who are already familiar, this won't be too long. If you're not familiar with the Argo project, it's a CNCF project that is now graduated, and it's made up of four tools: Argo Workflows, which is a general-purpose workflow engine for Kubernetes; Argo Events, which is for eventing those workflows; Argo CD, which is a GitOps operator,
B: so you can define a source of truth and a target environment, and it will keep things in sync and follow a policy; and, of course, Argo Rollouts, which is for doing progressive delivery. Those are the four tools within the Argo project; we're going to be talking about Argo Rollouts. Of course, Argo Rollouts and Argo CD integrate very well together, but one thing I want to call out before I move off
B: this slide is that we have something called Argo Labs. Argoproj Labs is essentially where people can go and create add-ons, plugins, and tools that are meant to work with Argo, and work on them in an open-source fashion. We've got a couple of tools in Argo Labs: if you've used Argo CD Autopilot, that's something the team at Codefresh built, which is basically an opinionated way of setting up Argo CD. And, of course, we're going to be talking about a plugin for Argo Rollouts that is in Argo Labs today.
B: First of all, how does progressive delivery work? It's very simple: you have canary releases, or you have blue-green deployments. For a canary release, you deploy a new version of your app, you give a percentage of traffic to that new version, and if it works, you work your way up to 100%. It's very simple, and most people at this point are familiar with the concept of a canary release. The reason you do a canary release is that it reduces the blast radius of catastrophic failures.
B: Many of you are familiar with the concept of using things like feature flags, which are really for testing user interaction with features; canary releases are really a way to de-risk your deployments. Every time you make a software change, there's some risk that maybe you're going to introduce some sort of breaking change, that there's going to be some sort of regression, and a canary says:
B: well, I'm going to give this to 10% of my traffic and we'll do a health check; if everything works, it moves on, and if it doesn't, and I had some sort of impact on those users, I'm able to roll it back very quickly, and it's only a small subset of my users that see that impact. So it's a way of reducing the risk, reducing the blast radius, and it's very effective for that. Blue-green is a similar strategy, except with blue-green
B: you bring up your entire stack separately and then switch over the traffic all at once, and that's also a way to make sure the new version is going to work before it's exposed to users. These are both great options for de-risking your deployments; we're going to talk mostly about canary today. In this context, within Argo (this slide says Argo CD; it should say Argo Rollouts, sorry about that), you can see that in Argo Rollouts we use what's called a Rollout object.
B: This is a custom resource in Kubernetes, and it is essentially a Deployment with some additional information. You can actually take an existing Deployment and create a Rollout that consumes that Deployment, so you don't even need to modify your Deployments if you don't want to. If you did want to change a Deployment into a Rollout, you literally just need to change the kind from Deployment to Rollout and the apiVersion from apps/v1 to argoproj.io/v1alpha1, and then you can add your additional steps.
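A minimal sketch of that conversion; only the two marked lines differ from an ordinary Deployment, and the names, image, and steps are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1   # was: apps/v1
kind: Rollout                      # was: Deployment
metadata:
  name: demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: example/demo:v2
  strategy:
    canary:                        # the added rollout-specific part
      steps:
      - setWeight: 20
      - pause: {}
```

Everything under `template` is the same pod spec a Deployment would carry; the `strategy.canary` block is the additional information the Rollout adds.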
B: You can see here we've got a strategy defined to do a canary, and this is going to run an analysis template. The analysis template is basically how we're going to do a health check, and analysis templates integrate with Prometheus, with Datadog, with basically any metrics provider. So you can run that, and then it's going to set some arguments and some steps.
B: That's the basic, simplest way of looking at how these rollouts function. Actually, I'll show you this first: we had a great slide earlier about the structure of gateways and APIs and things like that.
B: As an example, if we were looking at Istio: Istio has its ingress, and it provides a gateway that allows you to select which service stack is going to receive which traffic, which is basically which one's going to be exposed to users. The way this works with a canary is that you're basically going to bring up a new service; in the case of Istio,
B: you would attach it to that VirtualService, and then you would pass arguments to the VirtualService to set a percentage of traffic routing to the canary version.
B: Now, I've done this manually for years; way back in 2017, I think, we built our first canary step, and it was inside of a CI/CD pipeline, and it was kind of complex. But now, with Argo Rollouts, it is so much easier to do, because it's all declarative, it's all done as a matter of policy, and you can just operate it that way, no problem. Using Argo Rollouts today, we have a number of different providers that we support, and each of those requires its own arguments.
B: For example, here you can see a collection of different configurations. These all do the exact same thing, except you can see that each of them has a different trafficRouting argument, and the trafficRouting specifies the arguments for Ambassador, for an Amazon load balancer, for Traefik... what's this one doing? Oh, this one's Istio, and then we've got another one over here that I think is doing nginx.
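Two of those provider-specific trafficRouting blocks, side by side, might look like this; these are fragments of a Rollout spec, and the resource names are invented for illustration:

```yaml
# Istio: the split is driven through a VirtualService
strategy:
  canary:
    trafficRouting:
      istio:
        virtualService:
          name: demo-vsvc
          routes:
          - primary
---
# nginx: the split is driven through a stable Ingress
strategy:
  canary:
    trafficRouting:
      nginx:
        stableIngress: demo-ingress
```

Each provider block has its own schema, which is exactly the specialization problem the Gateway API plugin removes.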
B: Each of these requires its own specification, and so each of your rollouts needs to be specialized for the gateway that you're using; that's the way it works today. Just to show you what these look like in action, I've got my Codefresh dashboard here, where we're exposing Argo Rollouts, and if I look at this canary that's currently in progress, or sorry, this one that's finished, you can see it was set up to send 20% of traffic, then pause, then go to forty percent, then 60, then 80.
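That 20/40/60/80 progression maps onto canary steps like these; the pause behavior is illustrative (a pause with no duration waits for manual promotion):

```yaml
strategy:
  canary:
    steps:
    - setWeight: 20
    - pause: {}               # wait here until promoted
    - setWeight: 40
    - pause: {duration: 1m}
    - setWeight: 60
    - pause: {duration: 1m}
    - setWeight: 80
    - pause: {duration: 1m}
```

After the final step, the rollout shifts the remaining traffic to the new version and marks it stable.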
B: This is what they look like, and I can actually kick off a new one right now, where I run this, sync it, and hit synchronize; this is going to kick off a new canary, so you can actually see it in action. This is just how Argo Rollouts works today, and it's very easy and nice and makes things pretty smooth. Let's see, did it run? Can you see this thing happen?
B: It's not necessarily critical; I didn't look at this one beforehand, so maybe there's some syncing thing going on. Not a big deal, though; it was just to give you an idea of how a canary functions in general. So let's go back and look at these templates. You can see
B: I've got all my different specialized templates here available. But what if we could use a single API? This is what got us at the Argo project so interested in Gateway API: every time we wanted to add support for some different gateway provider, we had to build it from scratch. We had support built in for a handful, and then people were always opening issues saying: hey, what about Kong? What about Gloo? What about this, what about that? And it's very time-consuming,
B: basically because we maintainers have to go and learn each of these different gateways and figure out how they function in order to add the support to Argo Rollouts. Well, we don't have to do that anymore, because, thanks to Gateway API, we developed a plugin called rollouts-plugin-trafficrouter-gatewayapi. That's a great name; we're gonna have the marketing folks work on that one. But basically, this provides a unified interface for Argo Rollouts to consume any Gateway API-compatible router and routing mechanism.
B: We'll throw the link to that project in the chat here. Go give it a star, give it a like; we appreciate that, and it helps people get more visibility on the project. This is a pretty new project; this is really where we're announcing it. It's been in use for the last couple of months by a few different teams, and so far all the feedback has been very positive, but today we really want to open it up and have everybody start using it and giving us that feedback.
B: So what does this provide us, and what are the limitations? Well, the great things about it: first of all, it really expands our support for traffic management providers with Argo Rollouts, and that's awesome. From my perspective, hey, it makes things easier, because we don't have to re-implement it for every provider, but it's also easier for you as the end user, because those templates that I showed you earlier, where you have to specify a different argument for each provider: you don't have to do that anymore.
B
You
can
just
have
one
unified
template
that
you
use
and,
of
course,
there's
portability
across
these
providers.
So
if
you
have
a
service
that
you've
defined
a
rollout
that
you've
defined,
you
can
use
that
in
lots
of
different
places
and
you
don't
need
to
tweak
it
or
modify
it
for
these
different
providers.
Now
there
are
some
limitations
and
we'll
talk
a
little
bit
more
about
these
and
Rob.
Keep
me
honest.
You
know
I'm,
not
the
Gateway,
API
expert,
I
work
on
Agro
and
I'm,
just
a
consumer
of
Gateway
API.
B
So
Rob
can
tell
me
if
I
get
anything
wrong
here.
There
are
some
limitations
right
now
where
the
granularity
of
control
for
routes
is.
B
May
you
may
not
have
as
many
options
with
certain
providers
so,
for
example,
if
you
want
to
do
something
like
header
based
routing,
you
actually
need
to
extend
within
Argo
sorry
within
Gateway
API,
there's,
basically
a
specific
provider
settings
for
those
kinds
of
extensive
things.
So
if
you
want
to
do
more
advanced
things,
there
are
ways
to
do
it,
but
it
may
be
a
little
bit
more
complex
than
if
you
were
going
with
just
a
native
built-in
provider.
B
So
that's
a
minor
complaint
and
something
that
the
Gateway
API
team
is
aware
of
and
I'm
sure
is
always
improving.
I
mean
we
can
talk
more
about
those,
but
if
you're
using
Argo
rollouts
today
you
have
native
support
for
working
with
Ambassador
awslb,
istio,
nginx,
Apache
traffic
and
SMI,
which
is
a
Linker
D.
So
those
are
all
natively
supported
in
Argo
rollouts
today,
but
by
using
Gateway
API.
We
are
also
able
to
support
all
of
these
I
I
think,
except
for
link
or
D
I.
B
Don't
think
SMI
is
supported
within
Gateway
API,
but
I
think
everything
else
is
supported
by
Gateway
API.
In
addition
to
all
of
these
other
ones,
Envoy
flow
mesh,
glue,
gke's
load,
balancer,
proxy
console,
Kong,
Kuma,
Lightspeed,
stunner,
Contour,
cilium,
big
IP,
anecdotal
epic.
All
of
these
additional
options
are
supported,
so
this
is
3x.
This
is
the
Apple
presentation.
Part
of
this
of
this
we
now
have
3x
bore
support
for
different
Gateway
apis.
B
So
if
you
are
using
any
of
these,
if
you
have
a
variety
of
them-
and
you
may
have
situations
where
you
want
to
use
one
over
another-
you
don't
necessarily
have
to
be
all
in
on
one
in
order
to
use
Argo
rollouts
with
them.
This
just
provides
a
universal
way
of
doing
it,
and
so
this
is
actually
now
my
default
way
of
adding
rollout
support
is
to
not
care
at
all
about
what
my
underlying
Gateway
is
when
it
comes
to
configuring.
My
rollout,
because
I
can
just
do
these
in
a
generic
way.
A: Let me just show you a question that's very relevant; I think it's just a different angle on the same thing you're saying. Does this mean that Argo Rollouts support for providers will track one-to-one with Gateway API support?
B: Yeah, let me explain how that works. There is a Gateway API plugin that we're showing off today, and that provides the interface for Argo Rollouts to talk to Gateway API. I don't expect that there is any requirement for this plugin to be updated regularly to support what Gateway API is providing, because that API is essentially universalized.
B
It
shouldn't
actually
need
to
be
updated
very
often
to
support
different
things.
It
should
be
hey.
If
somebody
adds
Gateway
API
support
to
their
provider,
then
it
should
just
automatically
work
with
Argo
rollouts.
Does
that
make
sense.
C: Yeah, exactly. That's something we've been working on so much with Gateway API: a very broad set of conformance tests. Just in the past month or so, we've started to add some centralized reporting, so you can very clearly show not just conformance results but which implementations support which features of the API, because there's a pretty broad core that everyone supports, but then there are some extended features, like Dan was referring to, that
C
Not
everyone
is
able
to
support,
but
we're
going
to
have
a
centralized
way
of
showing
okay
if
I
need
this
feature.
These
are
the
implementations
that
have
support
for
it
and
maybe
I'll
just
jump
in
real
quickly
and
also
mention
that
SMI
bit
service
mesh
interface
that
the
team
behind
that
actually
or
teams
behind
that
have
decided
to
go
all
in
on
Gateway
API.
So
that
includes
Linker
d,
Kuma
Etc,
and
so
that's,
what's
called
gamma
it's
Gateway
API
for
mesh
management
and
administration,
something
like
that.
It's
a
fun
acronym!
C
But
anyway,
that's
that's!
Coming
in
the
upcoming
release,
and
so
Linker
D
and
other
meshes
will
be
supported
by
Gateway
API
as
well.
Natively,
which
is
really
exciting,
there's
been
some
great
contributions
from
the
Linker
D
team
to
to
get
this
going,
but.
B: Oh excellent, yeah, okay, that's awesome. And of all of these, I mean, we were talking about provider-specific implementations; so, for example, header-based routing: I think only 30% of these support header-based routing out of the box. That's one of those examples, right, Rob, where it's still a little bit provider-specific, because it's a little bit more of an advanced implementation.
C: Yeah, I'd have to look at the specifics. I think header-based routing is fairly broadly supported, but header modification is something that is less broadly supported. There are certainly examples like that of features in the API that are not going to be supportable everywhere.
B: Yeah, okay, all right, perfect. So let's go into the demo bit, and obviously keep the questions and comments coming. For the demo, I have something very simple to show you, and I think, as far as demos go, it's almost a little boring, because the whole point is that this stuff is just universal now. I showed you earlier what a rollout would look like using a different provider, but here you can see I've got an Argo Rollout.
B
It's deploying an Argo Rollouts demo, and under my strategy here I've got my canary, and I've specified which services are being interacted with: our canary service and our stable service. Under traffic routing, I'm specifying the plugin, argoproj-labs/gatewayAPI, and then I just pass in the route that we've created there and the namespace. That's all I really need to do, and it basically means that my traffic routing is going to go through the Gateway API.
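The rollout being described looks roughly like the following sketch, based on the argoproj-labs Gateway API plugin's configuration format; the service, route, and namespace names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  strategy:
    canary:
      canaryService: argo-rollouts-canary-service
      stableService: argo-rollouts-stable-service
      trafficRouting:
        plugins:
          argoproj-labs/gatewayAPI:
            httpRoute: argo-rollouts-http-route  # Gateway API route to manage
            namespace: default
      steps:
        - setWeight: 30
        - pause: {}   # wait for manual promotion
        - setWeight: 60
        - pause: {}
```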
B
You can see that it's currently paused. It's been going for five days, quite old, and this one's just using GKE's built-in load balancer, so I didn't install any additional provider; it's just baked in, and you can enable it pretty easily. If I look at the steps really quick... oh, here's that canary that I started earlier.
B
Let's see, I was going to show you: if I look at the example documentation here, you can see we have a couple of examples under the plugin. Under Google Cloud, all you need to do is enable the cluster to use the Gateway API standard channel. You can modify an existing cluster just by passing the argument gateway-api=standard, or you can do it when you're creating a new cluster, and it will...
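For GKE specifically, enabling the standard Gateway API channel looks roughly like this (cluster name and location are placeholders; check the GKE docs for the current syntax):

```shell
# Enable Gateway API support on an existing GKE cluster
gcloud container clusters update CLUSTER_NAME \
  --location=REGION \
  --gateway-api=standard

# Or enable it when creating a new cluster
gcloud container clusters create CLUSTER_NAME \
  --location=REGION \
  --gateway-api=standard
```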
B
It will enable Gateway API support within GKE, and of course it's going to be a little bit different for each provider, but that's just to set up the basics. The point is that Rollouts is universal and extendable, right? So with my canary that's going right here, I can actually promote this. Let's do an Argo Rollouts promote, let's promote our rollouts demo, and this basically sets the next traffic progression to happen. So you can see it's now progressing.
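The promotion shown here uses the Argo Rollouts kubectl plugin; roughly (the rollout name is a placeholder):

```shell
# Advance the paused rollout to its next canary step
kubectl argo rollouts promote rollouts-demo

# Watch the weight shift and pod counts as the rollout progresses
kubectl argo rollouts get rollout rollouts-demo --watch
```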
B
It's spun up four additional pods here, so it's getting more traffic; it's going up to that 60 percent range. These are all running, and it is once again paused. Of course we can stop this again, so let's do another promotion and we'll watch this progress again. You can see it's currently in a healthy status. That's great.
B
Let's run that again. So, yeah, you can see now it's actually shut down the old version, everything has moved to the new version, and we've actually completed our rollout. We're done with our rollout, and we've got 100% of traffic routed to the new version. So that's basically it. I mean, it's really that simple.
B
It's really that easy to use, and you can see the spec is universal, so I could be using this with any provider. If I was using Kong or Contour or any of these other providers, it's going to work just the same. So: easy peasy, lemon squeezy. It all links to Gateway API.
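The route the plugin manipulates is a plain HTTPRoute whose backendRefs point at the stable and canary Services; during a rollout, the plugin rewrites the weights. A sketch (the names are placeholders matching the earlier description):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: argo-rollouts-http-route
spec:
  parentRefs:
    - name: gateway    # whichever Gateway this route attaches to
  rules:
    - backendRefs:
        - name: argo-rollouts-stable-service
          port: 80
          weight: 100  # the plugin shifts weight toward the canary
        - name: argo-rollouts-canary-service
          port: 80
          weight: 0
```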
We extended our support in Argo Rollouts and provided an easier interface to use. So, as we pass into the questions section, I want to give a call to action for you to become a GitOps Champion.
B
If you're not familiar with this program, we have a thriving Discord community where we offer exclusive content, and to help you get started, we're providing one hundred percent off on GitOps certification. So if you have not done GitOps certification with Codefresh, we're now offering it for free to everybody who came to this. That's not something we do very often, but it's going to teach you how to do Argo CD and Argo Rollouts for canary releases.
B
How to use ApplicationSets, how to manage... and you can use the code APIme; just make sure to write API in all caps. You can scan that QR code, and it does expire; I think it says on the 19th, so it's supposed to expire today. I'll go double-check that code while we take questions, but in the meantime you can use it to get 100% off on GitOps certification. It is the most popular and fastest-growing certification for GitOps in the world today.
B
So I appreciate the chance to share that with you. Again, the code is APIme. With that, let's go into questions. You can pull my screen down, and maybe we can come back to that question that came in earlier about the philosophy for Gateway API. So, let's see, who was it that asked?
A
C
Yeah, that's a really good question. You know, we set out to do a few things.
C
First was defining the scope, right, trying to figure out what we were trying to solve for, and that was really the next generation of Ingress: trying to fix some of the problems that we had seen pretty broadly in the Ingress API. But then, two, and I kind of alluded to this, the idea that we wanted to provide a broad and portable core set of capabilities. The portability part is, one, ensuring that what we're defining can be broadly implemented, but then, two, really more than any Kubernetes API worked on before, emphasizing conformance, because we have a lot of Kubernetes APIs.
C
Today, NetworkPolicy comes to mind as another example of an API that doesn't have a built-in implementation, but there are lots of implementations out there, and there are some, you know, unexpected inconsistencies depending on the implementation you're using. We wanted to do everything we could to avoid that case, and so that meant starting very early on with conformance tests, defining from the outset what it means to implement this API.
C
And then, along with that, trying to do as much as we could to work with a broad set of implementations. You'll see that a lot of the people who have contributed to this API are people who have worked on previous Ingress or service mesh implementations or built their own custom APIs. So, for example, Istio, Contour, etc. were very, very involved in the development of this API, and we learned not just from the Ingress API but from their experience developing their own APIs.
A
C
Oh, that is a challenge every day. You know, we've gone through different challenges as we've evolved as an API, honestly.
C
The first challenge was just getting enough people interested to work on this together, to get something out that we all agreed on, and then to get it implemented. At that point, you know, we were just trying to get something going and working, and now we've gotten to the point where scope creep is a very, very real concern, because we keep getting more and more enhancement proposals that all independently make sense, but unfortunately there are too many to take on all at once.
C
So what we've done is focus on, you know, clearly defining what the scope of this API is, and that basically is: is it L7 load balancing, is it L4 load balancing, is it service mesh? And then, within that, agreeing with the group of maintainers on what our roadmap is and what the priorities are going forward. There's fairly little where we've said, "well, this doesn't belong in the API ever," but we have said, "well, our short-term roadmap for the next year needs to focus on these things."
C
So it is a constant challenge. And a general call-out: we can always use more help. Yay, open source! There is always more work to do than there are people to do it. So, for anyone on this call, if you're interested, come check us out and get involved. We can always use more contributors, and if you want to guide the next generation of APIs for Ingress, mesh, load balancing, whatever, we'd love to have you.
A
It's good and important work. I have a question about the Rollouts plugin, and that is: you did a great job of showing the problem and the solution, and, chef's kiss, I'm all in, I bought in. But my question is, what about functionality that goes above and beyond the Gateway API spec? First of all, is there such a thing? And then, are there ways that plugins are limited versus...
B
Yeah, so for Argo CD there's not really a limitation on how the Rollouts plugin operates. The way that we've actually been scoping Argo Rollouts as a project is that we want more of these things to be plugin-based, because trying to maintain integrations with a hundred APIs is really hard. It's such a hard problem that somebody actually started an entire project just to work on it, called Gateway API. So we don't want to be doing that in the Argo project; we want Rob to do it. That's his job.
B
Now we're glad to pawn it off on him. So there's not really a limitation when it comes to how these rollouts operate within the context of Argo CD or Argo Rollouts. The limitations are only in regard to the spec. We mentioned earlier, for example, that if you wanted to do header modification, and I think that's the one Rob brought up, that's something that's not supported by everything. And I'm not sure exactly... actually, Rob, I think there's something in the Gateway API spec...
B
That's for working with underlying services. What is it? It's, yeah, it's like its own "extended" features, is that what it's called?
C
Yeah, you're completely right. So we have a concept of support levels in Gateway API. One is Core, and that means we expect every implementation to support the feature, so that would be, like, prefix path matching; everyone supports that.
C
On the other hand, header modification is something that would be Extended support. That's something we recognize not everyone is going to be able to support, but when they do, it's going to be done in this one way, and we have conformance tests to cover it. And kind of one more shout-out in that direction: we have this concept of supported features coming in an upcoming release, which will allow every Gateway implementation to publish via GatewayClass status, just in status: hey,
C
these are the features that this GatewayClass supports. We're hoping to take that to the next step and actually provide standardized warnings, and maybe even prevent you from using configuration that doesn't work with the implementation you're using. So, again, there's ongoing work to try to surface that. But right now we're trying to balance having a good set of features while also recognizing that not every implementation can support every feature.
A
But smart, very smart; y'all are so impressive. Jesse has another quick question: are the conformance tests available? Are they public? Yes?
C
Yeah, great question: yes. So if you go to the Gateway API repo, there's an aptly named conformance directory. I'm happy to explain how they work a little bit more; we don't have as much documentation around that as we should. Another thing I should call out is that we're in sig-network-gateway-api on Slack, so the Kubernetes Slack, and there's a great group of people there, and we're always happy to chime in and answer
C
questions like this. And yeah, we'd love to talk about conformance tests. We've got a few others in progress, but honestly I think this is the most extensive set of conformance tests I've seen for any Kubernetes API.
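For reference, the conformance suite lives in the gateway-api repo and is run with go test against a cluster where your implementation is installed; roughly (the exact flags vary by release, so check the conformance README):

```shell
git clone https://github.com/kubernetes-sigs/gateway-api
cd gateway-api
go test ./conformance -run TestConformance \
  -args --gateway-class=my-gateway-class
```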
B
Yeah, I don't think so; there's not really any advantage to having them merged as a project. In fact, we see it as a huge advantage that they're separate and can be used independently. We have users, for example Salesforce, who use Argo Rollouts to handle basically all of their canary and progressive delivery everywhere, but they don't necessarily use Argo CD everywhere. So, being able to use them independently...
B
We view that as a huge advantage, and the way they're even architected: Argo CD can be installed in one cluster but then be connected to a hundred different clusters, so you're deploying and managing updates to all these different clusters. Argo Rollouts must always be installed on every cluster where it's going to be used, because it's part of the internal, you know, routing and management of Kubernetes itself.
B
So you always need to install Argo Rollouts anyway in order to do progressive delivery, even if you're not using Argo CD, or even if you're using Argo CD from some sort of central management cluster. So there's not really any reason to merge them and version them together. They operate independently, and we view that as a huge advantage.
A
I think this says it's the last question from the audience: for those of us who are really excited about Gateway API and want to get started, what's the best way? What's a good way? How do we implement Gateway API in our clusters?
C
So again, I would definitely recommend joining sig-network-gateway-api on Slack, because there's a great group of people there who have already implemented the API and can provide tips. Also, on our website, which has been linked previously, there is an implementers' guide, so that's also a starting point. But because Gateway API is open source, and almost all implementations of Gateway API are also open source, if it were me, I'd take a look at some of the implementations of Gateway API
C
that already exist. So we have implementations of Gateway API that translate to cloud load balancers, which would be GKE, and we have ones that translate to Envoy, HAProxy, NGINX, lots of underlying data planes. So, depending on what you want to actually implement it with, there's probably already an example for you. I'd also recommend looking at previous work and maybe just contributing there, depending on your use case.
B
Yeah, I think, if he's asking just, hey, how do I use this with my cluster: if you're using EKS, I don't think you actually need to do any enablement to enable support for Gateway API with EKS ALB load balancers; I think it's just enabled by default. For GKE,
B
you do need to enable a flag that basically just says gateway-api=standard, so you just have to look up specifically how to enable it for your cluster or your specific gateway. But most Gateway API providers don't need any special flags. If you're using, you know, Gloo Mesh, I think you can just start using it with Argo Rollouts and you don't need to do anything special; it's just enabled by default. But you'll have to look up the specifics for your provider to see if there's something you need to do.
C
Yeah, I think right now the support on AWS is limited to their VPC Lattice product, and then you can install other implementations, say Envoy Gateway, Contour, etc., on EKS if you want to use it there.
A
If someone, like this person in chat, knows that they have non-standard protocols and dynamic ports, is Gateway API still for them?
C
It depends on how broadly, you know, how invested you want to be in this. Certainly, depending on how non-standard these are, there may be other people with your same use case, and if there are, it's definitely worth bringing that to the Gateway API community and seeing if you can open source it. At the same time, we've seen others develop their own custom route types.
C
So there's somebody I know who's working on an IPRoute right now, for example, and the beauty of that is that you plug into the rest of the Gateway API ecosystem. You have all the infrastructure around it, but you can build your own routing mechanism that, you know, may work for your custom protocol or use case. So I'm definitely interested in whatever that use case is, but we've tried to make the API pluggable, so you can replace different components as necessary.
A
One last comment from Jessie, just saying that "Gateway API" isn't very googleable, but they're not really sure how to solve it.
A
Of trying to figure out whether... yeah, what... yeah.
C
Worse? I don't know, but since we already did the rename once, I think it's just never going to be renamed again. So, for better or...
A
And so, just one more time, we have the website on the bottom of the screen right now, and there's also the Slack workspace. If you can't Google it, you can ask the folks in the Gateway API channel in the Kubernetes Slack workspace to help guide you in terms of implementing Gateway API. And I think that's it. Is there anything that you all would like to say in closing before I do my closing statements?
C
Thanks so much for the time. It's been great to present and talk; I always love to talk about Gateway API. Thanks for the great questions and discussion, and I'm so excited to see that Argo integration here.
A
This presentation was stellar. I appreciate you both so, so much, and I appreciate everyone who tuned in today and watched live and participated in chat. Also, those of you who watch the recording, thank you so, so much. It was great to have our new friends Dan Garfield and Rob Scott here. Thank you again for sharing your time and expertise, especially about Gateway API and the Gateway API plugin for Argo Rollouts, here at Cloud Native Live. We bring you the latest in cloud native code on Tuesdays and Wednesdays at noon.