From YouTube: What is new in Istio 1.10?
Description
Watch a replay of our "What's new in Istio 1.10?" webinar from May 25th, where Samuel Naser, Istio 1.10 release manager from Google, and Lin Sun, Istio TOC member from Solo.io, discuss what is new in Istio 1.10 and show Istio 1.10 in action. Various improvements and new features are discussed.
A: Before we get into what is new, let me quickly introduce myself. I'm the director of open source at Solo.io. I've been contributing to Istio pretty much since the beginning of the project, so it's been four years. I have a lot of patents to my name, and I also wrote a book, Istio Explained. Sam, could you introduce yourself?
B: Yeah, I'm Sam. I'm a software engineer at Google, and I pretty much work on all things Istio related. I was a release manager for 1.10.
A: I want to take a minute to thank the entire Istio community for making 1.10 the best release, in my opinion, out of all the Istio releases. Sam, could you take everyone through the upgrade improvements in 1.10?
B: Yeah, sure. Upgrades have been a big pain point in Istio up to now, and we've heard the feedback and have been investing a lot of resources into making them better. The first thing is that we now support jumping straight from 1.8 to 1.10. Previously, if you wanted to upgrade from 1.8 to 1.10, you pretty much had to go 1.8, then 1.9, then 1.10, and now we're testing these direct upgrades.
B: Then we have the experimental pre-check command. This is another really nice thing: it'll help you upgrade without being too worried that your mesh is going to break for a known reason. As a project, we try to avoid changes that introduce backwards incompatibility, but every now and then we have to make some kind of breaking change, or we catch a regression.
B
So
with
this
command,
if
you
run
it
before
running
your
upgrade,
you
can
be
sure
that
you
catch
known
issues
before
you
know
losing
traffic
in
your
mesh
or
something
bad
happening
after
the
upgrade.
So.
B
Change
actually
I'll
we'll
be
discussing
later
that
can
be
caught
with
this
command
so
yeah,
and
then
we
also
have
the
revision
tag
command.
So
revision
tags
are
pretty
cool.
It's
it's
a
ux
improvement
for
revision
based
upgrades,
so
pretty.
B: We have in-place upgrades and revision-based upgrades, and for revision-based upgrades there used to be a lot of manual relabeling. We'll actually talk about this more on the next slide.
B: The way a revision-based upgrade works, or used to work, is that you install two control planes side by side instead of just rolling one of them over, and to decide which revision should inject workloads for a given namespace, you would label the namespace with the istio.io/rev label.
B
So
now
you
can
create
what's
called
a
revision
tag
and
it
gives
you
like
a
stable
identifier
to
label
the
namespace
with,
and
then
you
can,
instead
of
changing
the
label.
You
just
change
for
the
tag,
points
and
magically.
All
of
your
namespaces
are
pointed
to
the
correct
revision
and
yeah.
It
gives
you
a
lot
more
control
and
makes
the
canary
upgrade
process
just
a
lot
easier
in
general.
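The workflow Sam describes can be sketched roughly like this. The tag name `prod-stable` and the revision names are illustrative, and in 1.10 the tag command sat under the experimental namespace in some builds (`istioctl x revision tag`), so check `istioctl tag --help` on your version:

```shell
# Label the namespace once with a stable tag instead of a concrete revision.
kubectl label namespace default istio.io/rev=prod-stable

# Point the tag at the currently installed control-plane revision.
istioctl tag set prod-stable --revision 1-8-3

# After installing 1.10 side by side as revision 1-10-0, retarget the tag;
# every namespace labeled with the tag now injects 1.10 proxies.
istioctl tag set prod-stable --revision 1-10-0 --overwrite

# Restart workloads so they pick up the new proxy version.
kubectl rollout restart deployment -n default
```

The point of the indirection is that the namespace labels never change again; only the tag's target moves during a canary upgrade.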
A: Yeah, this is a great feature of Istio 1.10. I remember, Sam, you were leading this work in the community, and many of us participating in the community were providing feedback to you. I think it's tremendously useful for users to be able to just use a tag instead of changing the revision label. And thanks to Christian for this nice picture, too.
B: The injection documentation used to have a table with, like, I don't know, 10 rows and 10 columns that encode the logic of whether a sidecar gets injected. We've improved this, and I think it's a lot more straightforward now. Pretty much, we've added revision labels, so the istio.io/rev label, at the pod level. Now you can decide on a per-pod basis which revision should inject, and that's pretty cool.
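A minimal sketch of the pod-level control described above; the deployment, namespace, and revision names are hypothetical:

```shell
# Set istio.io/rev on the pod template so this one workload is injected
# by the 1-10-0 revision, regardless of the namespace-level label.
kubectl patch deployment web-api -n istioinaction --type merge -p '
spec:
  template:
    metadata:
      labels:
        istio.io/rev: 1-10-0
'
```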
B: This is a really, really cool feature: gateway injection. I think technically the implementation actually landed in 1.9, but now in 1.10 we have documentation and use cases showing you how you can take advantage of it. Pretty much, if you've ever tried to upgrade Istio, you know that one of the scariest parts is the gateway upgrade, because the gateway deployment in Istio is pretty tightly coupled to the installation, and I think, for a lot of users, it's kind of opaque what happens when you upgrade: how does that gateway get upgraded? With gateway injection, we actually just inject the gateways with the proxy, the same way a normal sidecar gets injected, so you can leverage revision-based upgrade methods with gateways.
B: You can completely customize your gateway deployments in any way you could customize a normal Kubernetes deployment, without going through the operator APIs, and I think that's a huge improvement.
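A rough sketch of a gateway deployment that relies on injection. The names and revision are illustrative, and the full manifest on istio.io also includes a Service and RBAC, so treat this as the shape of the idea rather than a complete install:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # Render the gateway proxy template instead of the sidecar one.
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        sidecar.istio.io/inject: "true"
        # Choose which control-plane revision injects this gateway.
        istio.io/rev: 1-10-0
    spec:
      containers:
      - name: istio-proxy
        image: auto   # the injector replaces this with the proxy image
EOF
```

Because the gateway is now just an injected workload, upgrading it is the same operation as upgrading any other deployment in the mesh.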
B: Yeah, sure. This diagram actually shows the current state of things with the injection label changes. You first look at the namespace labels; depending on whether there's no label, an enabled label, or a disabled label, you follow the logic. Then you look at the per-pod labels, and after that you look at the auto-injection default behavior. We actually have a similar kind of flow for revision labels, and it works pretty much the same.
A: That's awesome. It's great that users have this level of control. With that, we're going to start a demo.
A: I have a Kubernetes cluster in Google Cloud. This is my cluster, and as you can see, I have a bunch of pods and deployments and services running already. What I'm going to do next is, let me go ahead, if I can remember which scripts to run. By the way, this is a live demo.
A: It's just that we're using scripts to run it. So hopefully the demo gods are with us today.
A: What I just did was download Istio 1.10, and we are extracting it right now. Let's go ahead and check our istioctl version. As you can see, our istioctl is 1.10, but our control plane and data plane are still 1.8.3.
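The steps in the demo so far look roughly like this; the exact patch version is an assumption:

```shell
# Download and extract Istio 1.10, then put istioctl on the PATH.
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.10.0 sh -
export PATH="$PWD/istio-1.10.0/bin:$PATH"

# Prints the client version (1.10.0) alongside the control-plane and
# data-plane versions still running in the cluster (1.8.3 here).
istioctl version
```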
A: Sam talked about istioctl x precheck; x stands for experimental. This is a recommended command that we want you to run before you upgrade to Istio 1.10, to make sure there's no issue flagged. istioctl analyze is another command we also encourage you to run, just to analyze and see if there are any resources in your cluster that might be deprecated in 1.10.
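The two checks mentioned here, as run from a shell:

```shell
# Flags known issues that could break the mesh after upgrading.
istioctl x precheck

# Analyzes live configuration for problems and deprecated resources.
istioctl analyze --all-namespaces
```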
A: This is another new thing: we would love to have you tell us about the install and upgrade experience by filling out the survey for us. Now, let's go ahead and check if my 1.10 pods are running. As you can see, they are running. And let's check if there are any clients connected to Istio 1.10. Apparently, no.
A: The second demo I want to quickly show is how you upgrade the data plane to 1.10, because what we just did was upgrade the control plane to 1.10, and that doesn't mean your data plane is actually also pointing to the new control plane. So let's check our overall data plane status. As you can see, I have five proxies.
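The data-plane check is a single command:

```shell
# Lists every proxy, the control-plane instance (and version) it is
# connected to, and whether its configuration is in sync.
istioctl proxy-status
```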
A: Now, let's check out the labels that Sam talked about, istio.io/rev. As you can see, the istioinaction namespace, which runs the web-api service, is tagged with 1-8-3.
A: So what I'm going to do next is actually to open up Kiali.
A: Kiali requires a token strategy to log in, so what we're going to do is figure out how to log into Kiali. My script is supposed to print out the Kiali token for me, but it didn't, so let me go ahead.
A: There, okay, so I'm looking at the Kiali dashboard. It looks like my Kiali dashboard didn't come up properly, so I guess we won't be able to see it in Kiali, but let's continue to roll out the deployment. What we're doing is continuing with the web-api canary. As you can see, the canaries are deleted now; we did roll out the web-api successfully, with the canary coming in and taking some of the traffic, and then finishing after the canary rollout.
A: Now, what we want to do next is upgrade the Istio ingress gateway. As Sam talked about earlier, it's best practice to upgrade the Istio ingress gateway separately from the control plane. This is because when the control plane is upgraded, it could also impact the data plane, and you really want to sequence them out: you don't want the control plane and data plane upgraded together, because that could cause you more of an outage than if you sequence them. It really helps you minimize the downtime.
A: Now, if you do a proxy status on the Istio ingress gateway, you will see we're running 1.10 on the ingress gateway, and if we get the load balancer IP of the ingress gateway, we will be able to send some traffic through it using this command.
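A sketch of that last step; the service namespace and Host header are placeholders for whatever the demo app actually serves:

```shell
# Look up the external IP of the ingress gateway's load balancer...
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# ...and send a test request through the upgraded gateway.
curl -s "http://$GATEWAY_IP/" -H "Host: webapi.example.com"
```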
B: We also have some networking changes. Pretty much, the way it was before: when traffic makes it to the pod that's running Istio, it's going to get forwarded to your application's localhost port, and this has some problems.
B
Because,
first
of
all,
localhost
is
meant
to
be
like
a
secure
boundary
and
exposing
localhost
kind
of
externally
is
not
considered
best
practices,
and
also
you
had
to
kind
of
change
your
application
to
listen
on
either
all
interfaces
or
just
localhost
to
make
things
work
with
this
deal.
B: So we try to make it so that things work right out of the box; your applications shouldn't have to be modified, to the extent that that's possible. In 1.10 it is now the case that, when traffic comes into your application, it'll actually be forwarded to the application on the pod IP.
B: So you can listen like you would normally listen, and this is actually really cool for StatefulSets and for stateful applications, where normally you would be listening on the actual pod IP. Things like ZooKeeper would break before, and now things like ZooKeeper work right out of the box. There's a really great blog post discussing this change.
A: Yeah, totally, I'm super excited about this feature too. Definitely check out our blog on how this change really impacts you from a user perspective. If you follow that blog, you will be able to learn how Istio forwarded inbound traffic prior to 1.10, but most importantly, how 1.10 does this without any configuration changes on your side. That's really exciting. And also remember:
A: As Sam talked about, definitely use the pre-check command to help you identify whether this is going to break any of your existing applications in the mesh, because this is a big change. It really helps Istio match the pod networking behavior of Kubernetes, but run the pre-check command just to make sure everything is good.
B: Yeah, sure. Discovery selectors are another really cool feature that made it into 1.10. It's kind of a performance-oriented feature: the idea is, you can actually choose which namespaces Istio should watch for configuration, like services or endpoints. By default, and it might be surprising to some people, when you install Istio, it'll watch every resource in every namespace for its internal registry, and this might not be desired.
B: You might have some namespaces that just don't need to be part of the mesh, or namespaces with a lot of churn, like short-running jobs and short-lived things like that. So this provides a way to say, hey, only watch for configuration from the namespaces that match these given labels. If we go to the next slide, it talks about the relationship between this and sidecars.

B: The Sidecar resource is another concept that can help with performance: istiod, the control plane, still has all of the configuration and is aware of all of it, but you can scope down what actually gets sent to each proxy workload. Discovery selectors act on the layer above that, where Istio isn't even aware of the configuration if you configure it not to listen. Do you have anything to add to that, Lin? I know Solo did a lot of work on this.
A: This is still my Kubernetes cluster in Google Cloud. As you can see, we're looking at the endpoints of my web-api service and deployment in the istioinaction namespace. It's a lot, right, Sam? And let's take a look at the routes. It's also a lot, right? I mean, I only have maybe three or four services in the mesh, honestly.
A: This is way more than what I would expect. So now the question is: how can I configure my Istio control plane to see only what's necessary? What we're doing is labeling the injection-enabled namespace with istio-discovery enabled. Now, the question I have is: do we also need to label the istio-ingress namespace? The reason is, I'm actually deploying my Istio ingress gateway in the istio-ingress namespace.
A: By the way, this is the recommended approach: deploy your Istio ingress gateway in a separate namespace from your Istio control plane. So in this case the answer is yes, you do need to label that as well; otherwise your traffic wouldn't be able to go through from the ingress to your service.
A: Now, let's go ahead and configure the Istio control plane with discovery selectors. The discovery selector is a very flexible label schema; we're doing matchLabels in this case. Now, the question I have for everyone is: can we do this step before labeling the namespaces? The answer is no, you don't want to do that.
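A sketch of both steps in order; the label key and value follow the demo narration but should be treated as illustrative:

```shell
# 1. Label the namespaces the control plane should keep watching FIRST.
kubectl label namespace istioinaction istio-discovery=enabled
kubectl label namespace istio-ingress istio-discovery=enabled

# 2. Then restrict istiod to matching namespaces via discoverySelectors.
istioctl install -y -f - <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    discoverySelectors:
    - matchLabels:
        istio-discovery: enabled
EOF
```

Doing the labeling after installing the selector would leave istiod blind to those namespaces in the meantime, which is exactly the failure mode Lin warns about next.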
A: You definitely want to label your namespaces first, because the moment you actually run istioctl to install this configuration, you are potentially going to break your services by changing what the control plane watches from the Kubernetes API server. So you want to make sure your labels are correct, or at least tested thoroughly, before you enable this, because it could change a lot of things in your Istio control plane.
A: This is extremely helpful, because if you run into problems with your configuration and you need another person or an expert to take a look, you could produce the output in a JSON file for everything related to this particular web service, and then provide that to your expert team to take a look. This is a really interesting UI provided by one of my co-workers, Denis.
A: The other interesting thing we added is being able to list my revisions, to see which control planes are part of a revision, and to describe a particular revision to see its details. Lastly, I just want to quickly show you how many clients are connected to my istiod.
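The revision commands mentioned here; in 1.10 they sat under the experimental (`x`) namespace, and the revision name is hypothetical:

```shell
# List the control-plane revisions installed in the cluster.
istioctl x revision list

# Describe one revision: its components, tags, and attached workloads.
istioctl x revision describe 1-10-0
```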
A: Yeah, totally. So I would like to pass it back to you, Sam, to take us through some of the debugging improvements.
B: This is just a bunch of changes, so I'll kind of breeze through here. We have a lot of improvements to external authorization. ext_authz is a feature that landed in either 1.9 or 1.8, but it's continually being improved.
B: Pretty much, it lets you designate an external place where authorization decisions can be made, and these improvements give you more control over what kinds of headers get sent in the authorization requests, timeouts, that kind of thing. Super useful. We've also added the ability to dry-run authorization policies.
B: This is another really cool one. Instead of going straight in and creating an authorization policy that affects traffic, you can label it as a dry-run policy, and when that policy is triggered, you can actually just see it in the Envoy logs rather than it denying traffic. This is really useful for evaluating your authorization policies in practice.
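A sketch of a dry-run policy. The `istio.io/dry-run` annotation is the documented switch, while the policy name, namespace, and path are made up for illustration:

```shell
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-admin-paths
  namespace: istioinaction
  annotations:
    # Matches are only logged by Envoy; traffic is never denied.
    istio.io/dry-run: "true"
spec:
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/admin/*"]
EOF
```

Once the logs confirm the policy matches only what you intended, removing the annotation turns enforcement on.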
B: The revision commands also show what components are enabled for each of those revisions, that kind of thing. Super nice to have as revisions become the recommended upgrade method and as they make it to a more and more stable place. We've also added an option to dump all proxy config.
B: Before, you had to specify if you just wanted endpoints or clusters, and you kind of had to aggregate them yourself. Now you can just dump it all in one go, and a lot of the time that's exactly what you want.
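The aggregated dump is one command; the workload name is a placeholder:

```shell
# Dump clusters, listeners, routes, endpoints, and secrets for a proxy
# in a single call instead of one section at a time.
istioctl proxy-config all deploy/web-api -n istioinaction -o json
```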
B: So really useful. I guess you could take this output and put it right into the Envoy UI that Lin showed and immediately get some insight. And then the internal debug command, which is really cool: istiod, the control plane, has a debug endpoint by default, and you can go to it now if you port-forward to your istiod instance and hit 8080.
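Both access paths, sketched. `syncz` is one example endpoint, and the `internal-debug` subcommand shown here is an assumption based on the discussion:

```shell
# Raw access: port-forward to istiod and hit a debug endpoint directly.
kubectl -n istio-system port-forward deploy/istiod 8080:8080 &
curl -s localhost:8080/debug/syncz

# Authenticated access through istioctl, added for exactly this purpose.
istioctl x internal-debug syncz
```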
B: This pretty much gives us a secure way to access those debug endpoints. Again, this is kind of internal implementation stuff, but sometimes it's really useful if you're trying to drill in deep and figure out what's going on with your control plane. In addition, we have the experimental telemetry API that landed in 1.10. Lin and I are actually trying to figure out where the documentation on this is, but it's a really cool feature.
B: It's going to be pretty much the way to configure telemetry going forward. Yeah, do you have anything to add?
A: Yeah, I think the API is in the api repository. It's only for metrics, though; the tracing and logging APIs are not available yet, but the metrics API is experimental. We definitely want you to take a look and let us know any feedback you may have. If you go to the telemetry one, you'll be able to see the API right there. We're going to work with the Istio community to make sure the documentation lands on istio.io.
A: With that, this is great. Thank you, Sam, for taking us through all the different features. I think we're now open for questions and answers. One thing I want to mention is that we do have a poll; I probably should have said it earlier, but just let us know how long you have been using Istio and what version of Istio you're running today, so we can get some idea of that. And if you have any questions, let us know through the chat.
A: I'm not sure. I don't think you can talk, though, but you should be able to ask us anything in the chat.
B: Assuming you're talking about the revision tag stuff, you should be able to do that just fine if you're using the operator deployment. Behind the scenes, it actually just uses a mutating webhook, and if that webhook is created through the tag command, it shouldn't matter how Istio is deployed, so it should just work.
A: Yeah, I agree. Just making sure, Sam: I do think he's asking about Istio deployed with the operator controller, where there's an Istio operator that watches your IstioOperator YAML file. I don't think the tag actually needs any change on the control plane other than just having 1.10, right? It doesn't have a special flag you have to enable in 1.10.
A: Yeah, exactly, so it should definitely work. Agreed. I think Klaus was asking about sharing the slides: absolutely we will, and we will also make the recording available. And thanks for saying that Istio 1.10 makes the StatefulSet nightmare disappear; totally agree. I was so frustrated a year or two ago: the moment I ran ZooKeeper with Istio, my ZooKeeper died.
A: So thanks for that feedback, we really appreciate it. Oh, and also your PostgreSQL cluster too; makes sense, that's another StatefulSet. Now, Christian also had a question: is the discovery selector only available for the Istio operator, or can it be used with istioctl? If you remember my demo, I actually used it with istioctl, so the answer is yes, you can definitely use it with istioctl. The only thing I showed with the Istio operator was the YAML file.
A: So I do have the IstioOperator YAML for the control plane with the discovery selectors, and you can take this YAML file and deploy it with istioctl, or you can deploy it with the server-side operator controller, so either approach should work. I believe Helm should work too; I just haven't personally tested it.
A: Okay, let us know if you have any other questions. If not, I'm actually going to ask Sam a question, if that's okay, Sam.
A: I want to ask you this question. Remember how I showed this demo (by the way, this is the script I was just running) where I upgrade my Istio ingress gateway? I can show you what it looks like: it's basically pretty much an empty profile with the ingress gateway. So my question to you, Sam, is: in this case I'm using revision 1-10-0. Can I change that to the revision tag canary, given that I used the tag command to point canary at it?
B: I don't know. I don't believe you can use the revision tag there. And that's kind of an unfortunate thing: there are some commands where we haven't made it so that you can use the tag everywhere you can use a revision. However, when you're using gateway injection to install the gateway, tags will work fine. But I believe here the tag would not work, because it just gets pointed straight to that revision for the control plane when it's trying to reach out.
A: Yeah, you're absolutely right. I actually tried it, and it doesn't work. So this is an interesting thing: with gateway injection coming in Istio 1.10, it does make sense to just use the tag, treating the Istio gateway as a normal workload, so we can use a tag there. The reason I didn't show Istio gateway injection in my demo is that I haven't been able to get it working, so the documentation PR has not merged to istio.io.
A: We're looking forward to getting that working for you all really soon, so you can explore it and provide feedback, because we really love that feature and we want to enable it as the default, maybe a few releases down the road, based on your feedback. So definitely try it after the documentation PR merges, once I've gotten it working, and provide us your feedback.
A: Okay, looks like Christian is typing, so let's see if he has another question. Let us know if you have any questions; if not, we'll probably just close this out in a minute or so.
A: With that, I guess we're going to close out this webinar. Thank you all for joining us, and thank you, Sam, again for joining us on this webinar. We will be back in touch with the slides and recording. Thank you all.