From YouTube: Debugging Istio within the Department of Defense
Description
#IstioCon2021
Presented at IstioCon 2021 by Nick Nellis & Adam Toy.
Since the release of Istio 1.0, a major development effort has been spent on making it easier to use. Whether you are already running Istio in production or trying it out for the first time, it's important that you know about the latest and greatest when it comes to debugging and maintaining Istio.
Adam Toy from the Department of Defense will walk you through how the USAF's Platform One program is utilizing Istio to establish a zero-trust PaaS infrastructure, as well as some of the new things Istio has to offer in terms of debugging and maintainability that he has learned along the way.
Nick: Hey everyone, I'm Nick Nellis, a software engineer at Tetrate. I'm also an Istio contributor, and I do a lot of Istio support; I'm in Slack and on discuss.istio.io. Today I'm here with Adam Toy from Rancher Federal. We're both currently contractors for the Department of Defense, working on their Platform One initiative, which we'll talk about in a little bit, and we're going to go through some of the debugging that you can do now with Istio.
Nick: Over the last year a lot of support has been added, so we're going to talk about that and do a pretty cool demo for you.
Nick: Two years ago, companies that were trying Istio were largely ahead of the curve, and they had different expectations of it. They knew they might have to hack on it a little bit to get it to work in their environment. Now, shifting to 2021, we're seeing Istio as a much more mainstream product, and so the expectations have shifted to Istio being more reliable,
more maintainable, and easier to use. You can really see from the highlights of 2020 that that's reflected in the maturity of Istio: a lot of work has gone into lifecycle management, debugging, and just general ease of adoption. So I'm going to hand it over to Adam to talk about Platform One and then get started with this demo.
Adam: I have been working with the Department of Defense and the intelligence community for around the past seven years. Mostly I've been helping them with their digital modernization and their transition over to Kubernetes.
Adam: There are a lot of security controls and constraints that pretty much every application needs to adhere to before it's approved to go into those production environments. And even though we're deploying to satellites and fighter jets, we know that our technical challenges aren't unique to us; I'm sure most of you have run into very similar challenges in the environments you're working with. So what Platform One has done is compile a number of cloud-based DevSecOps build tools, including CI/CD pipelines, real-time monitoring, log aggregation, and even a zero-trust infrastructure, providing a secure environment that can be inherited by all these mission application teams to enable them to deliver code much faster and more consistently than they ever have before. At a very high level,
it basically offloads a lot of that initial heavy lifting so they can focus on their specific use case. One of the key elements of Platform One is known as Big Bang. Big Bang is a fully free and open, declarative, continuous delivery tool for deploying core DoD hardened and approved packages into a Kubernetes environment. And just for context, when I say hardened, what I mean is that these images are actually built off of other hardened and approved base images, and then they themselves are run through the platform.
Adam: It also buys very heavily into the GitOps approach of managing all of your Kubernetes resources, and at this point we should all be familiar with that. Big Bang itself is backed by Flux, which, if you don't know, is a GitOps engine, but it also supports deploying Argo CD if that's your preference for managing your downstream applications.
Adam: So what you're seeing here is just one implementation of Big Bang. This is what we're using internally at Platform One for supporting mission applications, and it's what we call Party Bus. I do want to point out that this is in no way comprehensive of all of Big Bang's capabilities, and things are getting added constantly, so I would highly recommend that you check out the Git repository and the getting started guide. Both of those are at the top of the slide, and you'll see them at the end of the presentation. But just to walk through it:
Adam: We are using a lot of the Istio core components. We're using ingress gateways for ingress into our cluster, and we also deploy VirtualServices at the application namespace level for fine-grained routing to specific workloads, depending on application requirements. And just to establish that zero-trust model I was referring to before, we also deploy Istio authorization policies, as well as Kubernetes network policies, to, for the most part, only allow intra-namespace traffic.
Adam: The applications that we mission-support are, for the most part, scoped at the namespace level, and they usually only need to talk to other workloads in the same namespace. If there are exceptions to that, for instance a centralized auth service, MinIO, or maybe a common API that exists in some other namespace, we will take a more customized approach to those auth policies and network policies, but we still analyze them to make sure that only the traffic that should be getting through is getting through.
Adam: Also, Big Bang out of the box provides an entire monitoring stack for you. That includes Prometheus, Grafana, Kiali, and Jaeger. In addition to that, it provides log aggregation with EFK, that is, Elasticsearch, Fluent Bit, and Kibana, which is just great for aggregating all of your pod logs and then being able to do analysis on those logs later.
Adam: Another really cool thing we're doing is offloading the need for application teams, and more importantly application code, to implement authentication and authorization. What we're doing is implementing an authz EnvoyFilter with a tool called authservice, which we'll talk about a little bit later in the demo. Basically, it's offloading all of the JWT validation, as well as doing the redirects to our centralized SSO.
Adam: For us, our entire cluster is deployed onto AWS. That includes GovCloud, as well as the classified environments. We like to enable our application teams to use some of the other AWS resources like RDS and S3, and we want those to appear as part of our mesh, so we also define ServiceEntries, just so Istio can properly manage the traffic and protocols to and from them. So, like I said, Big Bang is an awesome tool. It's completely free and open.
Adam: It can basically get you up and running with an entire secure DevSecOps build platform with, hopefully, minimal effort from you, so I would highly recommend you check it out. But we are here to talk about troubleshooting Istio, so let's go ahead and get into it. Just to help us walk through things, we did develop a very simple demo application, and what you're seeing here is just a very high-level overview of how the workflow should work for this application. Just to walk through that:
Adam: When a user makes a request, if they haven't logged into SSO before, or their token is invalid or expired, they should automatically be redirected to Keycloak, which is the centralized SSO OIDC provider that we use. When they log into Keycloak, their valid auth token should be set as a JWT header in that request, and then they should be redirected back to the application.
Adam: One other note: the back-end API is actually going to take that JWT authentication token from Keycloak, extract the user's ID from it, and then use that to pull the user's location from a MongoDB backend. And since this is IstioCon, here's a high-level breakdown of all the Kubernetes and Istio components we're using. We're going to use this as a basis of understanding as we walk through the application.
Adam: Also, I do eat my own dog food, so this entire application is deployed into a k3s cluster running in my home lab, and it's also deployed on top of Big Bang. Just to prove that, I'm going to go over to my terminal. Like I said, Big Bang is deployed entirely on top of Flux, so I'm going to use the Flux CLI and just run flux get helmreleases in the bigbang namespace.
Adam: So you can see here, like I said, all of the images being deployed via Big Bang are these hardened images. They're all coming from registry1.dso.mil. These have all gone through that Iron Bank pipeline; they're all approved and secured. These aren't your docker.io upstream images, which is really cool and really secure.
Adam: Looking at the demo application itself, you can see it's very simplistic. It's pretty much three deployments, a UI, a back end, and a database, plus Kubernetes services, and then a few Istio resources on top of that, which we'll talk about as we work through this. Let's go to our browser and just see how our application is performing. I'm going to navigate to welcome.istiocon.xyz and hit enter. So right off the bat you can see I'm getting a 404, an HTTP error. That's not ideal for any application.
Adam: Generally speaking, when I'm working with Istio and I see a 404 error, the first thing I want to confirm is that this is in fact an issue internal to our cluster, whether it be Kubernetes, Istio, or the app code itself, and not something external. A very quick and easy way to do that is to just pull up a JavaScript console. I'm going to refresh the page really quick, click on the request that was getting that 404, and go to the headers tab.
Adam: If I come down to my response headers, you'll see that there is a response header set of server: istio-envoy. What that's telling me is that the request did in fact get into our cluster. It reached either the ingress gateway or one of those Envoy sidecars, which set this header and sent the request back as a response.
Adam: istioctl ships with a number of troubleshooting capabilities, and one of those is called analyze. What analyze does is a live analysis of all of your Istio configurations, whether that's for an entire namespace, a workload, or even static files, and it'll print out any validation issues it finds along the way. Personally, for me, this ends up covering about 90% of the issues I encounter while I'm troubleshooting Istio, and it's always my first line of defense, so I'd highly recommend that you use it.
Adam: If you run into issues, run it first, just to make sure you're covering those bases. Anyway, we're just going to run this at the namespace level, so I'm going to run istioctl analyze -n welcome-app, and that'll take a second to run. OK, so you can see that we do have some issues here. Right now we're just going to focus on these first two, which end up being the same problem. What this is telling us is that we have a VirtualService named app-ingress in our welcome-app namespace that's referencing an Istio Gateway that doesn't exist: it's referencing istio-ingressgateway in the istio-system namespace. Just for clarity, and to make sure everyone understands where we are in the flow, I'm going to go back to my slides; we're right here. So, basically, the request is getting into our ingress gateway, but since that VirtualService is registered with an ingress gateway that doesn't exist, the gateway has nothing it can actually route the traffic to.
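The analyze step from the demo can be sketched like this; the namespace name comes from the demo, and the sample message is illustrative of the class of error analyze emits for a dangling gateway reference, not a verbatim capture:

```shell
# Live-analyze every Istio resource in the demo namespace.
# (Requires istioctl and cluster access; analyze can also be run
# against static files, e.g.: istioctl analyze my-manifests/)
istioctl analyze -n welcome-app

# Example of the kind of message printed for this problem:
#   Error [IST0101] (VirtualService welcome-app/app-ingress)
#   Referenced gateway not found: "istio-system/istio-ingressgateway"
```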
Adam: So for us this should be a pretty simple, quick fix. Let's take a look at all the gateways we have defined in our cluster; I'm just going to do kubectl get gateways -A. We have two. For the purposes of this demo, we're going to focus on the main gateway. That gateway is the default gateway that gets created by Big Bang, and that's the one our VirtualService should ultimately be using. So for a quick fix, I'm just going to edit my VirtualService, app-ingress, in my welcome-app namespace. Like I said, if we come down to the gateways section of our spec, you can see that this is referencing that istio-ingressgateway. So I'm just going to quickly update this to main and save it. You can see it was edited, and I'm just going to arrow up and analyze one more time.
Adam: We did make it to the UI, but unfortunately for us, we can't get to the back end; we're getting a 503. So, for context of where we're at in the application: we're here now. We made it through our VirtualService, and it's successfully routing to our UI, but unfortunately there's still some underlying issue that keeps it from getting to the back-end deployment. So I'm going to come back to my terminal, and let's take a look at that last remaining issue being displayed by analyze.
Adam: Naming standards within Istio are very important, especially in the context of Kubernetes services. If you don't specify a port name, Istio doesn't know what protocol to use, and that can cause a lot of other issues. So let's go ahead and take a look at this service and see what's going on. I'm going to edit my backend service and come to the bottom. Let's look at the ports section: you can see here that there are three fields listed, port, protocol, and targetPort, but there's no name field.
Adam: So I'm just going to quickly come in here and add name: http. For me, I know this is HTTP traffic and I know that's what this name needs to be, but Istio does support a lot of other protocols. They're all very well documented, and a quick Google search should show you all of those if you're curious.
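Istio's protocol selection keys off the Kubernetes Service port name (a name of http, or a prefix like http-api, marks the traffic as HTTP; newer versions can also use the appProtocol field). A sketch of the corrected Service, with the demo's field values assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: welcome-app
spec:
  selector:
    app: backend
  ports:
    - name: http        # the missing field; tells Istio this is HTTP traffic
      port: 80          # illustrative service port
      protocol: TCP     # the L4 protocol; Istio reads the name for L7
      targetPort: 8080  # illustrative container port
```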
Adam: But since this is HTTP, I'm just going to name it http and save it. You can see that it was edited, and I'm going to analyze one more time. Great, so we've gotten past all of our validation issues. Everything should be good from that side of things. Let's come back to our application and see what's going on. I'm going to click OK and do one more refresh. So, we made it a little bit further.
Adam: Instead of that 503 we were seeing before, we're now seeing a 403. Just for context again of where we're at: this is where we are. We haven't made it much further; we're still stuck at that back-end deployment. Generally speaking, if I'm working with Istio and I see a 403 error, that's an HTTP Forbidden.
Adam: Usually my first assumption is that it's something related to Istio authentication and authorization, whether it be an authorization policy or something of that sort. One quick way I can mostly confirm that is by checking the JavaScript console again. I'm going to come down to the request that was throwing that 403, and instead of going to headers, I'm going to go to the response. You can see that the response coming back is "RBAC: access denied".
Adam: Some of istioctl's features are considered experimental, and even though they are experimental, they're still very powerful, especially in the context of troubleshooting. One of those features is known as authz check. So I'm going to clean up my terminal, and let's run istioctl exp, for experimental, authz check, and I'm just going to output the help message.
Adam: So what is authz check doing? It's actually providing a comprehensive list of all the authorization policies that apply to either a specific deployment or pod, whether those auth policies are scoped at the mesh, namespace, or workload level. It's just a great way to see how Istio is managing authentication and security on that specific workload.
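A sketch of the check against the demo's backend; the pod name is a hypothetical stand-in, and the exact argument form varies by Istio release, so check the help output as Adam does:

```shell
# List every AuthorizationPolicy that applies to the backend workload's
# pod, whether scoped at the mesh, namespace, or workload level.
# (Find the pod name with: kubectl get pods -n welcome-app)
istioctl experimental authz check backend-7d4b9c-abcde -n welcome-app

# "x" is the short alias for "experimental":
istioctl x authz check backend-7d4b9c-abcde -n welcome-app
```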
Adam: So I'm going to come down to the spec. This is a pretty straightforward auth policy. All it's saying is that workloads within the welcome-app namespace are allowed to talk to other workloads within that same welcome-app namespace. So this is your intra-namespace auth policy. But since this is a back-end API, and users' browsers need to hit it in order to grab that user data to display in their browser,
this isn't quite exposed enough for the purposes of our application, and we're going to need to expose it a little bit more. So I'm going to get out of that. Fortunately for us, I do have an auth policy ready to go. Just to walk through what this auth policy is doing:
Adam: Again, it's scoped at the welcome-app namespace level, but it's only going to apply to workloads that are labeled with app: backend, which will apply to our backend workload exactly as we want. This one is going to allow traffic from the istio-system namespace. That is where our ingress controller is, the ingress point of our cluster, and that's the only place we want this to be allowed from, not the world.
Adam: This way, basically, only a request from the browser can get to this workload, instead of things from another namespace, for instance, and it's only going to allow HTTP GET requests, which, for our purposes, is plenty wide enough.
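The policy Adam describes can be sketched roughly like this; the policy name is a hypothetical, and the selector label and rules follow his walkthrough:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-ingress   # illustrative name
  namespace: welcome-app
spec:
  selector:
    matchLabels:
      app: backend              # only applies to the backend workload
  action: ALLOW
  rules:
    - from:
        # only traffic arriving via the ingress gateway's namespace
        - source:
            namespaces: ["istio-system"]
      to:
        # and only HTTP GET requests
        - operation:
            methods: ["GET"]
```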
So I'm going to go ahead and kubectl apply -f that backend policy, and you can see it was created. Let's come back to my terminal and rerun that authz check command one more time. You can see now that we have two authorization policies, exactly as I wanted.
Adam: The one other thing I like to check is that the label selector is working, and that this isn't being applied to any workload that I don't want exposed to the world. So I'm just going to quickly run that against my mongo deployment, which is our back-end database. We would never want that actually exposed to the world.
Adam: You can see, exactly as expected, the only thing being applied to it is the internal auth policy. So that's exactly what we want. Let's come back to the browser, and we'll do one more hard refresh. The good news is, we made it past all of our HTTP error responses.
Adam: So, if you didn't know, every sidecar proxy that gets deployed by Istio has a dynamically changing configuration that gets pushed to it by an Istio component called Pilot. Inside that configuration, it's basically telling the proxy, and the mesh, how to route traffic using pattern matching and defined protocols. I do want to give a quick overview of how our auth redirect flow works, and for that, the easiest way is to pull up the slides.
Adam: Let's go to the next slide, which is where we're at right now. For context: we're getting into our UI and our back end, but the user isn't getting authenticated. What we have is an EnvoyFilter called authservice that's defined in our istio-system namespace, which is our Istio root namespace.
Adam: When a request comes into that sidecar, the request is going to percolate through all of the Envoy filters it has defined before it reaches your application. For this specific filter, when the request comes into it, it takes that request and proxies it to an authservice deployment in the same cluster.
Adam: If you don't know what authservice is, it's a great tool maintained by Tetrate, and it's managing our JWT validation and our redirects to our centralized SSO, whether a token is expired, invalid, or just doesn't exist. The last thing it does is set the JWT authentication header on the request before it passes that request to the application. That way, the application can consume that header, parse it, and be able to identify a user, and that's exactly what our application is going to be doing today.
Adam: But if we are not getting redirected, we need a way to check whether that filter is actually being applied to our sidecar. Fortunately, with istioctl we have a way to do that. So we're going to come back to my terminal, reset this, and I'm going to use the proxy-config capability, and I'm going to run it, like I said, against the listeners on the application's port.
Adam: So you can see here, this is an array of objects. Inside this array are all the Envoy HTTP filters, top to bottom, that a request is going to percolate through before it gets passed on to your application. What I would expect to see listed here somewhere is my external authz filter, but as I scroll down, I don't see it anywhere. I don't see any reference to authservice, which is one of the parameters of that EnvoyFilter.
Adam: There's really no reference, and by the time I get down to the end of this array, it's clearly not there. So, unfortunately, our EnvoyFilter isn't being appropriately applied to this workload. Let's go ahead and exit out of this. Much like everything else in Kubernetes, EnvoyFilters use label selectors, and that's usually the first place I'm going to check. So let's just take a look at the labels on our deployments; I'm going to do kubectl get deploy -n welcome-app.
Adam: So I just made a change to my mesh. Like I said, inside of every sidecar is this dynamically changing configuration, and as I made that change, Istio Pilot should take that change and push it to all the sidecars, so that they know how to route traffic with this updated EnvoyFilter. But how can I be sure that Pilot successfully did that?
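One way to answer that, and the command assumed here, is istioctl's proxy-status:

```shell
# Compare the configuration Pilot holds with what each sidecar has
# acknowledged. SYNCED in the CDS/LDS/EDS/RDS columns means that
# proxy is up to date with the control plane.
istioctl proxy-status
```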
Adam: So when I run proxy-status, you can see that we're synced across the board. Pretty much every sidecar deployed within this namespace had that change I just made pushed to it, and it's all appropriately synced. If I arrow up and rerun my proxy-config command one more time, and look for the HTTP filters, as I scroll down this array this time, you'll see near the bottom
that I now do have this new entry here for my external authz EnvoyFilter, and it's referring to that authservice deployment running in the same cluster, exactly as I would expect. So if I come back to my browser, and I do usually have to do a quick close and reopen, bear with me one second, this is just for local caching issues.
Adam: Let's navigate back to welcome.istiocon.xyz. You can see now that I've been appropriately redirected to our centralized SSO, and if I log in with my credentials, you can see that it's displaying the welcome message exactly as we wanted it to. So hopefully that demo shows some of the really powerful stuff you can do just by utilizing some of those capabilities of istioctl.
Adam: But beyond that, istioctl still provides a lot of other benefits, especially when it comes to the maintainability of your clusters, and beyond istioctl there are a lot of other tools that you should be utilizing, both for troubleshooting and maintenance. It's just tough to squeeze those all into 40 minutes. But to talk a little bit more about those, I am going to kick it back over to Nick. So, Nick, all yours.
Nick: Great, yeah, amazing live demo that Adam just did there. But you're probably wondering why he only showed us how to use istioctl, and that's because a lot of the work that's been done in the community over the last year or two has really been wrapped up into these awesome commands. And we've kind of laid out here the steps that you can take to get a really effective debugging strategy for Istio.
Nick: But then, if you really need to dig deeper, the easiest way to do that is to look at the proxy configuration of Envoy using the istioctl proxy-config command. It allows you to filter down to your ports and applications so that you don't have to dig through thousands of lines of JSON. istioctl proxy-status is also really effective for making sure that your mesh is all up to date and everything is being received correctly.
Nick: Then there are some other commands you can run at different stages to keep your Istio clusters healthy: running upgrade with the dry-run option to check whether it's ready for an upgrade, and istioctl verify-install, which checks that your Istio deployment is up and running correctly. And then I wanted to focus on the last one here. This is a newer addition to istioctl called bug-report, and what it does is:
Nick: It runs through analyze, for example, and a couple of other commands, and it gathers all of the Istio configuration and context of your cluster and bundles it into a zip file. So it's a really effective way to share the Istio problems that you're trying to debug. We've been running it in the DoD: if teams have problems, they run bug-report, they send me that zip file, and I can look at pretty much everything I need to from an Istio standpoint within their cluster.
Nick: Some of the things we didn't talk about, because we didn't have time to show them, are a lot of other amazing things that happened within the community in the last year, in terms of monitoring. The Istio operator has become a lot more mature, and these are things that you should absolutely be using and deploying, especially all the monitoring tools: Prometheus, Grafana, tracing, Kiali, to really have an effective Istio deployment.
Nick: There's also a distribution that makes the lifecycle management of Istio easier: easier to install, easier to operate, and easier to upgrade. It also offers a couple of different flavors of Istio, FIPS compliance, for example. We're working with the DoD to get those hardened, approved images into this distribution, so definitely go check that out. It's open source, we have a lot of weight behind it already, and there's also a community started around making Istio easier to adopt.
Nick: Then, if that's not working, go check out discuss.istio.io. There are a lot of great posts from people who've had problems with Istio and the different types of troubleshooting that they've done. It's a really great forum that I frequent and do a lot of support on. So hopefully your use case isn't that unique, and somebody else may have run into the same problem as you, so you can find bugs that may have already been fixed or, you know, report new ones.
Nick: But finally, if that hasn't really solved your debugging problems, you should absolutely get involved in the Istio community. Bring your problems to the Slack workspace and post them there. It's a really good way to get in touch with the developers and contributors of Istio. And then, finally, if none of that is working, you've probably found a bug in your debugging, so submit that to GitHub, and don't forget to run istioctl bug-report. Yeah, and I think that's largely it. Go check out the Big Bang platform!