A: So, there we go. I'd like to thank everyone for joining us today at the CNCF live webinar, Securing Your Workload Communications with Open Service Mesh. I'm Libby Schultz and I'll be moderating today's webinar. I'll read our code of conduct and then hand over to Philip Gibson, Senior Product Program Manager at Microsoft.
A: A few housekeeping items before we get started. During the webinar you're not able to speak as an attendee, but there is a chat box on the right-hand side of your screen; please feel free to drop all of your questions there, and we will get to as many as we can, either in the middle, when it's pertinent, or at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct.
A: Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page, at community.cncf.io, under Online Programs.
B: All right, thank you, Libby, for that intro. Welcome, everyone; thank you for showing up. I think we've got a really good session today.
B: We're going to be focused really around securing your workload communications: how you can actually add a little more security, or strengthen your security posture, within your Kubernetes cluster. So our agenda is going to look like this (looks like the slides are moving along here): we'll do a quick overview of OSM, for those who may not be familiar with the service mesh, and then, again, this is going to be pretty demo heavy.
B: I've got some recorded demos here that are probably newer than what you've seen out there. I would have done this live, but again, we want to make sure that there are no network disconnects, et cetera. But this is the demo list: we're going to start off doing our traffic access demo, and stay tuned for that; we'll talk about it.
B: We're going to build a little storyline around some of these demos, leading into the next integration. After that we'll talk about the OPA integration, and then we'll move on to our Contour integration, which allows you to do true end-to-end encryption, from TLS at the front to mTLS between the back-end workloads in your environment. And then we'll finish up with the egress workloads: how you can actually control what your pods are communicating to outside the cluster. So I'm excited about this.
B: A lot of these integrations here have been driven by the community; they were actually things that we put more priority on in the backlog. So thanks to all those who participated in letting us know the use cases that you're most interested in. So, OSM: this is probably the newest service mesh in the community. We open sourced around July 2019, and our core principles have always been to provide a very simplified experience for the most common use cases of a service mesh.
B: You know, we're not in the speeds-and-feeds race. One thing that we do feel really good about with OSM is that we are using a very tried-and-true data plane, which is the Envoy proxy, and so a lot of our backlog is really about just opening up a lot of the Envoy functionality in a very elegant way that is very simplified for the operator.
B: So again, these are kind of our core principles: being simple and effortless to install, maintain, and operate; painless to troubleshoot; and, again, easy to configure via the SMI specification. If you're not familiar with that specification, it's kind of an abstraction layer for service meshes, so that consumers of service meshes can really experience a simplified API integration experience with their apps, tooling, and everything else. So this is a growing ecosystem, and we look to really build out a lot more functionality in the original spec going forward.
B: So with that, let's go ahead and jump into our demos. I'm a visual guy, so when I do demos, I know it's, I like to say, a bunch of text flying back and forth. So I keep my PowerPoint animation skills going here, where I kind of give you a preview of what you're going to see in the actual demo itself, right?
B: So what we're going to do is the traffic access demo, and again, there's going to be a little twist at the end here; we're going to build upon a story, and then you can see if something like this could exist in your environment as well. We'll start off with deploying our book buyer in our bookstore demo application. We'll have OSM installed and enabled to manage both the book buyer and the bookstore namespaces, where those applications are deployed, and then we have this permissive traffic mode being true, which means that we're going to pass traffic through the Envoy sidecars, but we're not going to require any additional policies.
B: So this allows you to easily onboard your environment and then figure out which policies you want to really mandate for who can communicate in the mesh itself. So once we do that, obviously we have the Envoy proxies, and everything's going to run smoothly. At this point we'll turn it to false, and then, again, we'll deploy our policies out there, and then you'll see that the book buyer will still be able to communicate to the bookstore v1 service there.
B
Actually
I
got
ahead
of
myself
actually
when
we
flipped
the
permissive
mode
to
false.
That
means
we're
gonna
have
to
have
a
policy.
We'll
then
create
that
policy
deploy
it
out
into
the
cluster
and
then
that's
gonna
resume
our
communications
back
for
the
bookstore
application.
B: Here's the twist, right? We have our service mesh; we have an explicit policy allowing the book buyer to speak only to bookstore v1. That's really what you're going to want to use a mesh for; this is all mTLS encrypted. But on top of that, we actually have a specific policy saying that the book buyer can only speak to bookstore v1. So what if someone in your environment goes rogue?
B: Rogue, you know. So what if we spin up a new service? Maybe you know about it, maybe you don't.
B: Maybe this is a rogue admin who has been paid, or who knows what this application is, and what's stopping them from allowing the book thief to talk to the bookstore? And again, as we progress through all these demos, you're going to see how we can thwart some of this type of activity in your environment, and you'll see how easy it is for an admin who has the permissions to do so to allow this type of activity.
B: So the first thing we're going to do: we already have our book buyer and the bookstore application deployed, so we'll just take a look at those pods. You'll see that we have our book buyer, bookstore, and book warehouse all deployed, a simple three-tier kind of microservice application. Then the next thing we're going to do is look at the list of all the namespaces that OSM is currently managing to provide service mesh functionality, so you'll see, again, the book buyer, bookstore, and warehouse namespaces all enabled. Everything is good there.
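A minimal sketch of those checks, assuming the namespace names used by the public OSM bookstore demo:

```shell
# List the demo pods in each application namespace
kubectl get pods -n bookbuyer
kubectl get pods -n bookstore
kubectl get pods -n bookwarehouse

# List the namespaces OSM is currently managing
osm namespace list
```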
B
Next
thing
we're
going
to
do
is
we're
going
to
take
a
look
at
the
permissive
mode,
the
current
permissive
mode?
What
I'm
showing
you
here
is
just
the
book
by
you'll
see
it's
two
of
two,
so
you
have
your
application
container
alongside
your
your
envoy,
sidecar
proxy
as
well.
So
everything
is
all
configured
so
now,
let's
go
ahead
and
just
take
a
look
in
the
query
and
see
what
our
current
status
is
for
permissive
mode.
B
It
is
set
to
true
again
that's
going
to
allow
us
to
pass
traffic
with
no
explicit
policies
in
the
mesh,
and
what
we'll
do
here
quickly
is
we're
going
to
query
onto
the
book
buyer
pod
and
then
we're
going
to
call
this
logs
and
just
verify
that
it
can
speak
to
our
bookstore
v1
service.
Here.
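That query can be sketched with kubectl, assuming OSM's default mesh config name and namespace (osm-mesh-config in osm-system) and the demo's bookbuyer deployment name:

```shell
# Read the current permissive traffic policy mode from the OSM MeshConfig
kubectl get meshconfig osm-mesh-config -n osm-system \
  -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'

# Tail the bookbuyer container's logs to watch its requests to bookstore
kubectl logs -n bookbuyer deploy/bookbuyer -c bookbuyer -f
```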
B
And
so
our
standard
out
the
way
we're
formatting
this
is,
you
can
see
it's
hitting
the
bookstore
endpoints
that
traffic
is
traversing
through
the
envoy,
sidecar
proxy,
we're
getting
the
200
okay.
So,
at
this
point,
everything's
okay
book
buyer
is
able
to
speak
to
the
bookstore
and
purchase
books.
B
So
next
thing
we'll
do
is
really
again.
What
you
want
to
service
mesh
for
is
to
kind
of
control
the
the
levels
of
access
going
on
inside
your
cluster,
we're
going
to
go
ahead
and
change
the
permissive
mode
policy
here
or
setting
we're
gonna
patch
this
over
to
be
false,
and
that's
gonna
instruct
the
osm
controller
to
ask
for
specific
routing
rules
or
traffic
access
rules
to
allow
traffic
to
happen
inside
the
mesh.
So
as
soon
as
we
do
that
we
can
go
back
and
tail
the
logs
off
the
book.
B
Buyer
and
you'll
see
that
instantly
that
traffic
has
been
stopped.
It
can
no
longer
communicate
to
books
or
v1
service,
and
so
that's
that's
great.
That's
what
you
want
the
service
mesh
for
next
thing.
What
we'll
do
is,
let's
go
ahead
and
just
allow
the
book
buyer
to
speak
to
the
bookstore,
so
we've
got
an
explicit
rule
here.
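Flipping permissive mode off can be sketched as a single patch, again assuming the default mesh config name and namespace:

```shell
# Require explicit SMI traffic access policies for all in-mesh traffic
kubectl patch meshconfig osm-mesh-config -n osm-system --type=merge \
  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}'
```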
B
I
won't
go
into
the
manifest.
This
is
all
kind
of
public
here
it's
on
our
repos,
but
essentially
this
is
saying:
hey
book,
buyer
can
speak
to
bookstore.
If
you
want
service
once
that's
deployed
again,
we
can
go
ahead
and
tell
those
logs
and
we
see
that
access
or
or
the
communications
has
been
resumed
from
the
book
power
being
able
to
talk
to
the
bookstore
so
great
that
that's
that's
what
we
want
now:
here's
where
we
get
a
little.
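The explicit rule just described is an SMI TrafficTarget. A minimal sketch, with the service account and route group names assumed from the public OSM bookstore demo manifests:

```yaml
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: bookstore
  namespace: bookstore
spec:
  destination:            # who may be called
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  sources:                # who may call
    - kind: ServiceAccount
      name: bookbuyer
      namespace: bookbuyer
  rules:
    - kind: HTTPRouteGroup
      name: bookstore-service-routes
      matches:
        - buy-a-book
        - books-bought
```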
B: Now here's where things get a little interesting, right? This is kind of some of the rogue activity that could happen in the environment. So we're going to go ahead and create a namespace for the book thief, and we're going to then add that namespace to OSM to be able to manage; we're actually going to bring that namespace into the mesh.
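Bringing the new namespace into the mesh is two commands, sketched here with the demo's namespace name:

```shell
# Create the namespace and hand it to OSM to manage (sidecar injection on)
kubectl create namespace bookthief
osm namespace add bookthief
```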
B: And then we're going to go ahead and deploy the book thief service, or application. And again, it's going to want to speak to the bookstore service to steal books, as a thief would want to do, and right now our policies are only allowing the book buyer to talk to the bookstore. So if we go ahead and tail the logs off of the book thief, we should see that it's going to be blocked, because there's no explicit policy allowing this activity to happen.
B: And there you go. So at this point, your mesh is protecting you, right? You have the new service in the mesh, there are no explicit policies to allow it, and so your bookstore v1 service is protected.
B: But what if you've got somebody who's going to go rogue, right? You have an admin or an operator who has the permissions to kind of override this; maybe your RBAC is not in place here. So what you're seeing here is: hey, here's an operator, they have permissions to do so. They're going to go ahead and edit the actual traffic access policies, and they're just simply going to add the book thief to the manifest and allow it to then talk to the bookstore. And again, this is on the fly.
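That rogue edit amounts to adding one more entry under the existing TrafficTarget's sources list, sketched here with the same assumed demo names:

```yaml
  # Fragment: the spec.sources list of the TrafficTarget, after the edit.
  # With the second entry in place, the mesh itself will happily authorize
  # the book thief as well.
  sources:
    - kind: ServiceAccount
      name: bookbuyer
      namespace: bookbuyer
    - kind: ServiceAccount
      name: bookthief
      namespace: bookthief
```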
B: Okay, so we've added that traffic access policy. Let's now go ahead and tail the logs of the book thief, and voila: it now has access to steal books from the bookstore. And so again, this is really to show that yes, the service mesh can protect you, it can actually give you the encryption that you're looking for, but it's not a one-stop shop for really securing your cluster. There are other things that you're going to want to look into and investigate to really tighten up your posture of communication inside your cluster.
B: So how do we get this rogue admin to not, you know, turn things on that shouldn't be on? The next thing you're going to see is our OSM OPA integration. This is really the extension from Envoy which allows Envoy to actually have an external authorization system, or policy: an additional set of guards, or gates, to ensure that your policies are what they are. And so what you're going to see here is, just as we left off, the book thief is stealing books from the bookstore, and what we're going to do is deploy OPA.
B
This
is
basically
a
a
project
around
policies
for
kubernetes
and
in
this
instance
here
I
do
have
opa
deployed
in
the
same
cluster,
just
as
a
demo,
most
likely
in
your
environment
you're,
going
to
want
to
have
opa
deployed
outside
of
the
cluster
somewhere
that,
where
there's
more
guarantees
of
who
has
access
to
implement
policies
in
your
environment,
so
we'll
deploy
lpa
and
then
what
we're
going
to
do
is
we're
going
to
have
this
policy
and
we're
going
to
tell
opa
that
hey
only
the
book
buyer
can
talk
to
the
bookstore
and
we'll
deploy
that
now
again,
the
the
local
osm
traffic
access
policy
is
still
allowing
the
book
thief
to
talk
to
the
bookstore.
B: But now we're going to have this additional check, this additional external authorization provider, to ensure that our policies are correct. So we'll have that deployed, and then what's going to happen now, as part of the communication handshake, is that both the book buyer and the book thief have to go to OPA and say: hey, do you have a policy for me to be able to talk to the bookstore?
B: And so again, it's happy; this is normal. We're then going to go look at our book thief pod, tail its standard out, and see if it can still steal books from the bookstore service. And it can, because again, there's a local traffic access policy in the mesh that allows it to do so.
B: So now let's go ahead and set up this external provider, which is going to actually give us an additional gate of authorization inside the mesh itself. We'll go ahead and create a namespace for OPA, and then we'll deploy the OPA application in that namespace. And then here's where our integration is: we have to edit our OSM mesh config, and we're going to go down and basically tell the mesh config: hey, I want you to also use this external provider, being OPA.
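In recent OSM releases that hook lives in the MeshConfig under inboundExternalAuthorization. A sketch, where the OPA service address and port are assumptions based on this demo's in-cluster setup:

```yaml
# Fragment of the osm-mesh-config MeshConfig (osm-system namespace)
spec:
  traffic:
    inboundExternalAuthorization:
      enable: true
      address: opa.opa.svc.cluster.local  # assumed in-cluster OPA service
      port: 9191                          # OPA's Envoy ext_authz gRPC port
      statPrefix: inboundExtAuthz
      timeout: 1s
      failureModeAllow: false             # deny traffic if OPA is unreachable
```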
B: So since we added this external provider, all of the communication in the mesh has stopped, because at this point we have no rules in OPA that are going to pass anything; it defaults to false. So you'll see that we got a status 503. Don't mind some of this other standard out, we've got to fix some of the way we were logging this, but we basically looked at both the book buyer and the book thief, and they can't access anything.
B: So now let's go ahead and edit OPA and give it some rules that are going to allow only the book buyer to speak to the bookstore. And real quick, we're just going to look at the logs coming out of OPA. If you're familiar with this, there's a bunch of decision IDs; it's kind of computing everything, and you'll see that in those attempts everything was false. So that's why neither the book buyer nor the book thief was able to speak to the bookstore service.
B: And again, we're just verifying here that your local OSM access policy is allowing both the book buyer and the book thief to speak to the bookstore service.
B: If you're familiar with the Rego template: for the bookstore, we're going to allow a GET request to the books-bought path, and we're only going to allow the book buyer. And there are a couple of endpoints; we also need the get-new-book endpoint, but this is essentially the policy.
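A hedged sketch of what such a Rego policy for Envoy's external authorization can look like. The package name, the paths, and especially the source-identity check are all assumptions that depend on how OPA and the mesh are configured; verify what actually arrives in OPA's decision logs before relying on any of it:

```rego
package envoy.authz

default allow = false

# Allow only GETs to the demo endpoints, and only from the book buyer.
allow {
    input.attributes.request.http.method == "GET"
    allowed_paths[input.attributes.request.http.path]
    # The exact principal format depends on the mesh's certificate naming;
    # inspect OPA's decision logs to confirm it.
    startswith(input.attributes.source.principal, "bookbuyer")
}

allowed_paths = {"/books-bought", "/buy-a-book/new"}
```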
B: You can see that the communication has been resumed, and again, this is through an external authorization provider. And if we simply look at the book thief again, you'll notice that it is still blocked. Even though there's a local OSM policy that says the book thief can speak to the bookstore, that extra hop going back to OPA to validate those rules is prohibiting the book thief from speaking to the bookstore.
B: And we're just simply going to look at the, I'm sorry, the decision log coming out of OPA, and you'll see that the results are true for the book buyer speaking to the bookstore.
B: All right, so this next integration: we get a lot of questions about this. We were asked many times if we were going to actually build a specific ingress for OSM, and we kind of looked to see what was out there in the community. Obviously Contour is a very well-known and trusted ingress, and so we've actually worked with the Contour team to bring that functionality into OSM. We believe this is a really good kind of collaboration inside the CNCF community.
B: And so what we're going to show you, what you can do with Contour, is a lot of things really around E2E: full encryption from your TLS front end to the mTLS back end. So what you're going to see here is, we're going to have our Contour ingress deployed, and we're going to deploy a sample app.
B: This is the httpbin app here, because we're going to test some things a little later. So we have that app deployed, and then with Contour, what you can do is actually put a policy in place to say: hey, this ingress is only allowed to talk to this back end. So at this point, you're already securing the communications between the ingress and whatever back-end workloads you have.
B: If you've not identified or configured any other service in your cluster, the ingress will not route any traffic to it, so the policies are very fine-grained. And what we'll see here first is just standard HTTP: you're going to have an external client be able to hit the Contour ingress and then get access into our httpbin application.
B: Now, beyond that, again, the question that we get is: hey, how can I get full E2E encryption?
B: You know, from my external clients coming into the ingress, and then back to our back-end workloads. And what we're able to do is share the CA bundle that comes from OSM; we're able to take that cert, put it onto the Contour ingress, and then that's going to allow us to basically hand off the TLS encryption to mTLS on the back end. And so you will essentially have a true end-to-end encryption path for all of that communication that's happening, and that will all check out here.
B: The next thing we can do: again, we talked about really authorizing the ingress to talk to back ends, et cetera. If we ended up renaming this ingress to a different name, one that's not the CN name the back end expects for the ingress, all that traffic will stop too. So those certs are very specific to those running instances in your cluster.
B: So currently, right now, we just have the osm-system namespace. We're going to look at the service for Envoy in that osm-system; you'll see that we have, oh boy, I'm sorry, not Envoy: Contour, running as a load balancer, so our ingress is live. And then, to restrict the traffic, we just have to create an annotation there on the namespace itself. Then, for some of the commands downstream, I'm just going to go ahead and grab the IP address of the actual ingress itself.
B: And then let's just get an enumeration of the pods for the application. You see we're up and running with 2/2: the application container as well as the Envoy sidecar. And then our service endpoint is up and running, so now we're going to go ahead and create the Contour ingress specifically for this back-end service.
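That ingress is a Contour HTTPProxy. A minimal HTTP-only sketch, where the app name, FQDN, and port are assumptions patterned on the OSM httpbin demo:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: httpbin
spec:
  virtualhost:
    fqdn: httpbin.example.com   # hostname external clients will use
  routes:
    - services:
        - name: httpbin
          port: 14001           # assumed service port from the demo app
```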
B
And
this
is
all
upstream
integration.
So
if
you're
curious
hey,
how
do
you
set
this
up
simply
visit
the
contour
project
site?
All
of
the
documentation
is
completely
applicable
here,
and
so
we
got
that
ingress
deployed
and
now
we're
going
to
do
we're
just
going
to
test
the
application
from
an
http
standpoint.
B: So this is not HTTPS, but what's coming across is okay; our external client is able to access that back-end service. And now we will actually configure this to be full, true end-to-end encryption: go ahead and annotate the service here for TLS, and then we'll create the Contour proxy that's going to make sure that the back end will be TLS as well.
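The TLS-to-mTLS version of the proxy adds a client-facing certificate plus upstream validation against the mesh's CA. A sketch, where every secret name and subject here is an assumption:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: httpbin
spec:
  virtualhost:
    fqdn: httpbin.example.com
    tls:
      secretName: ingress-cert        # cert presented to external clients
  routes:
    - services:
        - name: httpbin
          port: 14001
          protocol: tls               # originate TLS to the Envoy sidecar
          validation:
            caSecret: osm-ca-bundle   # CA bundle shared from OSM
            subjectName: httpbin.httpbin.cluster.local
```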
B: Okay, so all of that's deployed, and, I mean, you won't see anything magical here other than us getting an okay that the traffic is happening. But this is actually representing true end-to-end encryption: TLS from the client to the ingress, and then that being translated over to mTLS on the back end, and everything comes back okay here.
B: In this next set here, we're just going to verify that unauthorized sources can't access the back end. Here again, we're going to change the name of the ingress, and since we've changed the name of the ingress, that ingress is not allowed to talk to the back end. So if we do that curl command again, coming from the external client, you'll see that this is all forbidden; no traffic is able to pass through in the E2E scenario here. And then we can validate the client access here as well, and if we wanted to, we can actually skip all of this.
B: All right, so again: E2E encryption. I know that's highly desirable, and I know we had a lot of people in the community asking us how to do that with OSM, and we're able to do that with our Contour integration here.
B: So next: this isn't really an integration, but this is just a feature that we have, which is really controlling the egress in your environment.
B: You may or may not know what workloads are being deployed in your environment. They could be reaching out to known endpoints, or they could be going to endpoints that you don't want them to go to, or unauthorized endpoints. So here's a simple way: we'll look at some of the configuration that OSM allows you to do to actually control the egress communications of the pods and/or workloads that are being deployed in your cluster.
B: So the first thing: permissive mode is false, so again, we're going to be asking for access policies to have traffic pass in the cluster. We do have this notion of global egress being true. So by default, when you deploy anything, and here we're just going to create a simple curl application, we'll create a curl namespace and deploy the application, by default we're going to allow your workloads, in this case this pod, to just be able to leave the cluster for communication.
B: Next we'll flip the global egress policy to false, and what that's going to do is basically lock down all the communication of your workloads trying to exit the cluster itself. And what you'll see here is, immediately, we'll go ahead and exec into this curl app, and it won't be able to exit the cluster, and that's a good thing. So now you're going to get into really fine-grained control of which pods in your cluster you want to have access outside the cluster itself.
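That global switch is another MeshConfig field. The flip can be sketched as, again assuming the default mesh config name and namespace:

```shell
# Block all egress from meshed workloads unless an Egress policy allows it
kubectl patch meshconfig osm-mesh-config -n osm-system --type=merge \
  -p '{"spec":{"traffic":{"enableEgress":false}}}'
```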
B: So with that, we'll go ahead and create a specific, fine-grained policy that's just going to allow only the curl app to actually exit the cluster. All other workloads will still be confined by the overall global policy here, which is going to stop all the traffic. So once we do that, we'll go ahead and flip that, and then, again, we've got something specifically saying that yes, the curl app can actually access something outside the cluster.
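That fine-grained rule is OSM's Egress policy resource. A sketch, with the source and destination names assumed for this curl demo:

```yaml
apiVersion: policy.openservicemesh.io/v1alpha1
kind: Egress
metadata:
  name: curl-egress
  namespace: curl
spec:
  sources:                 # only this workload may leave the mesh
    - kind: ServiceAccount
      name: curl
      namespace: curl
  hosts:
    - httpbin.org          # assumed external endpoint being tested
  ports:
    - number: 80
      protocol: http
```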
B
And
again,
that'll
come
at
come
back
is
true,
so
we're
going
to
allow
everything
outside
the
cluster
by
default.
Now,
let's
go
ahead
and
set
up
our
testing
app,
which
is
going
to
be.
You
know.
A
B
B
B
B
B: And you see that you can. And again, with that egress policy, it's very specific to the endpoint, so that curl app couldn't hit any other endpoint that has not been identified in that policy. So again, you can get very fine-grained controls using this. The next thing we'll do is just show you us removing the policy, and if we go back and issue the same command, it can no longer exit the cluster itself.
B: So that's kind of all the demos. Hopefully you saw something there that may inspire you to do something and actually heighten your security posture. Next, we're going to just talk about some of the things that are on the roadmap for OSM coming up. We were actually at KubeCon last week or so.
B: We made an announcement: we actually have our v1.0.0 release candidate available, and the stable version of that is going to actually be available here really soon. Next, we're talking about expanding the SMI functionality. So again, for those familiar with that specification, around really building this abstraction layer for service meshes, we think that there are more things Envoy is doing that we can actually bubble up into that spec, to make it easier to access some of those features, such as retries, circuit breaking, et cetera.
B: So there's a big initiative going on around that. We're also looking to expand our VM support, being able to bring things outside of the cluster into the mesh itself; we currently have some things that are experimental, and that's out there as well, so please check that out if you want to see how you can bring VMs into your cluster. There's also multi-cluster support; that's been probably one of the hottest topics as of late. People want to be able to actually extend these functionalities.
B: So we've got Envoy supporting Windows now, and we actually just did a demo at this KubeCon North America showing Windows container support in OSM. We're happy about the direction of where we're going with that, because that's going to basically open up a lot more workloads that we have customers asking to be secured by the service mesh. And then, last: WASM filters and extensions.
B: There's a lot of activity around the whole WASM space, and we actually have some of this working. We're actually looking to the community to give us a little more feedback on some of the filters, or just the WASM support, they're looking for in the cluster itself.
B: All right, so that's it for the roadmap. I've not been looking at the chat here, but I think we've got enough time; we'll open it up for questions. Please drop anything that you need me to verify or clarify, or any general questions you've got about the OSM project itself.
B: All right, thank you. Hopefully I don't slaughter your name here: Georges, thank you for showing up, thanks so much. I see you, Bridget. Okay, yeah, no specific questions. Yeah, if you want to get involved, please go out to openservicemesh.io; that'll be a link to the actual GitHub repo. There are tons of kind of first issues that you can get involved with. And then we also have our monthly OSM call.
B: I don't have the exact dates, but if you do want to kind of meet some of the maintainers, myself included, you can actually come to those calls. And again, if there are things that you're looking to see happen, or you've got a certain use case that you don't see OSM being able to handle, please show up to those calls; let's have a conversation about them and see if we can include those use cases that we may have not thought of.
B: Yeah, so the question: does OSM throttle connection requests? Not at this time; that is something that's in our backlog. It's basically something we would do from the Envoy level. So, as I mentioned earlier, we are using the Envoy proxy data plane, so we can basically tap into everything that Envoy is doing. Again, our purpose here is to kind of make those configurations super easy, and I believe we do have that as part of our backlog, along with circuit breaking and retries.
B: Okay, we got a question from Satish: is the osm CLI a dependency, or can everything be done with kubectl? Oh, yes, you will need the osm binary to interact with the OSM control plane, and that's much in line with all the other meshes out there; they're going to have a very specific binary to configure the service mesh.
B: Do we provide integration for monitoring? The answer is yes. There are a couple of ways you can achieve this with the open source itself: we do have integration with Prometheus and Grafana, so you're able to do some monitoring that way.
B: OSM will also be provided as an add-on to the AKS service in Azure, and once we're on the Azure platform, there are a lot of things we can tap into. So there is integration with Azure Monitor as well, and then future integration with App Insights, to actually draw up some of the distributed views as well.
B: So if you're egressing out of the cluster, whoever is going to support that is going to be on the other end, whatever endpoint you're configuring. You did see that we configured ingress coming into the cluster via Contour, and so we do support, again, TLS over to mTLS, end-to-end encryption. But for anything you're talking to that's going to go outside the cluster, at that point you become the client, and it's going to be on the serving side whether they're going to support any encryption for you.
B: Thank you, all, for showing up. Hopefully you saw something new and are excited to try it in your environment. Again, go ahead, kick the tires on this. If you've got any issues, again, go to GitHub and post your issues; we're on it, and we've got our maintainers looking at those queues quite often. So yeah, feel free to give us some feedback on anything that you've seen here today, or anything else that you see in the documentation to try out.
A: Awesome, thank you so much, Phillip, and thanks, everyone, for joining us. The recording will be up online later today; you can access it through the link for registration or on our YouTube playlist. And I think that that is it, but thank you so much for your time, and we will see y'all at the next live webinar. Thanks, everyone, for joining us.