From YouTube: CNCF Networking WG Meeting - 2018-07-17
A: I was going to say let's get started with a kind of introduction to Weave Net, and then we'll do a little bit of Q&A afterwards. I'm sure a few more people will join us as we get going, but I don't think we want to wait any longer, so Brian, over to you.
C: Yeah. We went off in a completely random other direction after they told us to talk about it with everybody, which was a moment of really special engineering, but we're back again now. Weave Net is now a project which, you know, we're proud of. It's got a lot of users — people who don't necessarily want to understand networking; they just want Kubernetes to work.
C: So, you know, we're really keen to see it go into the CNCF and become a more widely collaborated-on tool, to have a more formal roadmap and all of those good things — and, you know, we want the blessing of the networking working group to do that. So this is an update, and Ken also requested a bit about Scope, which we also want to put into this.
B: Talking about Weave Net: yeah, it's an overlay network — I've got some pictures of what exactly we mean by that. The number one thing we aim for is that it just works, a kind of invisible infrastructure. So we don't really speak the language of networking people, if you like; we try to come at it from the perspective of someone who's trying to get their application working. As far as possible, we try to make it just work, and it's completely open source. Let me walk briefly through how most people use it.
B: I think we've gone through 30 million pulls now. One of the interesting things about this kind of open-source software is that we have virtually no idea what most people are doing with it. We can observe the number of people doing a docker pull on the container image, we talk to some of them, and we know some people have integrated with the software, but by and large we don't get to hear what people are doing with it, which is kind of weird.
B: So most people will use it this way: they'll be creating their Kubernetes cluster, and they'll run a kubectl apply command with a URL — which is a little bit complicated, because we want to find out the version of their system — but, broadly speaking, it installs as a DaemonSet, which means Kubernetes will run one copy of this on every node in the cluster.
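For reference, the documented one-liner around the time of this talk looked roughly like the following — the query string reports the cluster's Kubernetes version so a compatible manifest is served back (a sketch of the install flow, not a quote from the talk):

```
# Install Weave Net as a DaemonSet; the embedded `kubectl version` output
# lets the server return a manifest matching this cluster's version.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```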
B: So in this picture, the cluster is the outer rectangle and the nodes are the inner rectangles, depicting that we run a daemon on every machine as a DaemonSet. The daemons automatically figure out where each other are, and they form a mesh, talking to each other. They form a control plane by gossip, and this way they can form a mesh within one Kubernetes cluster.
B: Zooming in, in a lot more detail, this is one node. If you're setting up one pod, there's a thing called a CNI plugin — CNI is the Container Network Interface, which is a CNCF project — and kubelet, which is the controlling daemon of Kubernetes, will call the ADD command of our plugin, which is now installed on the machine. We'll set up a virtual Ethernet interface into the pod, connected to a bridge locally. I'm just going to whiz through this really quickly; I guess some of the people here will be very familiar with it.
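As background: kubelet discovers which CNI plugin to call via a small JSON config file on each node. A minimal sketch of the format (field values here illustrate the CNI config shape, and are not copied from a specific Weave Net release):

```json
{
  "cniVersion": "0.3.0",
  "name": "weave",
  "type": "weave-net",
  "hairpinMode": true
}
```

The `type` field names the plugin binary kubelet invokes with the ADD/DEL commands described above.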
B: We have a number of different technologies that we use to make things work under different circumstances, but the most common is VXLAN. This runs as a kernel module: every packet going from one pod to another gets encapsulated — wrapped up inside another packet — and those packets go machine to machine, so your underlying network doesn't really see the pod traffic. This is what lets you have a lot of flexibility in picking IP addresses and making up completely new networks in software on top of the real network. So that's basically how it works.
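Conceptually, the encapsulation step can be sketched like this — a minimal illustration of the idea, not Weave Net's actual code; the field names and the simplified header layout are invented for clarity:

```python
# Minimal sketch of VXLAN-style encapsulation (illustrative only).
def vxlan_encapsulate(inner_frame: bytes, vni: int, node_src: str, node_dst: str) -> dict:
    """Wrap a pod-to-pod Ethernet frame in a node-to-node UDP packet."""
    # VXLAN header is 8 bytes; the 24-bit VNI identifies the virtual network.
    flags = b"\x08\x00\x00\x00"                       # I-flag set, rest reserved
    header = flags + vni.to_bytes(3, "big") + b"\x00"
    return {
        "src": node_src,                  # the underlay only sees node IPs
        "dst": node_dst,
        "udp_dst_port": 4789,             # IANA-assigned VXLAN port
        "payload": header + inner_frame,  # the pod traffic rides inside
    }

pkt = vxlan_encapsulate(b"inner-ethernet-frame", vni=42,
                        node_src="10.0.0.1", node_dst="10.0.0.2")
```

The point of the sketch is simply that the outer packet is addressed node-to-node, so the physical network never routes on pod addresses.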
B: We set up this daemon running on every machine; the daemons talk to each other and they share the address space. Kubelet tells us to connect up pods; once they're connected up, we route the packets and they go across your underlying network. So, let's see — motivation and differentiation. I talked about the "it just works" idea; I talked about cross-domain, cross-whatever-you-like.
B: We do encryption node to node. We support multicast, which a lot of people don't — that's almost unique amongst container overlay networks. I talked about CNI; we also support CNM, which is the Docker plug-in model, and we also have an implementation of Kubernetes network policy, which is effectively a firewall between every pod in your cluster.
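For context, a Kubernetes NetworkPolicy is a namespaced YAML object that a plugin like this enforces. A hypothetical minimal policy (all names invented) that only lets frontend pods reach backend pods on one port might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend                  # pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```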
C: Generally, the idea for the CNCF is to identify a number of different project areas, or groups of projects — security, pipelines, observability, networking, storage, serverless, to name some — and then a more or less identical template for how they describe the world. So, you know, in the case of security, the example given was showing people the security landscape and the security reference architecture: showing them patterns for how to build a stack, and telling them what's in the space.
C: So that could be a deliverable of the networking working group: it could say, okay, you've got CNI, but in the bigger networking picture of cloud native, here are the other things that we think you need to be aware of if you're thinking about cloud-native networking — and then multicast and firewalls could be examples of things that are on that list.
C: You know, one of the very first large companies to take notice of Weave Net was Amazon, because you could do things on ECS with it, like service discovery, and that just made a bunch of demos and meetups and things easy — and they continue to be interested in it. So I think they would probably be a backer; I've asked them many times. If we did a push for the CNCF for Weave Net — I just think we should get on with it.
A: You know, there's a couple of other projects, like Contiv, where you've thought about trying to bundle up a couple of different projects together and take that whole bundle forward. But we can talk about that offline — kind of see if it makes sense to do that versus just individual projects.
C: Yeah, yeah — I mean, with Contiv, remember we talked with the folks at Cisco. They seemed open to the idea of being able to focus on teams' kind of policy areas — and I think the fashionable phrase in the networking community is "intent". There were a bunch of things they were working on there that they'd need to have in place just to get that, but they didn't say whether they'd decide to support the Contiv implementation of Kubernetes networking.
B: So, encryption was mentioned. That fits in basically on top of encapsulating the packet: we also encrypt it, and we do that in the kernel as well, if we can. It's basically using the same kernel modules that IPsec uses, but we don't require you to set up certificates all over the place to use it. So encryption kind of sits in the same area.
B: Next to this box marked VXLAN — we encrypt node to node. That's distinguished from, say, Envoy or Linkerd, which would be encrypting pod to pod. And then multicast — multicast is a bit of an outlier. Multicast falls out of our implementation because it's a layer 2 implementation, so we're actually transporting packets MAC to MAC.
B: It's been going a long time, so we have a bunch of pull requests, a bunch of GitHub stars, and the whole thing is about twenty-four thousand lines of code. The main implementation is in Go; a lot of the tests and ancillary things are in shell script. That gives you some idea of the scale of the thing. So — any further questions about Weave Net?
B: Weave Scope. We also run a daemon on every machine, and those daemons collect information about what's running. They talk to Linux to hear about processes; they talk to Docker to hear about containers; they talk to Kubernetes to hear about DaemonSets and services and things; and they send what we call reports — they upload all that data to the Scope app, which can then render it dynamically on the screen. So it effectively draws an architecture diagram of your running system, and then we also have stuff going the other way: controls and pipes.
B: So here we're running Weave Scope — this is Scope packaged as a component of our commercial offering, Weave Cloud, but basically all the software is open source — and we're visualizing an install of the Sock Shop, which is a demo application that we put together for this kind of purpose. Right now we're looking at Kubernetes controllers, so most of these things are Deployments.
B: You can see the containers in this Deployment, the network connections, things like that. We can zoom in on one of those containers and get a little bit of metrics data. As I mentioned, we can do things like restart a container, or open a shell inside a container — so we kind of talk about it as a DevOps console that can be used for a bunch of visualizing or troubleshooting activities.
B
What's
going
on
in
my
system,
let
me
you
know
hop
on
here
and
take
a
look
at
it
see
what's
going
on,
maybe
fix
some
config
or
something
like
that.
That's
that's
how
we
see
scope
being
used,
it's
a
very
graphical,
interactive
program,
so
I'm
looking
at
controllers
here
we
can.
We
can
look
by
hosts
we're
running
the
system
on
three
hosts
at
gke
and
they're
talking
to
various
cloud
services.
C: So, next: one thing that's important about Weave Scope is that we decided recently to move its governance to a community model, as a preparatory measure for going into the CNCF. There's a call once every two weeks; there's a Slack channel — which unfortunately coincides with this very presentation today — and it's a place for collaboration to occur between the companies that are developing Scope alongside us, such as, for example, a bunch of folks in the storage space working on, you know, container volumes.
C
So
if
you
look
in
the
brain
this
one
yeah,
this
is
a
blog
post
by
the
CTO
of
a
company
called
Maya
data
about
a
protocol,
open
EBS
that
you've
probably
seen
talking
about
adding
the
storage
volumes
to
scope,
and
this
this
is
being
useful,
so
by
port
works
in
demos
who
contribute
to
as
well.
So
you
can.
C
Can
get
the
famed
benefits
of
being
in
a
community
environment
with
this,
because
you've
got
a
different
set
of
use.
Cases
that
certainly
need
works
would
not
have
understood
how
to
approach.
It
would
have
taken
us
a
long
time
to
figure
out
what
the
optimal
choices
were
being
done
by
people.
If
you
actually
know
the
space,
there
are
other
people
who
think
this
is
exciting,
like
people
in
service
mesh.
C: You know, that's a decent number of people, and we would just like it to be broader, more popular, to take advantage of what it can do. We don't have any particular desire to lock it in as a proprietary open-source monetization path; we'll take what we already have with Weave Cloud, and that's good.
A
Sorry,
it
just
got
really
noisy
in
my
little
ellie
over
here.
Allergies,
no
I,
think
that
that
to
me
is
really
interesting
and
I
mentioned
previously
at
the
the
service
mesh
use
case.
I
think
it's
one
that
we
talked
about
in
my
last
last
call
is
why
scope
Kane
kind
of
seemed
like
interesting
to
us
to
talk
about
with
you
how
health
big
is
at
keys?
Anyone
outside
of
weave
in
that
community
now
that
you've
opened
up
or
is
it?
Is
it
still
looking
for
additional
contributors
from
outside
of.
C: So, Ken, do you think anyone in your immediate network — current, future, or past employers, friends, etc. — would be interested in participating in Scope and helping with the CNCF? The main participation so far seems to be from these data storage guys, although there are smatterings of interest in some other places, like IBM — they've got customers using Scope in finance, apparently.
D: ...heard of it before? Okay, great. So, the Open Policy Agent: it's an open-source, general-purpose policy engine, and earlier this year it was added to the CNCF as a sandbox-level project. The goal of the project, really, is to give a sort of reusable building block to the ecosystem that allows people to offload policy decisions at any layer of the stack, in any system.
D
Of
whether
you're
talking
about
you
know
API
authorization
or
in
Mission,
Control
and
kubernetes,
or
you
know,
host
level
access
with
SS
agents,
you
don't
things
like
that.
Opa
gives
people
this.
You
know
reusable
building
block
that
allows
them
to
codify
policies
around.
You
know
who
can
do
what
across
the
stack
and
so
the
core
thing
that
the
project
gives
people
is
this
high-level
declarative
policy,
language
that
we
call
Rago
and
Rago
is
sort
of
purpose-built
for
authoring
policies.
D
That
answer
questions,
like
you
know,
is
this
user
or
a
lot
to
perform
this
operation
on
this
resource,
or
you
know
what
annotations
need
to
be
applied
this
deployment
when
it's
created
in
kubernetes
and
so
on?
So
it's
a
it's
a
high
level
sort
of
declarative
language
and,
and
one
of
the
interesting
things
about
it.
Is
it's
not
just
limited
to
boolean
yes-or-no
decisions
like
a
lower
tonight,
you
can
actually
express
just
sort
of
decisions
that
are
represented
by
arbitrary
JSON
data.
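As a hypothetical sketch (the package, paths, and field names here are invented, not from the talk), a Rego policy can define both kinds of decision side by side:

```rego
package example.authz

default allow = false

# Boolean decision: users may read their own records.
allow {
    input.method == "GET"
    input.path == ["records", input.user]
}

# Arbitrary-JSON decision: a rate limit that depends on the caller's tier.
rate_limit = {"requests_per_minute": 100} {
    input.tier == "free"
}
```

The caller supplies `input` as JSON and asks OPA for `allow` or `rate_limit`; the second rule returns a structured document rather than true/false.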
D
D
So,
regardless
of
whether
you
integrate
it
as
a
library
or
you
run
it
as
I
was
little
daemon
or
as
a
sidecar,
it's
basically
the
same
model,
you
you
run
it
next
to
the
service
and
you
offload
policy
decisions
from
the
service
to
OPA
when
they
need
to
be
made
when
it
comes
to
actually
evaluating
policies,
OPA
doesn't
introduce
any
kind
of
runtime
dependencies
or
anything
like
that.
It
keeps
all
of
the
policy
and
data
that
it
uses
to
make
decisions
in
memory.
D
So
that
means
that
it's
quite
lightweight-
and
it's
quite
easy
to
deploy-
you
don't
have
to
you-
know,
run
another
instance
of
NCD.
You
don't
have
to
run
another
database
anywhere.
All
of
the
policy
and
data
that
open
uses
to
make
decisions
are
kept
in
memory
and
then,
in
addition
to
the
core
kind
of
evaluation
engine
which
includes
the
parser
and
compiler
to
the
language
in
the
interpreter,
there's
also
a
collection
of
tooling
that
helps
you
sort
of
build
test
and
debug
your
policies.
D
So,
for
example,
there's
an
interactive
shell
which
you
can
use
to
test
policies
kind
of
manually
when
you're
developing
them.
We
have
IDE
integrations
that
you
can.
You
can
basically,
you
know,
load
your
policies
into
vs
code
and
then
you
know
evaluate
portions
of
them,
see
your
you
know,
test
coverage
and
so
on.
We
have
a
test
framework,
so
you
can.
Actually,
you
know
check
that
your
policies
are
covering.
You
know
the
entire
API,
as
you
would
expect,
and
then
we
have
a
bunch
of
debugging
facilities
like
policy.
Tracings
that
you
can.
D
You
know
understand
why
a
particular
policy
decision
is
being
returned.
The
way
it
is
and
then,
in
addition
to
the
core
project,
there's
also
sort
of
a
standard
library
of
policies
that
cover
different
use.
Cases
like
API
authorization,
admission
control,
auditing
and
more,
and
we
also
have
integrations
with
a
bunch
of
projects
like
kubernetes.
This
do
you
know
terraform,
you
name
it,
and
the
nice
thing
about
those
integrations
is
is
that
you
can
actually
get
up
and
running
with
OPA
without
having
to
write
any
code
to
integrate
it.
D
So
a
lot
of
our
use
cases
are
around
enforcing
invariants
or
constraints
and
different
infrastructure
or
platform
projects
like
kubernetes,
and
so
today
you
can
basically
take
OPA
and
you
can
run
it
on
top
of
kubernetes,
for
example,
as
an
admission
controller,
without
having
to
write
any
code.
So
with
the.
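One hypothetical flavor of such an admission rule (the registry name is invented; this is a sketch of the pattern, not a policy from the talk) denies Deployments that pull images from an untrusted registry:

```rego
package kubernetes.admission

# Deny Deployments whose containers pull images from outside a trusted registry.
deny[msg] {
    input.request.kind.kind == "Deployment"
    image := input.request.object.spec.template.spec.containers[_].image
    not startswith(image, "registry.example.com/")
    msg := sprintf("image %q is not from the trusted registry", [image])
}
```

Kubernetes sends the admission review as `input`, and any message collected in `deny` rejects the request.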
D: Okay, all right. So, in terms of community: we started the project about two years ago — actually, two and a half years ago now — and the goal at the beginning was just to get this policy engine off the ground by defining the language and implementing the core evaluation engine. And so we did that.
D
And
then
in
the
summer
2016
we
took
okhla
out
and
we
started
integrating
it
with
a
bunch
of
different
open-source
projects
and
solving
actual
policy
enforcement
problems
that
people
are
having
at
different
parts
of
the
stacks.
So
we
built
a
mission
controller
for
kubernetes
to
enforce
all
kinds
of
invariants.
In
your
kubernetes
system,
we
indicated
with
docker
to
do
similar
kind
of
admission
or
container
runtime
security
enforcement
at
the
host
level.
D
We've
integrated
with
Pam
the
pluggable
access
module
framework
on
Linux,
so
you
can
actually
do
access
control
over
SSH
and
sudo,
and
then
it's
been
integrated
with
a
handful
of
other
projects,
different
parts,
the
stack
it
one
of
our
most
common
use.
Cases
is
around
just
API
authorization
and
micro
service,
API
authorization
and
so
opens
actually
integrated
into
sto.
The
eesti
of
security
sig
took
open
and
they.
D
Plugin
there
I
think
it's
one
of
the
I
think
it's
the
only
external
authorization
plug-in
that
they
have
today
we're
integrated
with
linker
D.
We
have
approved
constant
integration.
There,
we're
gonna
go
with
Cloud
Foundry.
Someone
just
recently
built
an
integration
with
Kong,
so
API
offers
engine
is
a
very
common
use
case
for
Oba.
Basically,
you
know
every
every
project
every
product
that
runs
in
the
enterprise
has
to
have.
You
know
some
form
of
authorization,
whether
it's
role
based
access,
control
or
attribute
based
access
control.
D
They
need
to
have
it
and
so
opens
a
nice
sort
of
building
block
there
a
tool
there,
because
it
helps
them
get
to
market
faster,
so
stirrup
steerer.
We
were
one
of
the
main
we're
one
of
the
main
contributors
right
now
about.
Halfway
through
last
year,
we
started
working
with
folks
at
Google
firebase
on
some
improvements
to
the
language
and
so
sort
of
as
of
last
year
that
we're
there
so
the
two
main
sponsors
of
the
project.
D
In
terms
of
adoption,
we
have
a
number
of
people
using
open
today
and
it's
sort
of
a
different,
a
different
degrees
of
deployment.
Netflix
is
one
of
the
early
sort
of
adopters
of
the
project,
and
today
they
use
it
for
authorization
across
internal
micro
services
and
internal
services
at
Netflix.
So,
for
example,
they
use
it
to.
You
know:
control
access
to
HTTP
and
G
RPC
ad
is.
They
are
also
using
it
for
access
controlling
Kafka
where
they
have.
D
You
know
certain
topics
that
have
a
very
large
fan
out,
and
in
that
case
a
bad
write
to
one
of
those
topics
is
very
expensive
to
recover
from
so
they
use
open.
Of
course,
access
control
over
who
can
write
into
Kafka
I
mean
then
they
have
a
number,
a
number
of
other
use
cases
coming
online
and
a
number
of
other
teams
I.
Remember
of
teams
that
actually
use
open
today,
but
Opie's
not
sort
of
limited
to
API
authorization.
It's
also
used
for
for
a
number
of
other
use
cases.
D
Medallia
is
another
example
of
a
company
using
open
today,
so
they
have
a
completely
different
use
case
around
risk
management
in
terraform
and
AWS,
and
so
what
they
do
is
actually
they
compute
a
risk
score
for
terraform
changes,
and
then
they
check
whether
the
person
making
the
change
has
a
sufficient
risk
budget
to
actually
go
off
and
make
those
changes
to
the
infrastructure.
That's.
D
Important
because
terraform,
what
it's
fairly
easy
to
use.
However,
it's
very
easy
to
have
a
large
impact
on
your
infrastructure
by
changing
Leonore,
like
single
line
or
in
single
handful
of
lines
in
a
terraform
file,
and
so
you
really
want
to
have
control
over
who
can
do
that
and
how
much
risk
they
can
that
they're
allocated.
B: So, take the example you just talked about: you said whether a person would be allowed or not allowed. The set of persons in most organizations is something that will be mastered in a particular place, like SAP or something like that. So what is the typical connection between the master of that data and the policy engine?
D
That
so
yeah
that's
a
good
question.
So
usually
it
depends
on
that.
It
depends
on
the
on
the
integration
and
the
type
of
policy
or
trying
to
enforce
for
API
authorization
and
most
and
a
lot
of
you
know,
policy
enforcement
use
cases
that
are
sort
of
identity
based
where
you
have
a
collection
of
attributes
for
identity
is
sort
of
multi-dimensional
and
there's
a
collection
of
attributes
that
you
want
to
authorize
over.
You
know
those
things
are
often
used
provided
as
input
to
the
policy
queries.
D
So
when
you
run
up
when
you
execute
a
policy
career,
you
can
supply
arbitrary
JSON
input,
and
so
you
know
that
that
input
could
come
from
LDAP
or
it
could
come
from.
You
know,
John
token
or
whatever
it
open
doesn't
really
care.
It's
just
JSON
data
that
you
provide
as
input,
and
then
the
policy
applies,
meaning
to
that
then
to
that
data,
so
that
that
data
can
come
from
from
a
variety
of
different
sources.
D
Yeah,
that's
that's
that's
right!
So
then,
basically,
the
you
know
like
Oprah's
not
trying
to
solve
authentication
in
any
sense,
it's
it's!
It's
purely
for
you
know
more
authorization
and
sort
of
policy
decisions
right.
So
so
it's
so
usually
the
you
know,
there's
some
some
sort
of
authentication,
but
it
has
to
be
an
authentication
system
in
place
in
front
and
then
author,
you
know
controlling
exactly
what
a
person
can
do
is
hand
it
off
to
OPA.
D: Okay, so in terms of use cases: it's been about two and a half years, and we've seen OPA applied really across the stack. We've seen it used in Kubernetes quite heavily for admission control; in Terraform; for microservice API authorization; for risk management of the infrastructure; for host-level access control; and data protection — managing access to data lakes — is something that's coming online, and we have a number of integrations there.
D
Cases
when
we
look
at
when
we
look
at
use
cases
when
we
sort
of
start
we're
gonna
use
case,
there
are
a
number
of
different
dimensions
that
we
use
to
kind
of
evaluate
the
use
case.
So
obviously
you
know,
the
type
of
policy
that
you
want
to
enforce
is
obviously
relevant
and
that's
sort
of
like
a
first
order
thing
that
we
look
at.
So
is
this
authorization
policy
or
is
it
a
rate
limiting
policy
or
a
risk
management
policy
right?
What
is
it?
What
are
you
guys
trying
to
achieve?
D
What
kind
of
expressiveness
does
that
policy
require?
So
you
know
some
policies
you
can
express
them
just
as
a
sort
of
a
sequence
of
you
know,
a
linear
time
kind
of
sequence
of
you
know,
boolean
conditions
right,
and
so
that's
very
simple.
Other
policies
require
more
expressiveness.
They
need
you
need
the
ability
to
search
over.
You
know
potentially
a
large
amount
of
data
in
order
to
make
a
decision
right.
D
They
need
to
have
access
to
all
of
the
other
ingress
a--'s
that
exist
in
the
kumite
is
deployment,
and
so
so
and
they
so
then
they
have
to
search
over
those
in
order
to
make
a
decision.
And
so
then
the
data
are
context
that
open
access
to
is
important.
You
know
how
much
data
does
it
depend
on?
How
does
open
get
that
data?
Where
does
it
come
from?
You
know?
How
often
does
it
need
to
be
to
be
synced
and
so
on?
D
What
kind
of
decisions
do
you
need
to
generate
right
if
it's
just
an
authorization
policy
and
then
a
lot
of
time?
It's
just
a
boolean,
true
or
false.
You
know
or
allow
deny
decision,
but
you
can
actually
express
you
can
express
decisions
that
are
arbitrary
JSON.
You
know
so,
for
example,
you
know
a
rate
limiting
policy
might
return.
You
know
the
number
of
requests
that
the
user
is
allowed
to
make.
D
Or
you
know
in
an
admission
control
policy,
it
might
be
a
set
of
annotations
or
something
like
that
to
apply
or
a
JSON
patch,
even
to
apply
to
a
resource
when
it's
created
and
then
there's
a
question
of
you
know:
how
do
you
actually
integrate
with
OPA?
You
know:
what's
tag
the
actual
enforcement
integration
going
to
look
like
what's
the
on
then,
conversely,
what
is
the
management
integration
going
to
look
like
how
do
the
policies
of
data
get
into
OPA?
D
What
are
the
performance
requirements
right?
A
lot
of
the
time
the
the
integrations
or
the
use
cases
break
down
to
you
know:
low
latency,
high
frequency
decisions
that
are
relatively
simple
but
need
to
be
made
very
often,
and
then
the
other
camp
is
sort
of
you
know
more
they're,
tolerant
of
higher
latency
'z,
but
they
have
to
search
over
a
large
amount
of
data
and
the
cost
of
a
decision.
There
is
a
little
bit
higher
and
then
there
are
kind
of
different
modes
for
applying
OPA.
D: They have existing support for things like bucket policies and object-level ACLs, but what they need is more expressiveness — they basically need query-based access control in order to implement their policies. These are things like: based on the time of day, or based on the geographic region of the caller, and so on — and the existing ACL systems in S3, and therefore Minio, don't cover that.
D
We
have
integrations
or
more
kind
of
sophisticated
integrations
to
help
people
with
data
filtering
coming
online
that
leverage
some
kind
of
core
functionality
and
oppo.
Where
you
know
you
have
any
once
you
once
you've
done
kind
of
request
once
you've
solved
request
level
authorization.
The
next
step
is
to
make
sort
of
authorization
decisions
based
on
data,
for
example,
in
a
relational
database
right.
So
you
might
have
a
collection
of
you
know,
elements
and
an
API
response,
and
you
want
to
filter
which
ones
are
going
to
be
sent
back
to
the
client
based
on
policy.
A: Yeah, this is really good. There's a lot of interest in a working group around policy — associated with network policy and security policies — so I would definitely bring this presentation to our next working group meeting. I think we're going to have Istio talking to us; we may have some more discussion on policy, and then OPA as part of the Istio effort that's going on there. I definitely want to come back to you guys with some additional questions and clarification. Yeah.
D: If there's something specific we can drill into, we'd be happy to do that. One thing we've heard from people is that they want to enforce policies over network policies, including in Kubernetes — so, who has the right to create certain types of network policies. That's been something that's come up.
D
You
know
as
well
as
what
what
I
thought
yep
of
like
opus
policy
language
to
actual,
like
you
know,
enforcement
in
the
data
plane
and
that's
something
we
haven't
really
pushed,
but
it's
something
that
we
were
sort
of
interested
in
so
we'd
be
happy
to
do
sort
of
a
more
focused
follow-up
as
necessary
there.
As
you
know,
if
it's
desired
yep,
that's
a
big
one.
Okay,
thanks
for
the
time
and.