Description
Come join us as we go over the original network policy implementations, https://kubernetes.io/blog/2016/04/kubernetes-network-policy-apis/ , and some of the early work on them by our Calico and other friends - and also explore the future of cluster network policies, administrative network policies...
A
Today, we're going to do a special episode - a special, special episode, Luther. Do you want to introduce yourself and Spectro Cloud?
B
We do CAPI - we do multi-cluster management with Cluster API. We've got a bunch of stuff sitting on top of it, so we're deep in the CAPI repos, and we are pulling that all together, and we're building clusters for people.
B
Bare metal automation - no, no. Our claim to fame is that we made a CAPI provider for MAAS, and then we did a bunch of weird bare metal MAAS things, and then we added more on top of that and pulled all the providers in. So we actually physically wrote a CAPI provider, which I'm going to try to upstream one day, and then we took the rest of the CAPI providers and implemented them.
A
So, let's take a look here. This is one of the early issues, right. You can see this was Casey - Casey created this. He's, you know, a chair over at SIG Network; he works for Tigera, so he works on Calico. And I'm getting a spinning wheel, so I gotta stop while my computer catches up with me. All right. So the original - whoops - the original policy API was this. It was a...
A
It
was
like
an
ingress
right,
so
it
was
an
ingress
api.
It
was
not
an
ingress
api
but
like
it
was
a
it
only
implemented.
Let's
take
a
look
at
it
right,
so
the
original
implementation
is
here
right
and
the
reason
we're
doing
this
is:
we've
got
some
new
people
ramping
up
to
help
with
the
cluster
admin
network
policies,
and
in
order
to
do
that,
it's
kind
of
helpful
to
know
how
all
this
stuff
got
started.
So
originally
there
weren't
caps-
I
guess
so
there
was
these
doc
proposals,
network
policy
mdi.
A
Those of you who have been involved in Kubernetes for a while will remember this: we used to have these proposal docs, and the proposals would go around, and then eventually they made the upstream enhancements repo. But in the original proposal you could see - here we go - there's the namespace spec, there's the namespace network. Look at all these!
A
So
these!
I
don't
think
these
are.
Oh,
this
is
a
markdown.
This
is
the
original
proposal,
but
I
think
they
they
like
changed
it
or
something
because
I
don't
think
they
yeah
okay.
So
let's
start
with
here.
This
is
the
best
place
to
start
right.
So
so
originally
there
was
ingress
rules,
but
there
weren't
egress
rules
right,
which
is
what
I
wanted
to
show
right
and
and
so.
A
You know, I think they considered egress. Egress - let's see, that's a good question. "This proposal does not yet include egress policy," right, "which is actively undergoing discussion in the SIG. These are expected to augment this proposal in a backwards compatible way." Right. So the first original network policies - this was in 2016, right.
A
They
didn't
have
egresses
right,
so
it
had
pod
selectors,
it
had
ingress,
but
it
didn't
have
this
idea
of
a
policy
type
and
an
egress
right
so
and
that's
important
to
when
it
comes
to
trying
to
understand
the
the
policy
api
because
it
if
for
those
of
you
that
use
it,
you
know
that
you
actually
have
to
let's
go
find,
let's
go
find
one.
So
let's
go
find
a
network
policy.
I
have
one
here.
So,
let's
see
so,
if
I
go
to.
A
Where is it...
A
My prototypes directory - cd in, right? So if I go in here and I look at a policy - let's say the nginx policy - well, you know, I hope I actually put it in. Maybe I didn't even put it, because it's a default. Well, so this is a good example. If I look at this policy here, you can see there's no type - there's no type. And sort of the reason - so like, if I look in here, example network policies - right, I think Ahmet B has a really good website.
A
We
can
look
at
the
original
box
here,
so
you
have
ingress
and
you
have
egress
and
you
have
these
two,
but
you
also
have
this
weird
policy
types
struct
right,
so
I
could
add
this
struct
here
and
that
would
be
the
same
equivalent
policy
right,
but
I
can
do
this
policy
types.
Okay.
Let
me
add
this.
A
Okay, and if I do kubectl delete -f nginx-policy.yaml and kubectl create -f nginx.yaml - okay - and then I go and edit it again, we'll see that this information gets defaulted for me, right. So you might say, well, why is that? Well, that's why: because the original PR from a long time ago was made to just support ingress, so that became the default moving forward. And that's something important to think about with APIs, right - if you only support one version of an API, if you only support one thing in an API, that thing is the default forever. So that's the first lesson learned here: if you're making an API, you have to keep in mind that if you assume something's always going to be the case, then it will always be the default case - even once that's not the case anymore.
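For reference, a minimal sketch of the defaulting being demonstrated - the file name and selector are made up, but the policyTypes behavior is the standard NetworkPolicy API:

    # nginx-policy.yaml as written: an ingress section, no policyTypes
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: nginx-policy
    spec:
      podSelector:
        matchLabels:
          app: nginx
      ingress:
      - from:
        - podSelector: {}

    # After kubectl create -f nginx-policy.yaml, reading the object back with
    # kubectl get networkpolicy nginx-policy -o yaml shows the server-side default:
    #   policyTypes:
    #   - Ingress
    # Egress is not in the list, so egress traffic remains unrestricted.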
A
That was the original one, and then we had the first bug fix, right. So let's go here and we can look at how they validate the network policies. So if I go here and - let me see, the Kubernetes source, right, I can show this. If I go in here: so just a few weeks later, I think - and I think Casey did this one too - he had this bug fix, right. So this is the validation. This is another interesting little tidbit of history.
A
Here, we used to have this - this used to live in the api extensions validation.go. Now it lives in a different place. So if I go to git grep here, now it lives in the networking validation.go, right. So it used to live there; now it lives in pkg/apis/networking/validation.
A
I
was
wondering
the
same
thing,
and
so,
if
I
look
in
here,
do
we
still
have
it?
I
guess
we
still
have
it
so
then
the
question
is:
do
we
still
maintain
this
like?
What?
What
do
we
do
in
here?
I
guess
all
the
v1
beta
1
api
still
have
conversion
hooks
in
them,
and
then
we
have
a
thing
called
fuzzer
and
then
we
have
this.
I
don't
know
what
the
hell.
This
thing
is.
Api
machinery
packaged
you
till
one
time.
A
So what that meant was, this append kept happening properly, and then all of the errors properly got glommed on, and that allowed it - so a lot of different things could go wrong when you created a network policy, and that would be caught server side, and then it would be rejected; the object wouldn't be created. So, like - here, I made one. And now, on to the next policy. So, a good way to see how the validation stuff works...
A
You
know
is
in
this
case
you
it's
not
it's
not
necessarily
trivial,
to
figure
out
how
to
see
the
validation
code
working
because
most
of
the
things
that
you
might
do
would
be
either
like
syntax
errors.
That
would
just
be
caught
generically
by
the
api
server.
But
this
is
one
good
example,
because
ricardo's
this
is
ricardo's
end
port
right
pr
and
this
one
like
one
example
of
why
you
have
custom
validation
right
is
that
you
have
logic.
A
I think it's like here, right: less than endPort. You can see this code in ValidateNetworkPolicyPort. So then, when he does this, right - if I go and I create this, you can see this getting tripped, you can see how this gets called. So if I do kubectl create -f that - see, it gets mad at me. It doesn't let me create the object. It says it's invalid: spec ingress zero, endPort, invalid value: 222, right! So that's this validation code.
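A minimal sketch of the kind of spec that trips that check - the names here are invented; the rule being validated is that endPort must be greater than or equal to port:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: endport-demo
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - ports:
        - protocol: TCP
          port: 8080     # endPort smaller than port is rejected
          endPort: 222   # by server-side validation at create time

kubectl create -f on that fails with an Invalid value error pointing at the ingress ports endPort field, which is exactly the validation path being shown here.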
A
So anytime you update the network policy stuff - or any of these APIs in Kubernetes - you go into pkg/apis/whatever, and then you have to add some boilerplate in here to just check the fields after they get intercepted by the API server and sent down to your thing, right. So there's the first bug fix. So this is a good history. Are you having fun so far, Luther?
A
So - yeah, so this is 2016, right. So then what happens between 2017 and 2020? Well, during this time, this is when we go from having an API to Cilium and Calico and other early network policy adopters actually, you know, sort of implementing the API, right. So here's Cilium's original one, and then, if you go to page 19, you can see when they did this, and it's kind of interesting, because you can see how these projects have evolved over time.
A
Right,
like
one
of
the
things
that
was
interesting
to
me,
is
celium
kind
of
out
the
gates
was
a
network
policy
thing
right,
like
I
mean
like
a
network
policy
centric
like
kubernetes
centric
project,
it
looks
like
because
you
go
back
and
it's
like
psyllium
0.8.0
and
it
was
like
kubernetes
from
day
one.
It
seemed
like
almost
to
me
and
and
so
and
and
if
you
look
back
in
the
early
mailing
list,
thomas
graf
who's,
you
know
in
charge
of
celine
was
very
active
in
the
early
days.
A
I think in the network policy stuff - I mean, I guess he's still active, but, you know. And then of course the Calico stuff was different, because Calico came out back in the OpenStack days, so I think they evolved to become a Kubernetes provider.
A
So
you
know,
if
I
go
back
here,
there's
a
lot.
I
can
go
a
lot
further
back.
I
think,
but
well
I'm
not
100
sure,
though
about
that,
so
nobody
from
calico
is
here
to
justify
whether
that's
true
or
not.
Oh.
A
Okay, so I can see here I have this Antrea gateway. What is this br-phy? I don't know what that is. And then I have this ovs-netdev. So this is a device - you can see the Open vSwitch device is, like, plugged in right there, and I have the Antrea gateway plugged in there, and then you can see, like, you know, when I make new containers...
A
Here we go - I guess all my containers are landing on the same node. So you can see here, each one of these is a device for a container, and each one has its own little name. So you can see these are all my CoreDNS containers, and then each one of those devices gets plugged into - this is the Open vSwitch thing - so they're plugged into that switch, that way, right. And so let me see here. So that's that.
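A rough sketch of this kind of poking around - the interface and bridge names (antrea-gw0, br-int) are Antrea's usual conventions, so treat them as assumptions:

    # List host interfaces; with Antrea you'd expect the OVS gateway
    # port plus one veth-style device per local pod
    ip link show

    # Ask Open vSwitch which ports are plugged into its bridge
    # (Antrea's integration bridge is conventionally named br-int)
    ovs-vsctl list-ports br-int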
A
It's
like
a
layer,
two
thing,
whereas
in
calico
it's
all
doing
it
over
ip
stuff
right
and
there's
no
switch.
That's
like
sitting
between
your
pods
in
the
outside
world
right,
it's
all
broadcastable!
So
all
right!
Now,
if
I
go
back
so
that's
that
so
then
these
two
so
interesting
can.
A
So, okay. So now, if I go to - so in 2017 Google announced Calico support, and then we created the network policy working group not too long after that. I'm sorry - it's very long after that; by then it's 2020, what am I talking about? So here's the network policy working group. So you know, this was like early days. Oh, this is 20...
A
This was back in 2020. She left us recently - she's now at Google. Hi, Susan.
A
That's about the time we started running our network policy end-to-end tests, and about a year later we announced - sort of our attempt at making network policy conformant, a way to sort of scan network policies in a collective way - and we added these table tests to it. That was all in 2021, and then this is kind of around the time that the network policy working group sort of started to take off.
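Those end-to-end tests live in the upstream Kubernetes e2e suite; a rough sketch of invoking them against an existing cluster (the focus regex is illustrative, not the exact one used on the stream):

    # Build or download the e2e.test binary from kubernetes/kubernetes, then:
    ./e2e.test \
      -kubeconfig ~/.kube/config \
      -ginkgo.focus='\[sig-network\].*NetworkPolicy'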
A
So,
as
you
know,
now
we
can
now
we
can
run
these
tests,
so
we
made
it
so
that
now
you
can
run
these
tests
and
these
tests
will
print
out
tables
and
the
tables
will.
Oh
god
I
hate
it.
When
it
does
this,
the
tables
will
help
you
visualize
the
policies
right,
so
you
can
say
group
ctl
get
nodes,
coupe
ctl
edit
node.
A
I
don't
know
why
these
tests
get
hung
up
on
this
stuff.
I
always
for
I
mean
there's
a
good
reason.
I'm
gonna
unpaint
this
node
so
that
we
can
just
run
the
test
so
yeah.
It
should
start
yeah,
so
it's
starting
so
you'll
see
these
network
policy
tests,
give
you
some
notion
of
conformance,
and
then
we
built
this
tool
called
cyclonic.
A
I
p
blocking
udp
deleting
pods
multi
peers,
all
this
stuff
and
then
those
tags
it
prints
out
which
of
all
those
different
types
of
network
policies
which
ones
worked
and
which
ones
didn't
so
it
has
its
way
of
generating
policies
and
then,
based
on
the
policy,
that's
generated,
defining
a
matrix
that
can
automatically
validate
the
policies
using
the
same
logic
that
we
use
in
the
upstream
kubernetes
e2es.
So
then
that
came
out
so
so.
At
that
point,
I
think
we
were.
A
We were getting a lot done in terms of, sort of, figuring out how to make the network policy API a first class citizen in the Kubernetes world, and then people asked us for these things. So I made a blog post about this yesterday, because I wanted to make sure I had something easy...
A
I
could
access
during
this
stream,
so
we
kind
of
started
looking
at
this
problem
of
like
what
are
the
things
people
are
asking
for,
because
you
know
the
network
policy
api
allows
you
to
do
some,
some
basic
things
that
we
all
know
about:
selecting
pods,
selecting,
namespaces,
selecting,
ciders
and
then
so
we
needed
to
figure
out
like
okay.
What
are
all
these
people
asking
for?
A
Because
we
had
this
network
policy
working
group
that
had
been
running
for
a
couple
of
years,
and
so
we
created
sort
of
we
had
all
these
group
hangouts
and
then
everybody
would
come
and
yell
at
us
about
all
the
things
the
policy
api
didn't
do,
and
I
made
this
horrible
diagram
that
tries
to
express
this.
So
in
these
blue
things
here
you
have
all
the
things
that
people
wanted.
So,
for
example,
they
wanted
action
policies.
They
wanted
the
ability
to
prioritize
policies.
My
policy
is
better
than
yours.
A
You
know
I
want
to
block
everything,
and
I
know
you
won't
allow
things
but
you're
not
as
important
as
me,
so
my
policy
wins
right
now.
One
thing
people
asked
for
was
which
was
kind
of
an
interesting
one.
Is
the
idea
of
a
secure
gateway
policy
where
like?
If,
for
example,
this
web
app
wants
to
talk
to
these
web
apps
on
this
other
place,
then
it
always
accesses
them
through
a
specific
gateway.
A
Somehow,
and
then
people
wanted
default,
port
ranges
that
were
allowed
and
disallowed,
and
then
some
people
asked
for
things
like
you
know,
being
able
to
whitelist
all
dns
access
so
they're,
like
I'm,
not
worried
about
people
use
doing
dns.
I
want
that
to
always
work,
and
then
people
asked
for
what
else
did
people
ask
for
default
service
policies
right?
A
So
one
person
was
asking
for
this
idea
that
every
pod
that
comes
up
should
be
able
to
access
kubernetes.default.service.local
right,
because
that's
just
like
a
fundamental
internal
thing
that
every
pod
should
be
able
to
access
right,
because
that's
that's
where
the
internal
dns
lives
right.
What
is
wrong
with
this
cluster.
B
Well, I mean, it's valid, but - it's just, it's cool that you've documented all this and tried to. I really like how you took it and made the connection - you like me, I like you, you're a good dude. No, I like how you went in and got the connections, and then you're like, "people were asking for this," with a little call out to it. That's just a fascinating way to do that. Because it's all complicated stuff, and it's all just, you know, lights going down a tube somewhere - you've got to figure out what you're doing.
A
Okay, so anyways, what we boiled it down to is this. People wanted port ranges. The other thing people asked for is namespaces by name, and we were able to get that in. So, if you actually look here, we added a thing: now, if you go into a cluster - kubectl get ns, kubectl edit ns, let's say default - you'll see that now there's a kubernetes.io/metadata.name label.
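What that looks like on any recent cluster - every namespace carries a label whose value is its own name, set by the API server:

    $ kubectl get namespace default -o yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: default
      labels:
        kubernetes.io/metadata.name: default   # maintained by the API server
    ...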
A
The reason we did that was so that people could make network policies against a namespace without having to add their own labels to the namespace, right. So, like, for example: I'm going to make 100 namespaces and I want to build a default network policy for each one of those. But I don't want to also have to add labels to those namespaces, because maybe I don't have the permissions - the RBAC rules - to add labels to those namespaces.
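A minimal sketch of using that label from a NetworkPolicy - the policy and namespace names here are invented:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-monitoring
      namespace: team-a
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              # select the namespace purely by its name; no custom labels needed
              kubernetes.io/metadata.name: monitoring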
A
But if the API server by default always labels every namespace with its own name, then I automatically now have a mechanism for building network policies against that, right. Yeah. So that's this, right - that was this one; that was done by the network policy working group. So why does this suck so bad? What is going - oh.
A
Anyway, so, okay - then moving forward. So then finally we hit - so now we're - and, you know, here's our - so, finally, we move forward to the KEPs, right. This is kind of what we wanted to get to. So finally, by 2022, we started getting some KEPs coming in, right. This is how long it takes to change something in a big open source project. So we have three KEPs. Three years later, we have three KEPs, right - that's how long it took. So we had - we got the network policy...
A
Okay, whatever. And then we have admin network policy, and network policy status. So the idea of the network policy status - the network policy status KEP, right.
A
There are certain things they - I think in eBPF, they didn't want to originally support port ranges. I don't remember why that was the case; I think that changed. But it became clear that there were certain situations where certain providers, you know - and it could just as well be Antrea; maybe someday there'll be something Antrea doesn't support. Any CNI provider might, at some point...
A
Yeah
and
publish
it
so
that
if
a
user
makes
a
new
policy,
but
it's
not
supported
it's
very
clear-
that
the
status
of
the
policy
is
that
it's
not
not
implemented
yet,
and
then
that
way,
you
don't
have
this
cve
problem
where
they
create
a
policy,
and
you
have
a
silent,
open
door
right,
yeah
yeah,
so
so,
okay,
so
but
then
that
came
along
and
then
the
adword
network
policy
is
the
one
which
is
kind
of
worth
talking
about
today,
a
little
bit
admin
network
policy.
A
Is
this
one,
and
so
the
admin
network
policy,
I
think,
there's
a
diagram
in
here
we
can
use-
is
the
idea
of
it's
kind
of
the
realization
of
this
thing
that
people
were
talking
about
of
this.
This
idea
of
hierarchical
network
policies,
right
where
you
know
you
have.
A
Here's the user stories. Deny traffic at the cluster level: so you have a sensitive namespace and you want to make sure that nobody ever accesses it, right, and you want to have that. And then you want to have, like, allowing traffic at the cluster level - so maybe you want all of these to always be accessible. And in other cases...
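A sketch of that first user story as an AdminNetworkPolicy - the field names follow the sig-network policy-api v1alpha1 CRD as best I recall it, and the namespace name is invented:

    apiVersion: policy.networking.k8s.io/v1alpha1
    kind: AdminNetworkPolicy
    metadata:
      name: protect-sensitive-ns
    spec:
      priority: 10          # lower number = higher precedence
      subject:
        namespaces:
          matchLabels:
            kubernetes.io/metadata.name: sensitive
      ingress:
      - name: deny-all-ingress
        action: Deny        # admin rules carry explicit actions: Allow, Deny, Pass
        from:
        - namespaces: {}    # traffic from any namespace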
A
Delegation
to
an
existing
case
network
in
the
diagram
below
egress
traffic
justin
for
the
service
pub
in
namespace
bar
is
delegated
to
network
policies
implemented
in
food
name
space
one.
If
no
network
policies
touch
the
delegated
traffic,
the
traffic
will
be
allowed
okay.
So
in
this
case,
I
guess
the
idea
is
in
the
diagram
below
the
egress
traffic
destined
for
the
service.
A
I
don't
get
this
one,
really,
I'm
going
to
skip
that
cause.
I
don't
understand
it
created,
isolate
multiple
tenants
for
the
cluster,
so
in
this
case
you
want
to
build
tenants
in
my
cluster
that
are
isolated
by
default,
so
isolation
by
default
between
name
spaces.
That
makes
a
lot
of
sense.
I
mean
like
open
shift.
I
think,
does
that
yeah.
A
There we go - I'm going to try to recreate a new cluster. But that's where we're at now: the admin network policy. That's the next thing that's happening. So Andrew Stoycos is doing that, and over here, VMware is going to be helping him with that, and there are other folks upstream, on the Antrea side and all over the place, that have been working on this - like, Yang has been working on it, and others worked on it for a while. And so this is living, I believe, in a CRD.
A
B
B
A
B
A
A
B
A
B
A
I
don't
know,
have
we
and
is
anyone
from
andrea
here
today?
I
don't
know
if
we've
implemented
them
or
not,
yet
let's
take
a
look
and
see
so
the
type
is
admin
network
policy
right.
So
if
I
go
into
the.
B
Well, just read it out there: spec, egress, ports, port number, protocol - forbidden, must be undefined - or structural. So it's probably validating with all that OpenAPI schema stuff right up there, and then it doesn't like what's down below. So go down to the bottom - the egress - go back into that YAML file, go to the bottom of the egress sections.
B
...your machine here. So, open up - I just cut out that conformance YAML.
B
Yeah, I think you can just do a kustomize - I don't do this all the time; we actually automate this stuff at work. Anyone know the command? kustomize something, you just apply it, and we'll go do it. I can try to find my code in my...
B
See, there's - Dimitri, salt of the earth, that guy. What did he do? kustomize build . and then pipe it to kubectl apply -f -
A
Okay, we tried doing that.
B
I think he thinks there's a secondary config CRD, wasn't there?
A
But that was a good idea. Maybe that's the way we're supposed to do it.
B
Could you do one of two things: could you either move your console - your CLI - up a little bit, or could you kill the things that you've pinned? Because what you're typing is right in the middle of the - there we go. Jesus Christ. What? I'm just trying to upgrade your production value around here. Here we go: kustomize.
B
Okay, yeah, it's still failing for him - it still hates us: forbidden, must be undefined. It's the same thing. How did you get it? Just do the build without the dot and whatever - and no, get rid of the kubectl apply stuff. Let's just see what it spits out. All right - yeah, I'm gonna do that.
A
So, for folks that want to learn about those - and that's how it is with Antrea NetworkPolicy - if you look at those, they're here, right. I just wanted to make sure folks know that we have them here. And wait, this is an old 0.13 - why is this...
A
Here we go. So, if people are interested in running these - no, wrong one - if people are interested in running the cluster network policies and stuff like that - the cluster-scoped ones, the ones with the logging, some of the ones that I showed you earlier in the diagram on the left - a lot of that stuff has been implemented in Antrea, and we have old Antrea Live episodes where we've gone through some of those. So, like, we did a demo of the fully qualified domain name policies. And, let's see, they have tiers, right - so they have tiers, which is kind of similar to the admin network policies that we were just looking at.
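For flavor, a sketch of an Antrea ClusterNetworkPolicy using a tier - the crd.antrea.io group and the securityops tier are real Antrea conventions, but the exact version and the selectors here are assumptions:

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-drop-untrusted
    spec:
      priority: 5
      tier: securityops       # tiers order whole groups of policies
      appliedTo:
      - namespaceSelector: {}
      ingress:
      - action: Drop          # Antrea rules carry explicit actions
        from:
        - namespaceSelector:
            matchLabels:
              env: untrusted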
A
Let's see - I don't know what this ICMP... oh, ICMP! They support ICMP policies; I didn't know that. Multicast, egress traffic, and...
A
Yeah, and they actually follow some of the same things. So, as you can see, since the policy CRDs are similar to these admin network policies, they actually do the same thing, where they give the ability to edit those only to the cluster admin.
A
Yeah, and then here's the way that they implement priority. So a lot of the stuff that's in the upstream policy comes from the way Antrea, Calico, Cilium, I guess, you know, have already been doing this. Now, I don't know if Cilium does the tiered policies and all that stuff - I haven't looked much at that. What's...
A
I don't know why that would be - like, what happens if I set it smaller? I don't know why that would... who knows. Anyways, we have another show on that. Okay, cool. So that's the history of the policy API, and here's where we're at today - and now you all have seen it. And if you guys don't mind, let's see if we can rerun those tests before we go.
B
Just - you know, you come in and, like - I mean, I'm just so used to people saying "let's just roll Canal, so we have network policies," right - just taking the Calico network policies and then using them with Flannel. So, yeah, I don't know why someone made that choice at one point, instead of just adding it to Flannel. That's just a little piece of history I have. I don't know.
A
As you can see here, there's a lot of different stuff to validate when you have a single policy, right - you have to make sure that all these different other namespaces don't get corrupted by it. So, you know, we found that there were bugs in really all the CNI providers - I mean, we found subtle bugs in really all of them at one point or another, I think, with this.
A
And so it's not easy to implement these, right, and there are a lot of different quirks in the API. Like, for example, one of the quirks I'll show you that a lot of people don't know about - what the hell is this, see this right here - it turns out that different CNIs may or may not implement loopback policies. So if I declare a policy that says pod x/a should be able to talk to pod x/a - or x/a should not be able to talk to x/a...
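A sketch of the hairpin case in question, using the x/a naming convention from the upstream tests - whether a pod's traffic to itself is subject to policy varies by CNI:

    # Namespace x, pod a: allow ingress only from pod b.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-only-from-b
      namespace: x
    spec:
      podSelector:
        matchLabels:
          pod: a
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              pod: b

    # The open question per CNI: can x/a still reach x/a (itself)?
    # Some plugins short-circuit loopback traffic and never consult the
    # policy; others enforce it, so results differ between providers.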
A
That's just one example of how this gets really dicey, right. All right! So that's where we're at. So, you got anything else, Luther?
B
They're just too hard to use - let's be honest. When is - do you think the ACNP stuff is gonna actually start coming?
A
That's
the
the
rescue
that
didn't
work.
We
couldn't
get
that
to
work
either
really.
B
Well, for two - oh man, this is not good for demos. Maybe this show's snake bit. Demo snake bit - snake bit, like, if something just keeps not working for you over and over again, somebody just says you're snake bit. Never heard that before?
B
Well, how about we boot it up and see if we can get it running?