From YouTube: CNF WG Meeting 2021-08-30
C: Are we going to review the last iteration of the white paper that we did last week?
C: No, the Google Docs white paper.
B: We can take a look at it. I think that may be a follow-up.
B: Come on, format. Clear formatting. There we go.
B: I'll bring that one up. So this, this is the...
B: It's at the notes, and I guess it's the start of what could be some papers and/or use cases around the principle of least privilege, and security in general, and it probably would go into a lot of different practices and supplemental papers that we could get into. So that's linked there for this. If you don't have access, you should have access to the working group, so this link is from the working group notes. Ian went through this last week somewhat, and we've had a bit of comments and stuff added.
B: So this whole document here can be used as reference material by anyone in the community that wants to go and use it. We don't have a specific license, but I think if it was going to drop in, it would be tied to the same license as on the repo, a Creative Commons license, to cover it. But essentially this is a large chunk of reference material around security-related things. Least privilege was one of the comments, or one of the areas, and then that got down into more specific things like no root in the container.
B: So why do we want to do that? But it covers a lot of other things, and the top part is to try to get something closer to, whether it's a write-up like a white paper or, you know, whatever we want, a blog, or just the supplemental stuff like the use cases. Let's see, so, some of these sections. I don't know who all was here last week, so this may be review for you.
B: This is a good reference that we didn't go over before. Ian and I were working on all of this content, but there's a lot of overlap in this. So the TAG Security white paper, which is written by folks that are helping and involved in running the security group, including people from, like, Falco or Sysdig, that's what they do; their expertise is in security. There's some people...
B: I've seen on the group, like, from OPA, so the Open Policy Agent, and other groups, so they're sharing a lot of their experience with this, and they do have a section on least privilege.
B: So they're talking about this; this is building up on a lot of other areas. The zero trust architecture, which Frederick has talked about in this group and other places a lot. But this is building on all of the other security principles and practices that they're talking about, and they go into authentication and authorization.
B: So one of the areas in Kubernetes that we haven't talked about, but could be talking about, is using RBAC, so role-based access control. There's a lot of stuff out there on that; there's books about it, there's videos you can watch. But talking about using roles to limit access for both machines as well as real people that are maybe manually interacting.
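As a rough sketch of the RBAC idea being discussed (the namespace and service account names here are assumed, not from the meeting): a narrowly scoped Role granting only read access to pods, bound to a single service account, instead of a broad cluster-wide grant.

```yaml
# Hypothetical example: give one service account read-only access to pods
# in a single namespace, rather than a cluster-admin style binding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: cnf-demo               # assumed namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: cnf-demo
  name: read-pods
subjects:
- kind: ServiceAccount
  name: cnf-app                     # assumed service account
  namespace: cnf-demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```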
B: So that's part of this. From the account layer down, they're going into that, but they're saying at every layer of the stack you should be doing it. They talk about rootless services and containers. So this is one of our focuses: trying to stop access between containers and hosts, and multiple, you know, multiple containers and other things. And then the roles and namespaces aspect of this, as far as privileged access goes, can limit that type of problem.
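A minimal sketch of the rootless-container point (pod and image names are assumed): a pod securityContext can refuse to run the container as root at all.

```yaml
# Hypothetical example: force the container to run as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: rootless-demo               # assumed name
spec:
  containers:
  - name: app
    image: example.com/cnf:latest   # assumed image
    securityContext:
      runAsNonRoot: true            # kubelet rejects the container if it would run as UID 0
      runAsUser: 10001              # explicit unprivileged UID
      allowPrivilegeEscalation: false
```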
B: And then you have ways that you could implement this, whether you're doing custom implementations or you're using a cloud or a service-based platform. Like OpenShift has, by default, limited access on privileges.
B: They enable that; there's a lot of stuff built in, coming from RHEL, if you're familiar with all of that history, building up into OpenShift to be secure, and you're starting to see that in other places by default. And there's a lot of stuff going into Azure around this. I'm not going to go through this whole white paper right now; it's pretty huge. But if folks want to point something out on this one, we could jump into that and focus on a particular area.
B: So one of the things that we're trying to cover in content. Not to say that it's going to be in a specific use case that we may write up first, but we'll want to have it available somewhere. So it could be supplemental, it could be somewhere else.
B: So we have some stuff here, and you can see similar things if we go and look at other reference material and the other areas. Performance and networking helped put in some other areas where there may be a need to have privilege, or why it might be used. And then the question is: is there already something in the works to make it a native part of, I guess, the ecosystem, to be able to access those resources? So, NUMA access?
B: Well, that's about trying to be specific with the memory allocation, and there is stuff in the works. If you look at, I think it was, like, v1.18 forward in Kubernetes, trying to continue to move forward as far as being able to request access to those, and in the networking domain.
B
Fine-Grained
access
is
usually
desired
and
necessary
in
the
cnf
testbed
itself.
We've
done
a
lot
of
stuff
where
we've
gone
in,
and
various
tests
to
expand
beyond
what
kubernetes
capabilities
to
do,
cpu,
pinning
pneumozone
alignment
and
other
things,
but
those
things
seem
to
be
growing,
and
then
you
got
a
bunch
of
other
areas,
special,
it's
just
hardware
resource.
What
we're
talking
about
is,
if
you
have
resources-
and
you
want
access
to
something-
then
how
do
you
ask
for
that?
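For context on "how do you ask for that": in recent Kubernetes (roughly v1.18 onward), a Guaranteed-QoS pod with integer CPU requests can get CPU pinning, and NUMA alignment when the node's kubelet runs the CPU Manager static policy and the Topology Manager (e.g. single-numa-node policy). A hedged sketch; the pod and image names are assumed:

```yaml
# Hypothetical example: Guaranteed QoS (requests == limits, integer CPUs)
# lets the kubelet CPU Manager pin exclusive cores; with the Topology
# Manager enabled on the node, allocations can be NUMA-aligned.
apiVersion: v1
kind: Pod
metadata:
  name: numa-aligned-demo                 # assumed name
spec:
  containers:
  - name: dataplane
    image: example.com/dataplane:latest   # assumed image
    resources:
      requests:
        cpu: "4"                          # integer CPUs -> exclusive cores
        memory: 8Gi
      limits:
        cpu: "4"                          # must equal requests for Guaranteed QoS
        memory: 8Gi
```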
C: Well, yeah, basically, I guess that's one of my comments. You know, we're all... I consider the document fine.
A: I want to add another small point, or maybe a big point, in response to what Taylor just said. It's really interesting that things like NUMA, for example, are getting protected ways to request access in Kubernetes; they already have it. It would be interesting, right? A lot of our issue here with privileges is exactly that paragraph that Victor highlighted.
A
We
need
to
do
things
and
the
only
way,
apparently
to
do
those
things
is
with
privilege,
so
we're
suggesting.
Well
don't
do
those
things
do
other
things,
but
if
you
look
at
the
history
of
container
technologies
right
now
we're
basically
using
lxe
the
very
fact
that
we
can
access,
for
example,
network
interfaces
in
a
safe
way
is
because
the
container
safeguards
that
right
using
linux,
namespaces
and
then
and
networking
it.
It
allows
you
to
access
an
interface
in
a
safe
way
without
accessing
anything
else.
A
There
could
be
technologies
in
the
future
that
would
allow
the
same
kind
of
name
spacing
perhaps
or
similar
for
other
all
the
list
of
technologies
that
I
put
there
right.
For
example,
access
to
a
gpu
could
be
done
in
a
safe
way,
potentially
without
requiring
privileges.
Right
or
an
fpga
or
similar,
so
my
point
here
to
say
is
that
there's
to
an
extent
all
of
this
has
been
black
and
white
right,
we're
saying
either
you
need
privileges,
you
don't
have
a
choice.
A
So
then
you
have
two
options:
either
find
an
alternative
or
you
know
we're
stuck
you're
going
to
need
privileges,
but
the
platform
itself
can
improve.
The
containerization
technologies
can
improve
in
such
a
way
that
you
will
have
safe
ways
to
to
access
these
technologies
yeah
something
to
work
towards.
D: It is the principle of "no privileged flag", because the privileged flag is not fine-grained, and so it gives you a lot more privilege than you need to get your job done, and the rest of that privilege is really quite dangerous. So yeah, exactly the point that you're making here is that the reason we run into problems with privilege is, in fact, because the level of privilege we can request is not precisely what we need. It's not the least privilege that you need to get the job done.
D
It's
actually
much
greater
than
that
at
this
point
in
time,
but
yeah.
I
absolutely
agree
that
the
point
was
not
to
say
you
should
never
use
expanded
privileges.
You
should
never
have
the
right
to
use.
You
know
to
ask
for
memory
on
your
numa
node.
You
should
never
have
the
right
to
change
networking
in
a
certain
way.
It's
it
is
that,
if
that's
the
thing
you
want
to
do,
you
should
be
delegated
exactly
the
rights
you're.
Looking
for
and
no
more.
B: I think that having privilege could be the least privilege. You're trying to get something done, and you're saying: within the environment that we're in, grant it the least privilege needed to get your job done. If there are no other options except giving full privileges, then you get full privileges, and then we have other practices to say: how do we safeguard against anything, including something that's partially privileged, but definitely for full privilege. But there's never anything that is... we're not, to go with what Ian said...
B
We
are
not
explicitly
saying
you
never
can
break
a
best
practice.
Anyone
can
always
to
say
that
practice
doesn't
work.
For
me,
our
job
is
to
try
to
help
people
see
the
practices
that
are
communicated
by
the
community
as
good
practices
to
try
to
follow
so
that
you
can
know,
and
then
how
do
you
use
them.
E: I think we had an idea before that if you have to use privileges, then don't let the entire application use privileges, only the part that needs it, like the part that needs to create the raw socket or whatever. So only that pod may use privileges, but the rest of your application shouldn't. I mean, once something cannot be met, it's not a reason to throw the entire best practice out the window. We can mitigate for that and kind of contain the exception, right, Ronnie? You actually...
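A hedged sketch of that isolation idea (all names here are assumed): split the workload so only the one container that opens the raw socket gets the extra capability, while the rest of the application runs fully unprivileged.

```yaml
# Hypothetical example: only the "probe" container gets NET_RAW;
# the main application container drops all capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: split-privilege-demo            # assumed name
spec:
  containers:
  - name: app                           # bulk of the application: no capabilities
    image: example.com/app:latest       # assumed image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
  - name: probe                         # only this container can open raw sockets
    image: example.com/probe:latest     # assumed image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["NET_RAW"]
```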
F: One thing we should also call out as well, if we haven't already, is to also state what those privileges are, because it's one thing to say: oh, I have privileges. It's like, okay.
F
We
understand
that
your
application
is
going
to
need
some
additional
privileges,
but
if
I
don't
know
what
those
privileges,
what
those
privileges
are
or
my
application
by
default,
installs
those
additional
privileges
without
letting
me
know
as
part
of
the
installer,
then
that
puts
me
at
risk,
and
so
we
want
to
make
sure
that
we're
also
forthcoming
that
we're
using
additional
privileges
and
that
helps
the
operator
determine
how
they
want
to
defend
the
system
like
if
you
need
full
privileges
on
a
cluster,
then
and-
and
I
know
that
it
needs
full
privileges
and
that
the
installer
is
going
to
do
that.
G: I mean, we keep having these same discussions, even though we keep calling out that it's not a zero-sum game, right? It's a scale. So the least amount of privilege to get what you need done could be that you have maximum privilege; I mean, you are root, right? Like, it's a scale, and maybe we haven't articulated that enough, you know, in, like, the intro paragraphs and stuff. But I mean, the best practice is: you get the least amount of privilege you need to get the job done.
G: That's, you know, different than isolating privileges; that's different than relinquishing privileges you don't need after you have them. So we probably need more best practices, but it's not a zero-sum game. I mean, if the only way you can get what you want done is through maximum privilege, that's where some of the other practices that are now being listed also help us, right? Like, you have maximum privilege, because that's what's required, but it's isolated to where you need it, right?
G: So then, obviously, we want to look at that white paper from the TAG, but we always get into these debates of, like: well, I'm going to have to have this. Well, that's fine; that's not even violating the least-privilege practice. That is just saying that the least amount of privilege I need is maximum privilege. It's not, like, a zero-sum game in this regard.
D: Yeah, I think, at least for some of these privileges, the problem of having the privilege is not that it's the least privilege you have; it's that it breaks the boundary between platform and application, which is a technically different thing than having the least privilege necessary. I mean, even if every privilege was completely secure, and it didn't threaten the stability of the platform or other applications...
D
It
would
still
be
a
logical
thing
again
right.
Rooting
containers
is
a
matter
of
least
privilege,
because
it
makes
your
application
better,
but
having
root
in
a
container
absolutely
does
not
mean
that
you
can
step
outside
of
your
container.
It's
a
perfectly
safe
privilege
to
have
from
that
regard,
the
problem
with
something
like
capsis
admin,
which
is
so
necessary
for
a
lot
of
the
things
that
we'd
want
to
do
in
networking.
A: So, well...
D: Well, kind of. I mean, if you look at a standard Unix process, for instance, that isn't running as root: 99% of standard Unix processes not running as root can do their job, and that even includes ones that use certain levels of privilege, because you use a separate process where you ask for precisely what you want, and it goes and asks the kernel to do the thing that you're in need of doing. So there are examples of managing privileges even when the privilege is not necessarily terribly fine-grained.
A
Right
I'll
add
here,
another
practice
that
we
can
consider
is
using
sc
linux.
You
could
I
I
don't
know
exactly
how
to
do
this.
Sc
linux
is
pretty
complex,
but
I
wonder
how
it
can
support
limiting
specific
containers
right
because
if
you
have
a
specific
user
well,
if
you
do
have
a
privileged
container
running
under
cryo
kubernetes,
potentially
you
can
use
selinux
on
the
host
to
make
sure
that
you
limit
the
limit
what
it
can
do.
A
So,
even
though
you
requested
privilege,
you're
still
limited
because
of
that
user,
not
quite
sure
how
to
do
that
in
sc
linux,
but
pretty
sure
it's
you
could,
and
it's
probably
something
worth
worth
looking
into.
I
I
do
want
to
add
just
another
point
to
my
earlier
point
about.
You
know
things
changing
in
kubernetes.
A: We need a qualifier for all of these, with the version, saying: this is our best advice.
A
These
are
the
best
practices
considering
considering
kubernetes
1.20
cryo,
with
with
this
version,
because
future
versions
might
indeed
improve
the
fine
grain
control
that
we
have
so
right
now
we're
just
doing
best
practices,
but
it's
it's
frozen
in
a
moment
in
time
in
which
these
are
the
capabilities
that
the
platform
has.
I
think.
A
That
that's
exactly
true,
so
I'm
just
putting
a
note
to
us
that
we
need
to
make
sure
that
we're
qualifying
all
our
best
practices
really
with
versions
or
or
by
date.
If
that's.
B: Easier. I just don't think they're all going to need them, but where needed, we can specify that. And the other thing, though, to remember is that all of them can be updated. So when we come in and look, if things change, then we're going to update them. The "run processes as non-root" one is older than Kubernetes; that's the one where we're saying run with privileged equals false by default.
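As a concrete rendering of "privileged equals false by default" (pod and image names assumed):

```yaml
# Hypothetical example: explicitly refuse the coarse privileged flag.
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-demo           # assumed name
spec:
  containers:
  - name: app
    image: example.com/cnf:latest   # assumed image
    securityContext:
      privileged: false             # the default, stated explicitly
      allowPrivilegeEscalation: false
```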
B: Anytime we reference specific things, we can do that. So the SELinux thing that you were talking about before, Tal: there's a whole page about that under the Kubernetes docs, for use in the security context, and you can tie it in with different systems that can do those types of things for escalation and access control. You can do this.
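The Kubernetes securityContext hook being referred to looks roughly like this (the SELinux level shown is an assumed example, not a recommendation):

```yaml
# Hypothetical example: assign an SELinux context to a pod via securityContext,
# so the host policy can confine it even if it asks for extra privileges.
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo                # assumed name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"         # assumed MCS level
  containers:
  - name: app
    image: example.com/cnf:latest   # assumed image
```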
G: You know, coming in with the bias of how this best practice applies to, like, you know, the container networking world and, like, running CNFs, right? Because what we don't want to do is just rehash a bunch of stuff that, like, security experts and other groups have already written, right? I mean, there's plenty of examples when you start getting into weird things like ONAP or other stuff, where, like, does this container need to... it can definitely just maintain and hold privileges, like, just...
A: Right, we talked about this a bit last week. I think what we need to do is at least create a list that we think is important, but we don't have to detail it. If there are other documents that do a better job, we should just reference them, but we can at least collate all those documents in our own best practice.
A
So
the
list
that
we
created
here
practices
is
great
and
some
of
them
could
be
detailed,
or
some
of
them
can
just
refer
you
to
something
else
or
we
might
be
able
to
summarize
it
in
a
way.
That's
convenient.
G
And
I
do
think
looking
at
that
white
paper
and
I
apologize
because
I've
been
out
for
medical
reasons,
I'm
catching
up,
but
like
there's
a
lot
of
like
content
that
is
specific
to
the
cnf,
so
then
it's
this
is
also
with
us.
You
know
trying
to
get
some
best
practices
just
going
because
we
were
all
sick
of
admin
stuff
but
like
this
is
also
where
tying
best
practices
to
the
use
cases.
G
You
know
will
also
kind
of
alleviate
a
lot
of
this
stuff,
because
if
we
have
a
use
case,
that's
specific
to
our
genre
here
that
we're
you
know
we
care
about,
and
then
the
best
practices
are
drafted
from
the
standpoint
of
how
are
we,
you
know
satisfying
these
use
cases.
I
think
some
of
this
will
just
happen
organically,
but
just
throwing
it
out
there.
F
Yeah,
I
think
that's
a
good
point
as
well
like
the
many
of
the
cns
use
esoteric
protocols,
and
it
would
be
interesting
in
time
like
we
talk
about
these
privileges,
but
what?
What
does
that
mean
in
the
context
of
sctp
or
if
I
have
to
run
one
of
the
the
5g
user
plan
tunnel
protocols
then
like
what
does?
F
How
how
do
the
best
practices
that
we
have
also
apply
to
to
those
particular
environments
as
well
and
like
this?
It's
also
two-way
street.
So
if
we
generate
information
that
ideally,
is
useful
in
the
creation
of
acns,
we
also
generate
information
that
could
eventually
go
back
to
sig
network
or
sig
security
or
or
other
organizations
within
kubernetes
that
we
could
say
these
are
the
things
that
work
for
us.
D: Yeah, I mean, the primary example of that might be service meshes, which generally have privileged sidecars, because they're doing what we're doing, right? They're trying to do things with networking that were never originally conceived of, and the only way they can do that is to overstep boundaries that have been set in stone, with, you know, very blunt instruments to get around them. And yeah, that would be one place.
F
And
there's
a
lot
of
other
topics
as
well
that
we
that
we
should
also
take
a
look
at
like
what
are
we
doing
in
terms
of
in
terms
of
placement
as
an
example,
because
some
of
the
workloads
are
latency
sensitive,
and
so
how
do
we?
How
do
we
make
sure
that
the
cnfs
are
designed
to
make
use
of
that
placement?
And
what
can
they?
What
can
they
use
in
order
to
in
order
to
ensure
that
or
what
are
the
limitations,
because
there
are
some
limitations
in
how
that
placement
works.
D
That
one's
interesting,
I
think,
there's
a
whole
category
of
things
here
where
the
knobs
we've
been
given
or
that
we're
creating
or
recreating
in
some
of
the
cases
where
we're
basically
borrowing
x
technologies,
largely
knobs,
for
this
will
make
things
run
faster.
Not
this
will
make
things
run
fast
enough,
latency
kind
of
fits
into
that
as
well,
because
if
we're
talking
network
latency,
then
we
can
say
well,
if
you
put
these
two
things
on
the
same
host,
it
will
run
faster,
but
in
actual
fact
we
don't
want
it
to
run
faster.
D
We
want
it
to
run
fast
enough,
which
is
a
guarantee,
not
a
you
know,
not
a
best
effort
thing.
So
how
far
we
can
take
that
in
the
future
is
an
interesting
question,
but
we
have
to
bear
in
mind
that
again
tweaking
placement
to
put
two
things
on
the
same
host,
which
is
the
easy
thing
to
do,
is
not
actually
what
we
want.
It's
just
a
step
in
the
right
direction.
One
step.
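The "put these two things on the same host" knob being described is pod affinity. A hedged sketch (the labels and names are assumed); note that, as the speaker says, this is a placement hint that makes things faster, not a latency guarantee:

```yaml
# Hypothetical example: co-locate this pod on the same node as pods
# labeled app=dataplane. Faster east-west traffic, but no latency guarantee.
apiVersion: v1
kind: Pod
metadata:
  name: colocated-demo              # assumed name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: dataplane          # assumed label on the peer workload
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: example.com/cnf:latest   # assumed image
```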
F: Yeah, and also: at what cost? I may place two workloads on the same host, and I use something else where there's contention there, or one of the processes... like, it's very common for data planes to burn a core, and so what does placement look like at that point? And so it's not just the "what can you do", but also what are the limitations, so that when people are designing and architecting these systems up front, they're designing to more than just "what do we want to do?"
F
And
bringing
them
back
to
the
con
context
of
least
privileges,
the
privileges
that
are
there
are
not
very
fine-grained,
like
they're
they're,
better
than
they
used
to
be,
and
when
you
bring
in
things.
So
when
we
start
bringing
in
things
like
the
capabilities
like
we're,
definitely
going
to
give
too
many
capabilities
with
with
some
of
the
flags
that
are
there.
F
So
one
of
the
things
that
would
be
interesting
would
be
to
say
this
is
where
we
are
and
if
we
want
to
do
a
deep
dive
into
one
of
these
topics,
we
could
say
how
do
we
apply
something
like
like
evpf
through
falco
or
something
else
in
order
to
further
constrain
this
thing,
so
that
you
can
work
around
the
limitations
of
of
kubernetes
and
or
to
be
more
precise?
The
limitations
of
the
linux
kernel
itself
that
kubernetes
of
the
features
that
kubernetes
itself
makes
use
of,
and
so
I
think
that
there's.
F: Yeah, and taking it back to what Tal mentioned as well, about what are the additional things that you can add on. Like, I install this particular CNF, or I create a CNF, and it takes the least number of privileges. It's still insecure; like, maybe it's using CAP_NET_RAW wrong, and if someone breaks into the system, or into that specific service, then there's a whole lot of things they can do there. So what can I do to further constrain it?
F
I
may
be
able
to
do
something
on
the
se
linux
side
or
maybe
there's
something
I
can
do
on
the
evps
side
or
the
data
plane
that
it
connects
to
based
upon
the
interface
that
it
has.
Access
to
the
data
plane
can
constrain
in
certain
ways,
and
so
there's
there's
things
that
we
need
to
be
able
to
raise
to
say
that
it's
like
these
privileges
are
a
great
start,
they're,
not
enough.
You
can
further
constrain
and
fine-tune
using
these
additional
things
in
order
to
in
order
to
further
mitigate
them.
D
Yeah
so
cap
net
raw
as
an
example,
probably
dangerous
on
an
interface
that
your
average
system
grade
cni
provides
you
on
the
ground.
It's
not
going
to
expect
you
to
be
able
to
do
most
of
the
things
that
cat
net
raw
hasn't
been
designed
for
you
to
do
most
of
the
things
that
cabinet
raw
lets.
You
do,
on
the
other
end,
probably
perfectly
healthy
on
an
interface
that
you're
getting
from
some
secondary
cni
through
malta's
but
very,
very
difficult
to
narrow
down
when
it
is,
and
it
is
not
acceptable.
F: Yeah, and it's default-on, for those that are unfamiliar. And the reason it's on by default is because, in order to respond... not even to make a ping outwards, but literally to respond to a ping, your container needs CAP_NET_RAW, and so the default is to enable it. And this has been the source of multiple types of attacks, such as ARP poisoning, DNS poisoning, and similar, which lead to more complex attacks over time.
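Where a workload does not actually need raw sockets, that default can be overridden per container. A hedged sketch (names assumed):

```yaml
# Hypothetical example: drop the default-on NET_RAW capability so the
# container cannot craft raw packets (mitigating ARP/DNS spoofing vectors).
apiVersion: v1
kind: Pod
metadata:
  name: no-netraw-demo              # assumed name
spec:
  containers:
  - name: app
    image: example.com/cnf:latest   # assumed image
    securityContext:
      capabilities:
        drop: ["NET_RAW"]
```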
F
So
we
want
to
be
able
to
to
say
if
you're,
building
a
cnf
and
we
expect
these
cns
to
to
live
in
places
that
are
sensitive,
potentially
then
like
you're,
going
to
have
cabinet
raw,
almost
certainly
it'll,
certainly
on
by
default
here.
F
So
we're
not
we're
still
not
following
like
we're
closer
to
the
to
the
principle
of
the
least
privileges,
because
we're
not
giving
you
a
cabinet
admin.
But
it's
still
there's
still
things
you
can
do
to
further
protect
yourself,
and
here
are
some
examples
of
of
how
you
can
of
how
you
can
do
that,
and
I
think
something
like
that
on
a
concrete
side
would
go
a
long
way.
A: Another option is using a product like Cilium. Cilium takes over all your networking and implements its own security layer there. I don't know specifically if it could protect against those attacks, but it definitely can protect against a lot. So that's another potential... maybe something we could add to the list: consider Cilium or similar products.
A
Also.
Another
practice
we
forgot
to
add
to
the
list
is
virtualization
that
that's
another,
a
potential
solution
to
these
problems.
You
could
use
a
full-blown
virtual
machine
with
cubert
or
copy
containers,
or
something
else
that
provides
better
protection
than
containerization.
D
I
think
that
both
of
those
solutions
slightly
missed
the
point,
because
the
the
issue
we
run
into
with
using
something
like
cabinet
raw
is
not
that
you
can
do
dangerous
things
with
royal
packets.
It's
that
there
is
an
unwritten
contract
between
the
workload
that
you
are
in
the
network
that
you're
attached
to
that
says.
You
will
use
the
network
in
these
ways,
so
for
any
for
an
unspecified
cni
that
contract
might
include.
Well,
you
won't
have
cabinet
raw.
So
it's
not
a
thing.
We
need
to
defend
against
facilium.
D
If
I,
the
workload
starts
sending
you
know
random
out
packets,
because
I
feel
like
it
and
you
the
network
start
ending
up
with
the
poisoned
art
cache,
because
you're
not
expecting
that,
then
that's
where
the
problem
comes
in,
it
isn't
use
technology
x.
It
is
have
expectations
that
are
agreed
on
both
sides.
F
Mode
and
capture
capture
packets
that
you
weren't
expecting
to
to
have
access
to
as
well,
but
that's
besides
the
point
like
in
terms
of
like
we,
we
identified
a
particular
problem.
We
met
with
cap.
Ed
roth,
there
are
multiple
mitigations
mitigations
include
virtualization.
F
You
can
run
something
that's
evpf
that
further
filters
it
down,
which
is
what
sodium
does
or
you
could.
You
could
run
this
in
user
space
networking
where
you're
not
using
the
the
kernel
mechanism
there
you're
like
if
you're,
using
something
like
sriv
and
direct
mode,
then
none
of
this
applies.
It's
all
up
to
your
your
data
point
at
that
point,.
D: Well, I'm not aware that it doesn't work without CAP_NET_RAW. I can't send an ICMP packet, certainly, but, you know, for instance, ICMP responses work just fine without it.
F: ...put a weirdness on the system that I was being shown, so...
D
A
question
more
generally,
I
mean
this
sounds
like
it's
gone.
The
openstack
way
of
nobody's
really
actually
said
a
given.
These
are
the
responsibilities
of
whatever
network
stack
you're
doing
so.
Is
there
such
a
thing?
Is
there
a
list
of
functionality
that
a
cni
that
considers
itself
to
be
a
good
actor
is
expected
to
stand
up
and
implement.
F: It's incredibly basic. Like, the requirements are basically: nodes can communicate with other nodes, where a node is defined as something running a kubelet; nodes can talk to pods; and pods can talk to pods. And it was purposely left open to interpretation outside of that.
F: It's definitely undefined behavior, but de facto standards have appeared. And so I think it's not reasonable for us to go off and define what all those things are, but we know that in that set of standards, things like CAP_NET_RAW are set. Like, there's nothing in Kubernetes that says it should or should not be set, and there's nothing that defines whether it should or should not be sent.
D: What I'm saying here is: fine, I set CAP_NET_RAW in my container and I send an MPLS packet. If there is no documented responsibility of any given CNI to pass MPLS packets, which I think you can safely say there isn't, then I shouldn't expect setting CAP_NET_RAW and sending an MPLS packet to do anything I might reasonably want it to do.
F
Yeah
and
I
see
what
you're
saying
there,
the
the
reality
is,
we
actually
don't
know
what'll
happen
on
a
per
ci
basis,
and
so
that's
something
that
we
we
could
call
out
to
say
that
these
things
are
not
well
defined.
So
you
need
to
you
need
to
state
which
cni's
you
tested
it
with.
D
That
would
be,
I
think,
the
point
I'm
trying
to
get
to
if
you're,
asking
for
a
specific
cni
by
names
or
features
or
functionality,
then
we're
doing
something
wrong
honestly,
because
either
cni's
are
defined
to
do
what
they
do
and
not
define
to
do
what
you
shouldn't
expect
and
we
should
not
use
anything
that
hasn't
been
defined
which
find
paints
us
into
a
corner
or,
alternatively,
there's
something-
that's
not
been
written
down
here-
that
there
is
a
subset
of
cni's
that
implement
a
certain
behavior
that
we
should
require
right,
a
not
by
name
not
by
saying
I
will
only
work
with
psyllium.
D
Ideally,
but
by
saying
I
need
a
cni
with
this
extension,
this
this
extra
capability,
sro
vcnis,
for
instance.
I
hate
the
fact
that
they're
a
cni
for
the
simple
reason
that
I
have
absolutely
no
I
mean
they
don't
implement
basic
cni
functionality
beyond
the
api.
They
don't,
they
don't
behave
like
cni's
are
stated
to
behave.
But
my
point
is
that,
if
what
I'm
looking
for
is
something
that
doesn't
behave
like
cni's,
usually
behave
or
are
guaranteed
to
behave,
then
I
need
a
way
of
expressing
that.
F
Yeah
and
the
contract
for
cni
is
is
very
simple.
The
problem
that
we're
going
to
run
into
that
we
already
have
run
into
is
the
same
as
in
in
openstack
like
in
openstack.
The
definition
of
what
of
what
neutron
coder
could
not
do
was
was
very
specific
and
similar
to
how
cni
is
like
we
are
ip
based.
F: Well, but in the control plane you still had to specify things like what MAC address was being assigned, and similar. It did not specify the actual packet movement, but part of what people did was, they said: well, we're not using MAC addresses for this particular thing, so we'll go ahead and reuse that field for something like MPLS. And eventually they added in the plug-in system for it, but it definitely caused pain, and then they worked out: oh, we don't even actually have to go through Neutron.
D
Around
portability
yeah,
the
point
I
was
making
was
rather
more
basic
than
that,
because
it
always
looked
like
from
an
addressing
perspective.
What
you
had
was
a
bridge
to
me.
Therefore,
you
might
reasonably
expect
that
things
like
broadcast
would
work
and
so
on,
but
that
was
not
the
promise.
In
fact,
the
the
issue
with
the
promise
is,
it
was
different
and
it
was
in
everybody's
head.
It
was
never
actually
written
down,
sounds
to
me
like
the
cni.
D
The
promise
is
written
down
and
we
should
read
it
and
then
we
should
refer
to
it,
but
we
should
make
absolutely
no
assumptions
about
anything
that
hasn't
been
written
into
that
promise
right.
We
certainly
overstepped
the
boundaries.
We
expect
more
than
we're
being
offered,
and
I
think
that's
understandable.
D
F: CNI was designed specifically for one primary problem, which is: how do I get an interface set up in a container? And, interestingly, if you look at Kubernetes, there's no concept of a pod subnet in Kubernetes. There is a...
D: Actually, I don't think there's likely to be anything in the CNI that says there's an interface. I think there's likely to be something in the CNI that ensures that every pod has an address of its own, and that the CNI will make sure it can reach other things. But, you know, I could set up a pod with ECMP in it, and I don't think I would be breaking any rules.
F
I
think
I
think
it
is
bound
to
the
kernel
interface
itself
and
it
responds
with
the
interface
name
is,
but
I
could
be
wrong.
It's
been
a
long
time
since
I've
looked
at
the
specific
portion
of
cni,
but
this
is
easy
to
check.
The
spec
is
true.
D
Yeah,
the
reason
I'm
being
pedantic
here
is
because
you
know
the
very
little
I
can
do,
especially
without
privilege.
Coming
back
to
where
we
started
there's
very
little.
I
can
do
with
a
container
interface
name,
so
knowing
there
is
an
interface
or
even
what
it's
called
or
expecting
to
find
an
interface
or
any
of
these
things
generally,
not
not
relevant
to
you
know,
applications
that
kubernetes
was
intended
to
run.
I
do
not
need
to
know
what
the
interface
name
is
or
that
there's
one
interface.
G: I think one thing that's really confusing, and also comical to me, is how we seem to want to extend the functionality and add operators and CRDs for everything, all over the place, except for when it comes to the CNI, and then everything has to be done through that context. I mean, assuming that we had best practices that showed us how to do things intelligently and keep us out of trouble.
G
Like
I
mean
lots
of
other
people
have
found
ways
to
put
interfaces
into
pods.
You
know
in
parallel
to
the
cni,
but
if
it's
storage,
if
it's
something
else,
everybody's
like
oh
I've,
got
like
15
operators
or
all
these
different
extensions
that
you
can
do
and
we're
just
going
to
change
the
functionality
of
everything,
but
in
networking
we're
still
going
to
shove
everything
into
the
cni
and
try
to
recreate
neutron
by
just
adding
an
infinite
number
of
plugins.
G
I
don't
quite
understand
why
we're
so
unwilling
to
add
extensionality
extension,
definitely
not
a
word
extensions
to
the
networking
space,
but
we
are
everywhere
else.
D
You
know
what
99
percent
of
people
want
to
do,
and
then
you
know
90
of
their
code
will
be
doing
what
this
one
percent
of
people
do.
That's
that's
not
going
to
happen
how
the
remainder
of
functionality
is
expressed.
If
you
stick
somehow
to
a
cni
definition
that
everybody
is
willing
to
implement
and
implement
properly
is
an
open
question.
D
I
mean
clearly
today
it's
typically
maltus
and
a
bunch
of
other
cni's
is
how
we
happen
to
have
done
it,
and
we
should
live
with
the
fact
that
that
really
is
the
best
practice
that
we
have,
but
it
doesn't
necessarily
mean
it's
the
only
way,
or
indeed,
if
it's
the
best
way
that
it
could
be
done
right,
nsm
demonstrates
an
alternative.
There
are
probably
others
out
there.
F: Yeah, and in terms of the CNI, we also need to be careful in terms of interface versus capabilities, and saying that the two are the same. So this was something that came up early in the Multus days, when they were trying to work out how to position it and how to position the community: when they were trying to get multiple interfaces directly into the Kubernetes API itself, the people from Calico stepped forward and said...
F
Well,
we
don't
want
to
be
forced
to
implement
multiple
interfaces.
What
we're
just
going
to
we're
just
going
to
stick
across
on
the
back
of
the
of
the
current
interface
and
so
that
that
concept
of
multiple
interfaces
to
all
cmi's,
like
cni,
doesn't
prevent
you
from
starting
multiple
interfaces,
but
the
implementers
don't
want
to
be
forced
to
to
do
so.
So,
as
I
was
saying
like
it's,
it's
important
to
take
a
look
at
the
feature
set
that
you
that
you
want
and
that
feature
set
could
include
something
like
I
not
ignore
the
cni
part.
F
Running
after
after
the
plot
is
started,
and
so
it
comes
down
to
the
data
plane,
cmi
is
attached
to
you.
Will
that
data
plane
do
the
things
that
you
that
you
wanted
to
do
and
will
it
respect
some
contract
that
you've
that
you've
organized
with
it
and
there's
no
interface
there
in
kubernetes
that
beyond
you
can
speak
to
other
pods?
You
can
speak
to
other
things
and
the
limited
set
of
network
policies
they
add
in
to
go
above.
They
go
beyond
what
those
capabilities
are.
B: If we could focus on what some best practices are, whether that's using CNI or using other things, the capabilities we want there, and, like, best practices for using those, I think that would be a good focus for us. Not that you can't use a CNI, but: what are the best practices, and a use case that we can write up?