From YouTube: Kubernetes SIG Network 20170209
Kubernetes SIG Network meeting from Feb 09, 2017
C: Yeah, I can give a very quick update, if you can hear me OK. We had a CNI maintainers meeting yesterday. The maintainers attended, and also a couple of folks from Mesosphere as well, and I think we've come up with something for the CNI spec changes that everyone's almost happy with. It's up as a PR in the CNI repo at the moment; the latest is PR 369, if anyone wants to take a look and comment. The intention, the hope, is that there's broad consensus on that and we just need to work out some details.
C: So it's a pull request that's open at the moment that's actually clarifying some things in the conventions documentation. It's not directly a spec change; it shouldn't require code changes to libcni or anything. But yeah, it's a PR that's up and not merged yet, so if you want to comment on it, now is the time to take a look.
B: There's a link to that in the chat now for anyone who's interested. So next on the list was a quick update on network policy. I don't see Dan Winship here, but I believe the state of that is relatively unchanged from last time: there is an open PR, which attempts to move it to v1, and it is awaiting feedback. So if people are interested in that, please take a look. I know Dan added some outstanding questions on that.
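For readers following along: the NetworkPolicy API being moved to v1 here eventually stabilized as networking.k8s.io/v1. A minimal sketch of what a v1-style policy manifest looks like, built as a plain Python dict purely for illustration (the labels and names are hypothetical examples, not from the meeting):

```python
# Sketch of a minimal NetworkPolicy manifest (networking.k8s.io/v1 style),
# built as a plain dict for illustration. It selects pods labeled role=db
# and allows ingress only from pods labeled role=frontend.
def make_network_policy(name, namespace):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "podSelector": {"matchLabels": {"role": "db"}},
            "ingress": [
                {"from": [{"podSelector": {"matchLabels": {"role": "frontend"}}}]}
            ],
        },
    }

policy = make_network_policy("allow-frontend", "shop")
print(policy["spec"]["podSelector"])
```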
B: Since Dan Winship's not here, we can move on to the other stuff: the question about what a tenant is, and whether there's a general consensus that a namespace is a tenant boundary. I'm not sure who added this topic. My understanding is that there's not consensus that a namespace is a tenant boundary, and that, from previous discussions, tenants are less well defined than that and could potentially need multiple namespaces for a single tenant. Who added this?
D: So I'll remind you of what I'm hearing from people who want to use Kubernetes in a non-trivial way. I'm dealing with a couple of organizations in the software-as-a-service business, and they have what I refer to as two-dimensional multi-tenancy: they have multiple service offerings, and cross-cutting that, they have multiple customers. So we don't have a simple cut; there are distinct degrees of focus, or kinds of scope.
D: Just to discuss the relevance here: for example, in such an organization, each pod is dedicated to one customer's use of one service, and they plan to have a namespace per service and customer, where it is a meaningful pair. They also have some namespaces that are central per service, that are not dedicated to a given customer, but do stuff like monitoring and management for that service. They also want to have some namespaces that are central per customer, that are not dedicated to any one service.
D: Those handle interaction with that customer and are shared across services for that customer. So you can think of this as a two-dimensional matrix describing what's going on within that organization, where some cells are filled and have a corresponding namespace, and some cells are just blank. The access that they need between namespaces runs across rows or across columns, and then they also want to have operators for the whole organization who have access to the whole matrix.
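The matrix structure described above can be sketched as a small data model. This is a toy illustration only, not any existing API: namespaces sit in (service, customer) cells, and access grants run across a row, a column, or the whole matrix.

```python
# Toy model of two-dimensional multi-tenancy: each namespace occupies a
# (service, customer) cell; access is granted across a row (one service,
# all customers), a column (one customer, all services), or globally.
# All names here are hypothetical, for illustration only.
namespaces = {
    ("billing", "acme"): "billing-acme",
    ("billing", "globex"): "billing-globex",
    ("search", "acme"): "search-acme",
}

def visible_namespaces(scope, key=None):
    """Return namespace names visible under a given access scope."""
    if scope == "service":      # central-per-service: a whole row
        cells = [ns for (svc, _), ns in namespaces.items() if svc == key]
    elif scope == "customer":   # central-per-customer: a whole column
        cells = [ns for (_, cust), ns in namespaces.items() if cust == key]
    elif scope == "org":        # organization-wide operators: whole matrix
        cells = list(namespaces.values())
    else:
        cells = []
    return sorted(cells)

print(visible_namespaces("service", "billing"))  # ['billing-acme', 'billing-globex']
print(visible_namespaces("customer", "acme"))    # ['billing-acme', 'search-acme']
```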
D: So I think of the namespaces as kind of the finest-grained unit. But their tenancy is not simply namespaces, and nor is it simply a hierarchy of namespaces, right? Some people will say an organization is a bunch of namespaces that can all see into each other, or that all have the same uniform relationship to each other, and that's not what we need here.
D: I would disagree. We've had a lot of conversation, and I would say there is an emerging consensus on the view that the namespace is the atom of tenancy, but we need to be able to construct more complicated relationships, and we're not going with, again, a simple hierarchy. We want to allow more fine-grained and structured relationships. So I think there's pretty much consensus on the atoms; it's how we build the molecules and bigger structures that we're still trying to figure out.
B: Yeah, and we've had some joint calls with sig-auth in the past, and I think they kind of agree with this model of the namespace being the atom of tenancy. From my perspective on these discussions, it's been clear that there are a number of different ways that people want to use Kubernetes as a multi-tenant solution, and it's not necessarily the Kubernetes community's job to declare one right solution, but rather to provide the tools to make all of these possible.
D: I put that one there. I think when we last talked there was a general feeling that we would prefer a set of shared DNS servers. My problem is, I just don't know how to make that actually work, because, as far as I know, Kubernetes will sometimes masquerade the client address, so a shared DNS server can't identify the client.
A: Yeah, I guess the question here probably would be: I think you mentioned it in the context of trying to kick off some notion of tenancy in the namespaces, and looking at the document, I kind of wanted to understand, if we went with a per-namespace DNS server today, how would that look once we have the more complicated tenancy issues resolved in the future?
D: You know, to enforce the appropriate or desired access. And again, if you have the server specific to the client, there is a tantalizingly close opportunity to let what's visible in DNS be a natural consequence of what's visible through the API. I tried to have a conversation along those lines on the sig-auth mailing list, where I was pointing out that that's really just not the way the API works nowadays, and we would need to push some non-trivial changes up through the Kubernetes API server into etcd in order to make that work. That might be a viable path, and there might be something else that could be done in the meantime; OpenShift seems to have done something else.
D: In the meantime, that's kind of like giving these kinds of synthetic views. So the observation is that the way the DNS server in Kubernetes works today is that it does a list and watch on the API server for services and tracks the endpoints of each service. If only it were the case that the API server were to respond to that list-and-watch request with a view that shows only the services that this client is allowed to see...
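The idea being floated here, as the speaker stresses, is hypothetical: the API server does not do this today. A toy sketch of what an authorization-filtered service list would mean, so a per-namespace DNS server only learns what its own service account may see (all service and namespace names are invented for illustration):

```python
# Hypothetical sketch of an authorization-filtered service list: the DNS
# server would only learn about services its own service account may see.
# This is NOT how the Kubernetes API server works; the speaker's point is
# precisely that it would require non-trivial changes to get here.
services = [
    {"name": "db", "namespace": "tenant-a"},
    {"name": "web", "namespace": "tenant-a"},
    {"name": "db", "namespace": "tenant-b"},
]

def list_services_for(client_visible_namespaces):
    """Return only the services visible to this client's namespaces."""
    return [s for s in services if s["namespace"] in client_visible_namespaces]

# A DNS server running in tenant-a's namespace would see only tenant-a.
view = list_services_for({"tenant-a"})
print([s["name"] for s in view])  # ['db', 'web']
```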
D: That is, what the service account of the namespace in which the DNS server is running is authorized to see. Then the existing code would just work, but that's not going to happen any time real soon; I mean, who knows, maybe. So there's another way to approach it, which is to have something else that synthesizes the desired view, as OpenShift does, or, again, something a little bit more specific that, just for DNS maybe, solves the problem more directly.
D: But the point, what I'm trying to say, is that whatever we decide is the way of expressing this molecule concept, it can be implemented either in the DNS server, or in a proxy as in OpenShift, or in the Kubernetes API server making the right stuff visible.
A: In terms of, I guess, specifically what needs to be answered at this point, shared versus per-namespace, it seems to me like it's a little bit of an implementation detail. Let's say you have some mechanism to poke into the shared DNS server, but with different endpoints, for example, and it's the same server but somehow it's able to identify the client. Is that sort of the question you want to resolve?
D: Well, I mean, there are several questions to answer, and we'll need them all resolved. So if you can find a way to make a shared server work, let's hear about it; obviously we'd all prefer to have fewer servers, so that's one of the questions. Another issue, if you have a shared server, again, is this issue of how you get the right restrictions to apply. Right, if the server is specific to the client, it can potentially inherit the restrictions that are already applied to the pod.
A: Right, yeah, I guess my only concern at this point is how this goes forward in the future. And the other thing is scalability with respect to the number of namespaces: is one required for every namespace? Could they possibly be shared in the future? You know, if you have a cluster with thousands and thousands of namespaces, to start with.
D: And again, I think that's kind of a use-case-specific question, unfortunately. In the use cases that I'm most concerned with, the amount of work done in a namespace is much bigger than the amount of work done by the DNS servers; there's a lot more going on in each namespace. So throwing a few DNS servers into each namespace is not a real problem. But in other cases that I'm aware of, that I'm not focusing on at the moment, it's exactly the opposite, right?
D: No, I would disagree, because again the idea is that the namespace is the atom of tenancy, and then you have control over what kind of access can happen from namespace to namespace. Again, you know, you're mixing up two issues here, right? One is just how you implement that; the other is how you identify the tenant. But again, if the atoms are namespaces, it seems to me, you know...
G: To build bigger things; but whether every attribute is inherent in that smallest unit is really the question. Can you build it at a slightly higher level, which is what you were all saying, I think? I mean, the question really is: do you need a DNS server for every namespace, or do you need a DNS server at a slightly higher level, maybe multiple namespaces built into a tenant?
I: Thinking about using the namespaces as the tenant: you can imagine that two tenants have services, but they want to be visible to each other. Then, with a namespace-first kind of policy, if you can only have things in your own namespace, you could never reach anything; it would all have to be deployed within that namespace, which kind of breaks the namespace boundary, in a way.
D: I'm not sure what you mean by namespace-first. I think that came up in the context where we proposed that there be an annotation you can put on a namespace that says, essentially, cluster-first means use this DNS server, and someone said, well, no, that's just namespace-first. But that's still only saying "use this DNS server"; that's not saying anything about what view that server presents.
D: So when people say namespace-first, I think they're just talking about a way of accomplishing namespace-specific DNS client configuration. And we'll need this: if there's not a universally shared set of DNS servers, you'll need a way for the kubelet to put the right client config in each pod. But that's a relatively separate and easy problem.
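Concretely, the "client config" the kubelet writes into a pod is a resolv.conf. A minimal sketch of generating a per-namespace variant; the server address is a hypothetical example, and the search-path layout follows the usual Kubernetes convention:

```python
# Sketch: generate a pod's resolv.conf pointing at a per-namespace DNS
# server. The nameserver address here is a hypothetical example; the
# search path follows the common <ns>.svc / svc / cluster-domain layout.
def make_resolv_conf(nameserver, namespace, cluster_domain="cluster.local"):
    search = f"{namespace}.svc.{cluster_domain} svc.{cluster_domain} {cluster_domain}"
    return f"nameserver {nameserver}\nsearch {search}\noptions ndots:5\n"

conf = make_resolv_conf("10.0.0.53", "tenant-a")
print(conf)
```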
D: No, well, again, I think that wraps up multiple issues. There was an issue open that wrapped up multiple issues, and I'm trying to tease things apart. So I see two issues here, really, that are pretty separate. One is: can we share DNS servers, and if so, how? I haven't seen any way, and I opened a thread on the sig-network mailing list for that. And the other issue is, okay...
B: So it sounds like we should have that discussion on those two mailing list threads.
G: Yeah, I'm here. So we had a lot of discussions; we have a design doc open, and we have had quite a bit of discussion and a lot of interest, as can be seen in the doc. So at this point, I think we need to figure out whatever the next steps are. I have started prototyping some of this; I got distracted last week with my own work, but I have started prototyping it, and I think we still need to settle some of the things.
G: Probably toward more of a conclusion: I think one point of contention has been whether routes need to be injected into the pod, and if so, how. That's one item that has come up quite a bit in the discussion. The way I see this, if you have two network interfaces in a pod, that inherently means that you need some routing information inside the pod, and I think I also summarized this in the document, because the pod needs to be able to tell which of the two networks it uses for a particular destination.
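The pod-internal routing decision described above is essentially longest-prefix matching over the routes of each interface. A toy sketch; the interface names and subnets are hypothetical:

```python
import ipaddress

# Toy longest-prefix-match routing table for a pod with two interfaces.
# Subnets and interface names are hypothetical examples.
routes = [
    (ipaddress.ip_network("10.1.0.0/16"), "eth0"),     # cluster network
    (ipaddress.ip_network("192.168.5.0/24"), "net1"),  # secondary network
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),       # default route
]

def pick_interface(dst):
    """Choose the egress interface for a destination by longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, dev) for net, dev in routes if addr in net]
    return max(matches)[1]

print(pick_interface("192.168.5.7"))  # net1
print(pick_interface("10.1.3.4"))     # eth0
print(pick_interface("8.8.8.8"))      # eth0 (default route)
```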
G: And basically all the IPAM-related information: if you want to create, say, a local network under the host, you can do that if you configure it on the cluster, but then it probably has to be the same on every node. I think that's where quite a few people have concerns, because they would probably like to use different networks on different nodes, I think.
D: More precisely, the issue was: do the details need to go in a CNI config file on each node, or can the details go elsewhere? I think we saw three approaches in that discussion. One is the details go in the Kubernetes API object; one is the details go in CNI config files repeated on every node; one is the details go in an external SDN, right?
G: So the first approach is what I was trying to take. The idea was that the kubelet can fetch it from the API server and feed it into the CNI plugin, but the problem is that that can potentially be seen as not flexible enough. If this were to be in an SDN, you'd probably have a little more flexibility, right?
D: Let me put it another way; this is just part of the problem. If an API object for a network contains a reference to a preexisting, already defined external SDN network, can two network objects refer to the same external network? Particularly if they're namespaced and you want to have multiple namespaces accessing the same external SDN network, you need to allow multiple network objects to refer to the same external network. So it's kind of natural to say: okay, they're not one-to-one, they're different things.
G: I mean, just to keep it flexible, like I said. There are, you know, three different viewpoints that we need to consider. The idea of keeping it external was to reduce it to just a name, so that the details of the actual configuration can go into the SDN, or into the CNI config itself. So now I guess the question is whether two different network objects can refer to the same external object; yeah, I mean, that's definitely something we would be losing by doing this.
H: I was thinking that you would create API objects which are created by the administrator, and then the users, or the tenants of the namespaces, whoever is behind the RBAC rules or whatever, are allowed to use those depending on their authorization. So I, as a normal user, would never be able to create a network object myself; I cannot create my own network, I can only use one.
H: Or maybe, even if I'm just being redundant here, may I request people to scroll through it. Some of these are kind of outside the regular things that we usually get to see, especially point number five, if anyone is looking at it. There are some requirements where some pods require right up to 64 interfaces in a container, and they are a real use case, and it's not designed for web traffic.
H: That sounds very silly, but some of these cases are just out-of-the-ordinary scenarios, and I don't know how we're going to solve them. So we need to get to a really scalable model of having multiple networks. OK, I think most would agree with what was described; what I did not see is how we solve services.
H: There is a use case where one may want to create a service. Let's take a pod that has three interfaces inside it, and I want to create a service which addresses only one interface. How do I create the endpoints for the service? Will that require changes to the service, or do I say, hey, that's left to the creator of those endpoints, which means the regular controller which creates the endpoints per service is kind of disabled?
G: Basically, what we said was, for backward compatibility purposes, we will define the first interface as the primary interface, and that will show as the pod IP in the status. In addition, we will add a list field that will list all the interfaces and their IPs. So you have backwards compatibility, where your primary IP will always be seen, and you have this new object or new field that will show you the list of IPs that the various interfaces have. Okay, am I right on that?
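The backward-compatible shape described above can be sketched as a status fragment: the primary interface's IP stays in the existing field, and a new list carries all interface IPs. The field names here are illustrative (Kubernetes later adopted a similar `podIPs` list, but nothing in the meeting fixes these names):

```python
# Sketch of a backward-compatible pod status: the first interface's IP
# stays in the legacy "podIP" field, and a new list field exposes all
# interface IPs. Field names are illustrative.
def make_pod_status(interface_ips):
    assert interface_ips, "a pod must have at least one interface"
    return {
        "podIP": interface_ips[0],                       # legacy primary IP
        "podIPs": [{"ip": ip} for ip in interface_ips],  # new list field
    }

status = make_pod_status(["10.1.0.5", "192.168.5.7", "172.16.0.9"])
print(status["podIP"])        # 10.1.0.5
print(len(status["podIPs"]))  # 3
```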
G: Right, so based on the discussion in the document, I thought that was a non-controversial item. So coming back to the issue of redirecting service traffic: as I pointed out earlier, I think this is essentially a routing issue again. The problem we have is, right now we have the restriction that all services get a cluster IP belonging to a single subnet.
G: But a cleaner solution would actually require that we make the service cluster IP type more flexible, where there is a way to specify the cluster IP as part of the service definition. I think it's an optimization the way it is defined right now, where all service cluster IPs have to come from a single subnet. If you really think about it, there is no need for them to be from one subnet; to me it's more like a debatable optimization.
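The single-subnet restriction being debated is easy to state in code. A small sketch using Python's ipaddress module; the service CIDR value is a hypothetical example, not something fixed by the meeting:

```python
import ipaddress

# Sketch: today's restriction that every service cluster IP must come
# from one service CIDR. The CIDR here is a hypothetical example.
SERVICE_CIDR = ipaddress.ip_network("10.96.0.0/12")

def validate_cluster_ip(ip):
    """Accept a requested cluster IP only if it lies in the service CIDR."""
    return ipaddress.ip_address(ip) in SERVICE_CIDR

print(validate_cluster_ip("10.96.0.10"))   # True: inside the CIDR
print(validate_cluster_ip("192.168.1.5"))  # False: rejected today
```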
F: I have a question for Yogi. So you mentioned that there was a discussion about where the configuration of the network should live: that it shouldn't stay on the API server, it's part of the SDN controller, and it cannot stay on the individual host. So what do you mean by an external SDN controller? Is it like another Kubernetes service, with its own database back-end?
G: Right, so there are three ways to do this. One is to put the required configuration in the Kubernetes API object, but in that case the issue is, there will be people who want to have some configuration that is specific to each host, right? Once you define it as an API object, it has no concept of a host or a node; all the nodes are equivalent from that point of view, so you will get the exact same configuration on every single node.
G: The per-node config file approach obviously has its disadvantages too: you have to replicate it on every node, and you have to have this concept of node-specific configuration. So this is why we wanted to take a flexible approach, where you can have node-specific configuration if you have to, or you can have cluster-wide configuration if you have to, which is why we said we can move this object into the network implementation. So when we say SDN, we really mean the network implementation.
F: Yes, so the scenario that I'm coming up with is where the different hosts in a cluster are not identical. Let's say host one has only one interface, the host network, and host two has two or three different networks it belongs to. So now a network API object doesn't know about all these interfaces; at this point there is information which is specific to the host. How do we capture this kind of situation?
G: I don't know; I would let some of the more experienced Kubernetes people comment on this, but I think in general the assumption is that we shouldn't be bringing details of the node into this API object. If there are such host-specific things, they should be hidden by the network plugin from the cluster. Why?
G: Because, as far as the API objects are concerned, all the nodes are considered equivalent. But now, if you cannot schedule a particular pod because of restrictions of the node, that should be made known to the scheduler, or to Kubernetes itself, by applying the appropriate restrictions, for example. This is no different than, say, a node that does not have sufficient memory to run a particular pod: how do you make that visible to the scheduler so that it does not schedule the pod there? To me, it is that kind of a problem.
D: That raised a question for me, so there are some ongoing discussions.
B: Cool, so we have a few more topics on the agenda today. Christopher, you gathered up some test coverage results?
I: It could be that I missed some, or added some that were not directly ours. In a lot of the other SIGs, well, in SIG Scheduling, they were creating issues, like a master issue series, and then kind of partitioning them out in order to get each package over eighty percent. There are a few in there that are not, but I think mostly we've done pretty well on this. So if everyone can comment on the doc with any packages that I missed, and also...
B
I
believe
that
there
is
a
quote-unquote
master
list
floating
around
somebody
somewhere
about
which
packages
or
which
wich
tests
are
related
or
responsible,
the
responsibility
of
each
fig
I'm,
not
sure
if
that
is
just
eet,
EES
or
that
includes
unit
tests
as
well.
But
if
you're
not
aware
of
that,
then
I
can
try
to
find
that
for
you
and
compare
that
against
the
packages
you've
covered
that
box.
That
was.
B: I mean, I think that testing is one of the areas that we said we wanted to look into, right? We didn't have a good sense of what our coverage was. So I think what you've done is a good first step in identifying where we should be adding tests, and I would say, if somebody has, or multiple people have, time to try and chip away at some of those numbers, that's something you'd want to do.
B: That kind of matches my experience with it as well. It's not something that users have asked for, but I remember Tim was adamant about it being included in the original spec. So I think we probably need to speak with him to see if he had particular users that he was aware of that were using it, or if it was just a "well, services support this, so network policy should too".