From YouTube: [SIG Network] Weekly Meeting 20201112
B: All right, I loaded up the first page. If we get through the first page, we can go to the second page. I did start assigning a few of them. So, as a reminder for people who are here and generally quiet when we're doing triage: we're asking people to help triage the issues, not to solve them. So please don't be shy about signing up to help triage. That's really all it means.
C: No, I was just... I just filed this issue today. We don't have a PR for it, but Rob mentioned it in the review of Andrew's patch for this. So yeah, I'll take a look at it, probably sometime next week.
B: All right. And did I assign it to you? Is it assigned to you? Yep? Cool. Next up, the GCE issue. I pinged Bowei, Minahan, and Pavitra. Oh, she's working on it already. Awesome.
B: Athena got us part of the way here, but the triage tag being universal is awesome. Now, "service topology doesn't reject traffic when there's no matching endpoint", this one. So this is about the beta service topology API. My feeling, and Rob's feeling, is that we don't need to fix it. "I agree", so yes, okay. So, Rob, what should we do with this? Should we just close it? Should we open a new issue that is assigned, one for internal? I don't know. Andrew, are you here? Yep? Cool.
B: If you consider this enough of an FYI, we can just go ahead and close this issue.
B: Thank you. "Double counting in IPAM": this looks like a fun one. Someone wants to help reproduce it. Basically, apparently most users have enough IPAM space that they don't notice that we sometimes have extra accounting. If that's true, then it's a bug.
B: "Network policies are blocking allowed traffic." I saw some discussion on here.
E: They're trying to... what does it mean, "they're trying to use it"? They want to block communication from certain namespaces and allow it from other namespaces, but they're not using labels; they're just trying to do that with CIDRs, looking at the IP addresses in their cluster and then making policies based on those CIDRs.
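As an illustration of the two approaches being contrasted (all names, labels, and CIDR ranges here are hypothetical, not taken from the actual issue), a NetworkPolicy can express namespace-based allow rules either with selectors or with `ipBlock` CIDRs:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted   # hypothetical name
  namespace: app             # hypothetical namespace
spec:
  podSelector: {}            # applies to all pods in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Label-based rule: tracks pods as they come and go.
    - namespaceSelector:
        matchLabels:
          team: trusted      # hypothetical label
    # CIDR-based rule (what the user attempted): pod IPs are
    # ephemeral, so hand-maintained CIDRs drift out of date.
    - ipBlock:
        cidr: 10.244.1.0/24  # illustrative cluster CIDR slice
```

Entries in one `from` list are OR'd, so traffic matching either peer is allowed; inside a cluster, the selector form is generally the robust one.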
B: Strange. Do you feel like you guys have it under control, or do you want somebody else to jump in? Like, I can take a look if you want, but probably not today.
C: Did they respond? I think we've got... there was one issue that Chris helped us with, which we were a little confused about, which I think we untangled, so we're fine, I think.
G: Oh, so there's a depleted IP pool, and the problem is that the new pod gets stuck in ContainerCreating. Okay, sure, just leave it there.
G: Yeah, I mean, it was ostensibly user error, right? So the problem was that when the allocator detects the error, it just crashes. The option is: we detect it and just put an error in the logs, right? We could do that, but that's not very useful either, because you just get an error in some log somewhere and your cluster is still broken.
B: Me? Okay. The rest all seem to be at least assigned, so let's call it there. Anybody who's here, please do look at your assigned issues sometime in the next week or so, and try to make some progress on them, so that next week we can come back and, hopefully... we had 38 open today when I searched, so it would be nice if we have fewer next time. All right, I'll hand it back.
A: Thanks, Tim. I think the next one was probably pretty quick: Bridgette wanted to ask about meeting schedules for the holiday season.
A: Yes. So, M Guo?
K: So, nobody knows who I am. I'm here with a couple of people from my team and group. We've been primarily working with SIG Windows, discussing the topic that I added here on the agenda today. To give everyone a brief overview: we have some people on the call from the team working on Windows privileged containers, or, you know, a solution for us to address a lot of the different privileged use cases that privileged containers cover in Linux.
K: So far in the Windows OS and containers we've... and, you know, some people can answer further technical questions as we get into the meat of it, but I'm here to present a first overview.
K: As we all know, there are a lot of different privileged scenarios for privileged containers in Linux, where you have DaemonSets and all these cases where you need access to different host resources, or networking, or storage, and so on and so forth. In Windows, we've had a model of using server silos or other isolation-type solutions for containers to work with our server containers or our Hyper-V containers.
K: But that has not been able to let us do these types of privileged-access behaviors with containers. To address this, we've come up with a new solution using job objects, which is detailed in the KEP that I linked on the agenda; we can also dive into that KEP if it's easier for me to bring it up or help people visualize. Essentially, Windows job objects have existed in Windows for a long time, but being able to run like a process on the host in the form of a job object is new: we're doing different experiments and prototyping to enable that to be brought up to the container level, using it with containerd, and then exploring options to get that working.
K: In Kubernetes, we've done a bunch of different prototypes and tests so far to see if it covers the different scenarios (covered again in the KEP as well), and we've had some success. The reason we've brought this to SIG Network is that there are a couple of restrictions with Windows job objects that have caused some interesting behaviors in certain Kubernetes scenarios.
K: The main overview is that, for these job objects, we have decided so far that it's best to always have host networking enabled for this privileged-container solution.
K: Also, CSI proxy, I think, might be another version of this behavior with a mixed pod, and I'm sure there are many others. Beyond this, we would not be able to align a Windows privileged container with the pod namespace.
K: For Windows, then, there might be a break in the assumption that all containers in a pod in Kubernetes have the same IP address, because one would have host networking and another might have a pod namespace. So that's the main case, and the way I typed it up in the agenda, there are two things we're looking to see pros and cons of, and some discussion around, in SIG Network.
K: One option, and we want to see what the community's openness is to this, is Windows privileged containers having this difference: requiring or allowing host networking and a pod namespace within the same pod. The other option is requiring these privileged containers to be in their own privileged pods, separate in that form.
K: The benefit of allowing the mixed case is that we'd be able to address a wider swath of the different privileged scenarios that people have been looking for, but there might be implications due to the assumption that I mentioned before.
B: Thanks for that overview. Can I ask a clarifying question? Let me try to restate and see if I get it right: if I have a pod which has one privileged container and one non-privileged container, the privileged container will run in the host network namespace and the non-privileged one will run in a private network namespace?
F: To be clear, right, we are not recommending that. What we are saying is: for DaemonSet scenarios, we want all containers that are running in that pod to be privileged.
F: In the other case, we really don't want to support a scenario where only one container is privileged. When you have a pod, we really want to abide by the rule that the pod has one IP and one namespace (or multiple namespaces, whatever the pod has), and all the containers should see the same thing, right? That's what gets violated if we do the model where one container is privileged and the other containers are non-privileged. So we really don't want to do that model.
B: The validation requires that if you set privileged on one container, you set privileged on all containers, and at least then it'll fail and force the user to acknowledge that this is a requirement. Of course, if you're injecting a sidecar, you're going to break your users, unless your sidecar also sets everything to privileged, right?
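As a sketch of the all-privileged validation idea (using the stock Linux pod API for illustration; the Windows-specific fields proposed in the KEP may differ), a DaemonSet that would pass such a rule sets `privileged` on every container:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      hostNetwork: true              # all containers share the host network namespace
      containers:
      - name: agent
        image: example.com/agent     # hypothetical image
        securityContext:
          privileged: true           # proposed validation: set on ALL containers...
      - name: sidecar
        image: example.com/sidecar   # hypothetical image
        securityContext:
          privileged: true           # ...including any injected sidecar, or admission fails
```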
B: The other approach would be to just assume it somewhere deep down the stack and, you know, throw an event or a condition or something on the pod that says: "Hey, just so you know, I promoted you to privileged, and I hope that's okay." That's kind of terrifying, and I can hear the security people revving up their chainsaws.
F: So, in Windows, network namespaces are implemented as TCP/IP compartments on the host. We have a different compartment for every network namespace in Windows, and then we have interfaces that we add into that specific compartment. Basically, the silo that is created for every Argon container, which is what this Windows container is, is attached to a network compartment, and on that network compartment we add the interfaces, and that's what the pod sees.
F: In the Windows world, we have something called the default compartment, or we call it the stack compartment: every application that runs in session zero sees the default compartment. So if you have multiple physical links, they all show up in the default compartment. With the privileged container, what we are doing is trying to use a job object which runs in the host's default compartment, so by default it sees all the NICs in the host compartment.
F: No. Will job objects respect compartments in the future? As of now, no plans; this is a fundamental change, yeah.
B: But it's fine to say no; I'm just asking. It's not the first or worst Windows thing that we've done, right?
B: Yes, and I agree fundamentally with the sidecar stuff. I don't know... sidecars are the only hammer we gave people, so obviously they've been treating everything like a nail, but maybe in the future we can do better than that.
B: So I haven't looked at your KEP. In fact, why don't I pull it up and we can discuss the options there. But my feeling, from a network point of view, would be: if you have containers in different namespaces, it's going to be bad. So some other mode, whether it's to require them all to be privileged or to automatically make them all privileged, I think that would be the better answer.
L: To think that use case out, though, Tim: every workload would be bound to the primary NIC and would consume, you know, whatever the NIC is on the host. It would consume those ports on the host, so you're going to have a heck of a time scaling, because the ports are going to be taken; you can only run one pod per node.
L: Yeah, one pod.
B
Right
pathologically,
yes,
I
don't
think
the
sidecar
model
works
here,
but
it
doesn't
work
in
split
mode
either.
Right.
F: No, the sidecar still continues to work, right? I mean, whatever we are doing, the sidecar proxy is still going to continue to run inside the network namespace for the pod. The only thing we are saying is: the programming that you do in iptables to redirect the traffic is a different call in Windows. The init container, whatever you're doing there, cannot run inside the workload container, right? It has to be pulled out of the workload pod spec. So the sidecar scenario works in this model.
F: All we are saying is: instead of doing that in an init container, which is basically the Linux way of doing things, where they run one small privileged container that programs iptables to redirect the traffic and then dies down, and then they launch all the other containers in that pod; we are saying that in the Windows world, the init container has to be a separate privileged container, or a separate pod, that does the programming for that pod. Does that make sense?
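The Linux pattern described here, a short-lived privileged init container programming iptables before the workload starts, looks roughly like the following (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-redirect        # hypothetical name
spec:
  initContainers:
  - name: net-init               # runs to completion before the main containers start
    image: example.com/net-init  # hypothetical image that writes iptables redirect rules
    securityContext:
      privileged: true           # only this short-lived container is privileged
  containers:
  - name: app                    # unprivileged workload
    image: example.com/app       # hypothetical image
  - name: proxy                  # sidecar in the same pod network namespace
    image: example.com/proxy     # hypothetical image
```

The point being made is that on Windows this privileged step cannot live inside the workload pod and would have to be pulled out into a separate privileged container or pod.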
B: Okay, so you can traverse down into... and obviously, whatever is setting up that traffic capture is aware that it's on Windows; it's not trying to exec iptables, correct? That's not so bad if it's only init containers, because those are serialized.
L: ...spin up an init container. Can you attach init containers to services and then expect them to be on one network interface, when they're actually on the host network interface? Or can you not do that? Because, and I don't know if anybody does this, you could spin them up, wait for a message to come in, have the process die, and then start the rest of the stack up, the rest of the stack being the rest of the pod.
B: Another option was to not call them privileged and just use a Windows security context to turn this type of functionality on. Is that better or worse? I don't know the Windows-isms well enough.
F: If we do that, we don't cover the DaemonSet use case. See, the security context addresses one use case, but the other use case is a DaemonSet with privileged and host network, which is what most of the Calico and similar daemons run as today, and that was another reason why we did this feature, right?
E: The reason I was saying that is because, when we try to do validations on the pod spec for a specific OS, we don't know what OS that pod is for, and so it gets a little tricky. We have something like the GMSA spec, which lives on the security context, under a Windows-specific level there.
E: So you can do one for both, and that's where, if you have a multi-arch image manifest, you can't determine if you're running on Windows or Linux at the API level.
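Independent of the API-level validation problem, the usual way to keep OS-specific pods off the wrong nodes is the well-known `kubernetes.io/os` node label (pod name and image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-workload             # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows    # well-known label reported by the kubelet on every node
  containers:
  - name: app
    image: example.com/app:win   # hypothetical image
```

This fixes scheduling, but it doesn't tell the API server at validation time which OS the pod is for.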
E: Is that what we do? So, we do have... this came up over the last month or two: there were a few cases where customers were applying Linux-specific security constraints and failing to schedule the Windows pods because of that kind of scenario, yeah.
B: I truly have no idea what the done thing is, because I just don't have any experience on the Windows side of this right now. So we should probably move forward to whatever else is on the agenda. Yeah, there's a bunch of things. I'm gonna put the KEP in my queue and read it; I would suggest other people do the same.
C: Thanks for bringing this up, by the way; this is important to a lot of us. So thanks to you all for coming today, for sure.
B: Then we could bring it back to the mailing list; we could open a thread on the mailing list. I assume that SIG Windows has already weighed in on the KEP, or will weigh in on it. Or are you guys looking to us to tell you which decision to make?
K: We've discussed this with SIG Windows, and we're continuing to bring it up with them as we explore the different scenarios and implications. I think we're looking for your guidance on a lot of these networking implications, but there are a lot of different stakeholders here that we're getting input from, to see what the best choice is to move forward.
K: All right, that is good input, and yeah, I guess it would be good to get some input on the KEP, and we can chat again in a couple of weeks.
A: Cool, thanks everybody. Bridgette, I think you're up next.
J: Okay, even though I wrote a bunch of stuff, it is actually a very short question. As I started looking at updating a very old KEP, I thought to myself, in discussion with Lockheed, that maybe I should just be reforming it to the new format. But then I noticed some people from this call have been updating that old one, so maybe there's a reason to keep it the old way. Obviously I wouldn't, like, destroy the paper trail.
J: Will do, awesome. That is a very easy answer. And then the other question I had: Anthony gave me helpful feedback and a link to a different old "hey, moving this to beta" KEP, and that kind of looked like a single standalone thing. But as far as I can tell, we currently are just keeping everything in the one, like 563, the IPv4/IPv6 whatever, and putting the alpha-to-beta graduation in just that one. Is that the way it's done?
A: Okay, you were the last item.
C: Oh yeah, that policy thing. So I went over it with some of the Red Hat folks, and Antonio helped us a lot. We tested it on OVN and found some issues there. This is the policy truth-tables validation stuff. Matt kind of took over a lot of the code and finished cleaning it up, and so I think we just need... I think Antonio's review is mostly done. Is that correct, Antonio?
B: So let me ask the question of the day: is this something that, if I don't look at it before the end of today, we'll miss the boat for 1.20 and then everybody will be really mad at me, or can it wait till, like, next week?
H: Yeah, this is for test freeze, so I think that we have one more week.
B: All right, then. I will tell you, it's already flagged in my queue; I just de-prioritized it for this week because of code freeze.
C: Okay, cool. Yeah, the one quick thing about it that I wanted to bring up as a group: I know, Casey, you and me talked a little bit about this on Slack once, with Matt.
C: I don't know if he's still here, but Matt had an idea of re-expressing all the policies as YAML, as a way to make them extremely readable, because again, we have like 10 or 15 more tests that we're going to be putting in that are related to UDP and stuff like that. We have one test that does that, and it's really nice to look at. And I think you had made the point once that making up a new DSL for defining these things is extra work, and it creates a third way of expressing policies, other than the normal two that normal people use. So, curious.
G: A good path forward? Or are you saying YAML as opposed to constructing the object directly? Or is there JSON in the tests somewhere? Like, what form?
C: Oh, we're Go structs. Okay. And so, you know, one of the things is that there have been issues filed about people not really knowing what certain policies do, so one of the nice things is that, in addition to being able to understand the tests better, we can actually point people at canonical examples.
C: It's not done, though, right now. I don't think we embed YAML in a lot of Go code in the end-to-end framework right now, so we wouldn't go doing this unless somebody at least told me it wasn't a totally crazy idea. It works, though.
C: Yeah, okay, so that's it. I mean, you don't have to... that's enough. Yeah, if you could give it a look next week, that would be really great, because I know Rich has a bunch of other UDP-related tests that he wants to add.
B: Okay, so it looks like we're at the end of our agenda. Then I'll put it out one more time: code freeze is today. I'm working through all the PRs that I've been working on in the last two weeks.
B: We've made great progress. Obviously, I feel like a bunch of PRs that have been sitting around for a long time are actually making it through. If anybody here feels like they need me, specifically, to look at a PR today before the end of the day, please ping me on Slack and let me know ASAP, so that I can try to take a look at it.
B: Yeah, no, that requires too many eyes from too many places, and everybody is in the same boat as me: there's a lot of important things to review before the code freeze. That's just not going to happen, so we can queue that up for 1.21.
A: Well, like Tim said, that was the end of the agenda. Unless there are any last-minute topics, I think we are done.