From YouTube: Kubernetes SIG Network meeting for 20200920
Description
Kubernetes SIG Network meeting for 20200920
A
First item on the agenda (also, please add any other agenda items to the bottom): just a note that we will be enabling a passcode on the SIG meetings starting next meeting. That's a request for all Kubernetes SIGs, to deal with moderation issues, Zoom bombing, that kind of thing. There's a link there with a little more information.
A
B
A
All right, yeah, I'll take a look at that. Somehow Tim, Casey, and I will figure that out and let everybody know. All right, next up, issue triage. Do we have somebody lined up for triage today?
A
E
G
C
All right, I've been having some trouble with the Zoom web client and getting audio glitches, so please scream if you can't hear me or if it glitches out. Also, my kids are about to get online for school again, so I might start fighting for bandwidth. So let's do it. All right, starting at the top: support multiple cluster CIDRs in kube-controller-manager, by our very own Anthony.
A
So that ties in with the issue below, which we could talk about, that Jay brought up and that Surya also brought up.
A
H
C
Yeah, I mean, all right, let's push this one. Let's leave this issue and we'll come back to it. Next: "drop packets in INVALID state" drops packets from outside the pod range when kube-proxy is configured with a cluster CIDR range.
A
A
C
But we wanted... we took a PR at the end of last year to make kube-proxy not look at the cluster CIDR, right? The goal being that kube-proxy shouldn't need to be aware of the cluster CIDR for almost anything, and so this was the multi-CIDR work that Satya was doing, if you guys remember that, which goes to, I guess, the previous issue. But I'm trying to understand what this one's about.
H
So we added a bug fix because, sometimes, if you lose packets and then you get an incoming packet, it will cause, because of iptables and conntrack weirdness, it will cause your long-lived connection to suddenly be dropped, like a long-lived connection from outside the cluster to a service, or maybe the other way around. Anyway.
H
We added a fix for it, but it turns out that it breaks people who are doing crazy things with multiple interfaces and stuff like that, asymmetric routing, yeah, there we go. So they're saying we need to limit the rule in kube-proxy that drops INVALID packets, but it gets hard to do that well, because kube-proxy doesn't necessarily know which packets it is and isn't supposed to be handling.
C
H
What's the mechanism? If a packet gets dropped and then conntrack sees a packet with the wrong sequence number, it ends up not getting NATed or un-NATed correctly or something, and then, when it sends back the response, the peer sees the packet... you'll have to read it and get all the details, but it ends up causing the peer to send a reset and kill the connection, because iptables and conntrack are doing the wrong thing, basically.
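For readers following along, the rule under discussion is the one kube-proxy adds to drop conntrack-INVALID packets. A minimal sketch of what such a rule looks like, written in the style of kube-proxy's iptables sync code; the chain name and helper function here are illustrative, not the exact source:

```go
package main

import (
	"fmt"
	"strings"
)

// buildInvalidDropRule sketches the kind of iptables rule being discussed:
// packets that conntrack marks INVALID (e.g. out-of-window after packet loss)
// get dropped so they cannot be mis-NATed and provoke a RST from the peer.
func buildInvalidDropRule() []string {
	return []string{
		"-A", "KUBE-FORWARD",
		"-m", "conntrack", "--ctstate", "INVALID",
		"-j", "DROP",
	}
}

func main() {
	// Rendered as the equivalent iptables command line for readability.
	fmt.Println("iptables " + strings.Join(buildInvalidDropRule(), " "))
}
```

The complaint in the issue is that this drop is unconditional, which also hits legitimate asymmetric-routing setups; narrowing it is hard precisely because kube-proxy cannot tell which INVALID packets are actually its responsibility.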
A
G
B
C
G
C
I
E
C
J
C
For broader context, there was some discussion I had with SIG Testing folks last week about whether it makes sense, in the long term, to have the project's e2e tests being exercised via the cloud provider logic, or whether it would make sense to write a fake ingress controller that just came back and said, like, yes, end to end this did in fact do the right thing, but without actually waiting for provisioning.
E
A
C
No, it was email. Erick Fejta brought it up as an idea around the test flakiness mess.
C
Okay, it was a very casual discussion; it wasn't like a plan or a KEP or anything, it was just like, what if we did this. Next: "DNS e2e test cases failed with IPv6 traffic."
C
You got it, thanks. To everyone else who's watching the triage: remember, we're not asking you to fix the bugs, we're just asking you to help triage them, so don't be shy, feel free to sign up. "kube-proxy configuration API tests are not validating errors correctly." The kube-proxy config API validation test... that's a long phrase, kube-proxy's config API validation. I think we can.
K
Yeah, but Tim, I guess if you could look at that at some point; it's kind of an open PR. It's for the test.
C
There's an open PR... so I apologize to everybody, I am about three weeks behind on everything right now, because Google just went through our process, so I'm just coming out the other side this week.
K
C
C
H
The move to get rid of dockershim... because CRI-O is currently vendoring this code, and so it'd be good to have it. The problem is that it depends on iptables and conntrack in package util. So.
D
C
Oh well, that's a bug, but for the rest of it, most people don't set host IP, right? So for most people it works.
D
C
Yeah, so I don't object to the idea of moving it out. I question whether it's... I don't know, I mean, like you said, it depends on iptables, which pulls in a big dependency.
L
So, question: what's wrong with relying on the CNI port-mapping plugin instead of this implementation?
H
C
C
Okay, you know what, I'm going to assign this one to myself, because I want to think about it. Anybody else who wants to participate in the discussion, please speak up now and I'll stick your name on it, or just jump in and follow.
C
A
Yeah, we started triage a bit late, so let's give it another couple of minutes and then move on to the rest of the agenda. Maybe two more issues. Okay.
C
C
C
C
B
C
Okay, okay, I will yield the floor then.
A
All right, thanks for triage, Tim. The first, or the next, item on the agenda is the feature stability plan for dual stack. Dan, you're up. Yeah.
H
H
C
G
G
H
H
Now, the capability will move to beta when the following criteria have been met: Kubernetes types finalized, CRI types finalized, pods support multiple IPs, nodes support multiple CIDRs, the Service resource supports pods with multiple IPs, kubenet supports multiple IPs. So I guess "Kubernetes types finalized" is the...
C
I'll try. I think the criteria here is that we can convince ourselves, and someone like Jordan, that the API machinery changes are sufficient.
C
H
C
A
F
Antonio was just saying he filed a... right, you are against it, so... but we should still probably clarify in the KEP. He filed a PR against the KEP. Okay, proposing... got it, one answer.
A
All right, so that at a minimum needs to happen, and we need continued review and rework on Khaled's dual-stack PR. Is there anything else, besides those things and besides convincing ourselves that the API will no longer...?
A
A
A
Okay, well, we'll leave it at that, and that's some good stuff to work on before next time. So please, everybody who has a stake in dual stack, do these things and help out. So next up: load balancer IPs for dual stack.
M
I have volunteered to implement it when the big PR is merged. But... sorry, it's already plural? You just want to rename it? No, it's singular and it's a string. I would like to have a parallel loadBalancerIPs as a string array.
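To make the shape of that suggestion concrete: today's Service spec has a single loadBalancerIP string, and the ask is for a parallel plural field for dual stack. A rough sketch of the two shapes; the plural field name and placement are hypothetical, pending the KEP discussion below:

```go
package main

import "fmt"

// ServiceSpecToday mirrors the existing field under discussion: a single
// string, so only one load balancer IP can be requested per Service.
type ServiceSpecToday struct {
	LoadBalancerIP string `json:"loadBalancerIP,omitempty"`
}

// ServiceSpecProposed sketches the parallel plural field suggested in the
// meeting (hypothetical shape; the real field would be settled in the KEP):
// one requested IP per address family for a dual-stack Service.
type ServiceSpecProposed struct {
	LoadBalancerIP  string   `json:"loadBalancerIP,omitempty"`  // existing, singular
	LoadBalancerIPs []string `json:"loadBalancerIPs,omitempty"` // e.g. one IPv4 and one IPv6
}

func main() {
	s := ServiceSpecProposed{
		LoadBalancerIPs: []string{"192.0.2.10", "2001:db8::10"},
	}
	fmt.Println(s.LoadBalancerIPs)
}
```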
M
M
M
This is no code change in Kubernetes; it's just for cloud providers and MetalLB, which I will update.
A
D
D
H
It's a convenient way, but it's nice if the cloud provider can know that it's getting a consistent request. Like, you know, if the cloud provider has to double-check to make sure that the requested load balancer IPs are the same as the target IPs that it's going to be load balancing to, then that's just creating extra work for every single cloud provider.
M
My question in all of this is basically whether the KEP should be updated, or if this should be another KEP, or something that comes after dual stack.
G
F
C
G
C
M
Right, yeah, I believe the validation was added very recently, because I could set it to a comma-separated list of addresses in 1.19, but now, in the master branch, I can't do that anymore.
C
So I think... same KEP. Changing it to a plural list seems reasonable, adding the same validation seems reasonable, but I guess I'm a little bit worried now.
C
Now I'm answering myself, you're...
C
Yeah, I'm worried, like Lars says. You know, I mean, a comma-separated list feels like it was a little bit out of bounds, but I didn't look at the PR that changed the validation; I probably would have flagged that as potentially breaking. Changing it so that it has to be IP addresses whose families match, like... if it's not families that match, I don't see how it would work, right?
C
...a PR on the KEP, and we can discuss it there.
C
A
K
C
K
C
So they are the... I don't think we should expose them, because they are implementation details of the default, but not exclusively the only, controllers, right? Like, we know it's possible for service IPs to be allocated out of some other IPAM system, or, not service, like these pod IPs to be allocated out of some other IPAM system, and so those fields may have no meaning in some systems.
A
K
A
A
I'd be really curious what people want to do with these values if they had them. It seems like it's, like Tim said, an implementation detail of how the network plug-in sets things up, and I guess I'd expect that the network plug-in implementing this would already know those values. So maybe there's a better way of doing whatever the components that want to know these things do, as opposed to reading the cluster CIDRs.
K
Yeah, I think it's the CNI people, I guess, in general; that's where we hit it. You know, we've got a CNI and it needs to know what the service IPs are, because it wants to set up some rules, and you know, the way it gets those right now is we have to plumb them through the install process.
K
G
So I agree around cluster CIDR, like, the CNI should know that ahead of time. But I think there are some use cases specifically for service CIDR where, like, the CNI might need to know what that range is to route traffic in different ways, and there's no way to easily query the service CIDR from the API server or anywhere, so the users have to configure the service CIDR into the CNI as well, which is inconvenient.
C
So, if I can rant for a second: one of the medium-term objectives still is to get rid of those places that have one CIDR. So it seems completely inevitable to me that eventually we end up with multiple service CIDRs, so we have to build that into the schema right away, but also that it could change dynamically. We've had customers who have, you know, built a cluster with X space for services and realized that that wasn't enough, and then they want to, you know...
C
C
C
The service CIDR is interesting because we currently do allocate it through the API machinery, right? There isn't a way to plug in a different IPAM system for service CIDRs, which frankly feels wrong; it feels like it should be possible to implement services in a different way, but I haven't really put a lot of thought into that. But the bigger thing that I want to get to is, this comes back to one of my long-standing grumbles that I wish...
C
...I had enough time to spend thinking about it and doing something about it, which is: our networking implementation is like shards of glass all over the system, and people don't know they're there until they step on them, and then they're grumpy that they cut themselves.
C
The kubelet knows some of this (see the bug about the hostport manager), and kube-proxy knows some of this, and the API server knows some of this, and the controller manager knows some of this, and the network policy implementation knows some of this, and each of them kind of wants to know information about the others, because they're not completely mix-and-matchable. And so I keep coming back to, I feel like the model is fundamentally broken, and if we're going to try to make these things more aware of each other...
C
...we need to make them more closely coupled in the implementation. So it would be completely reasonable to say that, you know, this driver...
C
Let me back up. Imagine an omnibus driver that covered all of the various networking topics, right, and you pick one. So I'm going to pick the fubar network driver, and that includes which CNI I'm using, which services implementation, which network policy implementation, which IPAM, etc., and those things can all know about each other because they're all highly coupled, but from the outside it's effectively a black box.
A
A
So I think it's a great path forward. There have been some discussions around this already; I think Dan and Chris Luciano have had some side discussions about this kind of thing. Perhaps we should formalize it more inside SIG Network.
C
...work. So that doesn't help anybody in the short term; I mean, I'm totally on board, I would love to start brainstorming about this right after we get dual stack in. In the meantime, how much trouble are we making by not having any mechanisms or anything to offer here?
A
It means that your install process is a little bit more convoluted, because you have to sprinkle these kinds of configs in a couple of different places and, like, substitute values into perhaps DaemonSets, or things like that.
G
Yeah, and I don't think we need, like, a full API to get some benefit. I think maybe a low-hanging fruit is, like, an API server endpoint that just gives you the service data.
G
...server. Personally, I'm not sure how much progress that's making; last time I checked, it had stalled a bit.
A
F
H
C
So, at risk of contradicting myself, would it make sense for us to offer, like, an optional internally scoped CRD or something that was very specifically like: this is an implementation detail, and if you depend on it, you are, you know, inherently depending on the implementation, but here's some information you probably shouldn't know.
A
C
A
C
A
Well, yes, but what I mean is, like, there's a difference between the current way that we do cluster CIDR and pod CIDR allocations in network plug-ins with their own IPAM versus what we do for service CIDRs, where the service controller is the thing that does it. You can't swap that implementation out, so you either...
C
For better or worse, the only way that we currently offer to do the service CIDR is through the API server, and it currently tracks a bitmap of all the service IPs that it has allocated. Now, I know some customers have put, like, an admission controller in front of services and they do their own allocation, but the API server is still going to validate it against the range and a bitmap, right?
C
A
C
C
G
A
Okay, next up, Surya.
N
Yeah, so Dan, I already spoke about this, and a few of you have also spoken about this. Basically, I came across a bug where the user had multiple cluster CIDRs, and our plug-in was trying to configure kube-proxy for the local traffic detection, and it wasn't working, obviously. I was pointed to the specification that I've linked in the agenda.
N
I thought it was implemented; Antonio helped me verify that it's not. So, as per the proposal, we should be supporting cluster CIDR and node CIDR, interface, or bridge, but today I think we support only the cluster CIDR and node CIDR. I'm still not sure if it's fully supported or not; I am quite new to the project, so I might need help verifying that. And also the CLI part: according to the proposal there's also a CLI part, but I don't really care about the CLI part.
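For context, the proposal being referenced describes several ways for kube-proxy to decide whether traffic is "local": by cluster CIDR, by node CIDR (derived from node.spec.podCIDRs), by bridge interface, or by interface name prefix. A hedged sketch of the mode set as discussed here; the constant names follow the proposal's wording and may not match the spelling in the shipped kube-proxy config API:

```go
package main

import "fmt"

// DetectLocalMode names the local-traffic detection strategies described in
// the proposal. Only the first two were believed implemented at the time of
// this meeting.
type DetectLocalMode string

const (
	DetectLocalByClusterCIDR DetectLocalMode = "ClusterCIDR"
	DetectLocalByNodeCIDR    DetectLocalMode = "NodeCIDR" // from node.spec.podCIDRs
	DetectLocalByBridge      DetectLocalMode = "BridgeInterface"
	DetectLocalByIfacePrefix DetectLocalMode = "InterfaceNamePrefix"
)

// implemented reflects the state described in the meeting: cluster CIDR and
// node CIDR detection exist; interface/bridge detection is still to be written.
func implemented(m DetectLocalMode) bool {
	switch m {
	case DetectLocalByClusterCIDR, DetectLocalByNodeCIDR:
		return true
	default:
		return false
	}
}

func main() {
	for _, m := range []DetectLocalMode{
		DetectLocalByClusterCIDR, DetectLocalByNodeCIDR,
		DetectLocalByBridge, DetectLocalByIfacePrefix,
	} {
		fmt.Printf("%-20s implemented: %v\n", m, implemented(m))
	}
}
```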
N
C
C
Okay, so, yeah, he wrote it. If I recall, he wrote it so that there were lots of possible avenues to do it, but only implemented some of them, which is, I guess, par for the course. I think node CIDR is done, but we should probably double-check. So, would node CIDR be sufficient for what you're doing?
N
I don't think so. I mean, we discussed this, and our plugin does not really leverage that, because node CIDR is actually using node.spec.podCIDRs, or podCIDR, and we don't leverage that as of today. So we were actually looking to either have the interface thing implemented, which is what Dan Winship suggested, or maybe even the bridge, for that matter.
C
Okay, yeah, I think, I'm sure that he did not implement the interface prefix or bridge modes.
C
N
I can... I'm okay either way, I can take some time to implement this. It's just that I'm quite new, so it might take some time, and I might be bugging people a lot. But if that's okay, I can take this on. That's...
A
All right, thanks. Next: DNS for ClusterIP services and SRV records.
J
Hey, this is Harry. So I talked about this problem to, I think, Bowei and Rob. The problem we have is we have a ClusterIP service, and an in-cluster component wants to route traffic directly to pod IPs, so the endpoints of the service, but we cannot use any of the Kubernetes API.
J
So you cannot talk to the API server, so we want to solve this problem using DNS. The way to solve it is to not use a ClusterIP service but, I think it's called a headless service, which creates SRV records for each named port on the service, on the pods, right?
J
So what I wanted to know was, is there scope for the addition of SRV records for ClusterIP services, so that any component running inside the cluster can route traffic directly to an endpoint of the service?
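To illustrate the mechanism being discussed: for a headless Service, cluster DNS publishes SRV records per named port that resolve to individual pod endpoints, so an in-cluster component can discover them with a plain DNS query and no API server access. A minimal sketch; the service, namespace, and port names below are placeholders:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Standard Kubernetes DNS SRV form:
	//   _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local
	// For a headless Service, each SRV target resolves to one pod endpoint.
	_, srvs, err := net.LookupSRV("http", "tcp", "my-headless-svc.my-namespace.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, srv := range srvs {
		fmt.Printf("endpoint %s port %d\n", srv.Target, srv.Port)
	}
}
```

The ask in this agenda item is whether the same per-endpoint SRV records could also be published for ClusterIP services, which today only get the VIP record.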
J
C
J
J
C
H
C
J
C
You might be right, I don't recall that, but my memory is terrible. So, Harry, the issue that I am worried about is: the service author specifically said, I want this to be a VIP, and if we offer any record at the service's name that is different than that, it seems counter to what the user asked for.
J
...directly. So, like, we are asking users to, you know, configure a headless service, and, you know, in that route anyone can look up the SRV record.
M
C
Dan, remind me: doesn't the OpenShift implementation also expose the naked endpoints at, like, .ept.cluster.local, or something like that? Maybe that's what I...
H
C
C
Yeah, and it's certainly not part of the DNS specification, but maybe it should be. I mean, it doesn't sound egregious on its face. We'd have to think about scalability; we're effectively multiplying the number of records CoreDNS is going to manage.
C
C
I'm not... I would be okay to explore the idea. I don't have any bandwidth myself to try to write it up or do some exploration. Do you, Harry, or are you looking for somebody here to step up?
J
No, I'm happy to add in whatever I can do on my part, but what would be a good way, like, start a Google Doc and sort of socialize that amongst SIG Network?
C
Either start a doc or start a mailing list thread that explains what you're trying to do and what doesn't work currently, and then we could talk about it; we could brainstorm ideas and trade-offs about performance and resource footprint.
J
C
O
Yeah, I will try to make it quick. So there is a very... I don't think it's a complicated issue, like a small PR, that I would love somebody from this group to review. It requires kubelet approvers, but it's clearly SIG Network related. The reason I came here is I wanted to ask what the best practices are and, like, maybe...
O
...some standards for how, like, the kubelet operates with CNI. In this particular case, the deduplication of host port mappings happens in the kubelet and the result is then passed to CNI. So I was wondering: do we need to have the deduplication logic at all on the kubelet side, or do we need to pass it all to CNI and, like, forget about it?
C
O
Yeah, this is a restart of a PR that went rotten, so I just restarted it, and I cleaned up some logic around this PR as well. There were a lot of questions in the previous PR: how the name is used, how, like, the kubelet configures host port mappings, and I removed all those legacy codes, so it's much easier to review now.
C
Okay, so my response is the same as with the other PR, which is, I think it seems okay, but I don't know the internals of how the CRI implementations use this string, and so I'm a little bit afraid of that. But you know the CRI internals better than I do at this point.
O
So this name doesn't leak from this function at all. This function goes through the host port mappings in a pod definition and then, like, deduplicates ports. So if a host port mapping already exists, it wouldn't add that host port mapping for CNI to configure.
C
Okay, I see, that does seem really simple. In comparison, I seem to recall the other PR being more complicated than that. Did I just make a mistake?
O
O
...a lot more approachable, yeah. And the question is: is it normal that deduplication happens in the kubelet, or do we need to move this deduplication logic somewhere into CNI? And, yeah, I don't know.
O
So this is a host port mapping, so mapping a port on the node to a container port, and if the pod definition has multiple mappings that are exactly the same, then only one will be used; only one will be passed to CNI, and a warning will be reported to the customer, like, it's a warning klog message. So the question is: do we need this logic at all, or do we just pass all the port mappings as-is and CNI will figure out how to do this deduplication better?
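As a rough sketch of the behavior being described (types and names here are illustrative, not the actual kubelet code): the kubelet walks the pod's port mappings, keeps only the first occurrence of each identical host-port mapping, logs a warning for duplicates, and hands the deduplicated list to CNI. Whether this belongs in the kubelet or in the CNI plugin is the open question here.

```go
package main

import "fmt"

// PortMapping is an illustrative stand-in for the host-port mapping handed to CNI.
type PortMapping struct {
	HostPort      int
	ContainerPort int
	Protocol      string
}

// dedupe keeps the first occurrence of each identical mapping and warns on
// duplicates, mirroring the kubelet behavior described in the meeting.
func dedupe(in []PortMapping) []PortMapping {
	seen := map[PortMapping]bool{}
	var out []PortMapping
	for _, pm := range in {
		if seen[pm] {
			// Stands in for the klog warning mentioned above.
			fmt.Printf("warning: duplicate port mapping %+v ignored\n", pm)
			continue
		}
		seen[pm] = true
		out = append(out, pm)
	}
	return out
}

func main() {
	mappings := []PortMapping{
		{HostPort: 8080, ContainerPort: 80, Protocol: "TCP"},
		{HostPort: 8080, ContainerPort: 80, Protocol: "TCP"}, // exact duplicate
	}
	fmt.Println(dedupe(mappings))
}
```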
A
Okay, I guess we should continue this in the issue. I mean, I don't see a particular problem with it in CNIs and CRIs, and I was just looking at the CRI-O implementation; I don't think it would have a problem with having multiple host ports mapped to the same container port.
O
Yeah, but then, like, this is fine, this addresses this PR, but then the next question would be: do we need this deduplication logic at all? Would the CRI be fine if the same mapping could be, like, asked to be configured twice?
A
O
C
C
O
So, for the first one, if you're fine with that, I would appreciate a review and LGTM on the pull request, and for the second one, I can just leave it as this, or we can keep pursuing it and maybe file a bug to discuss.
C
L
A
Okay, we are way over time. So, unless somebody has something absolutely important, let's punt anything else to next time.