From YouTube: Antrea Community Meeting 11/23/2020
Description
Antrea Community Meeting, November 23rd 2020
A
Recording on now; as you can hear from the very loud voice from Zoom, this meeting is now being recorded. So for today, the agenda that was expected... we were expecting a presentation around cloud-native CNFs, you know, containerized network functions. But Arun Kumar and Saidulu, who were supposed to give this presentation, are, at least for now, not on the call. Zhenjun, I believe you were in contact with them. Do you think they are going to attend today's meeting?
B
Actually, I'm not sure. I pinged Arun on GitHub, but he didn't reply, so maybe they will skip this one.
A
Perfect. So let's say that we can probably keep waiting for them, but in the meanwhile, you know, we need to go ahead with our meeting.
A
We don't have any other official topic on the agenda for today. What we do have is a request from Marcus, who instead is on the call, to provide an update on their current status, of which, I have to be honest, I don't know exactly what it refers to. But if there is no other pressing issue pertaining to the next Antrea release, or any other feature under development or currently being designed, perhaps we can let Marcus introduce what he would like to talk about. So, I don't know, is there any urgent, or relatively urgent, request to discuss any other topic? Because I did not receive any requests to put any topic on the agenda for today.
C
...to talk about IPAM. But then I was assuming that we would have to reopen this issue again once we have the discussion regarding the network function subject, which is also going to require some IPAM, as far as I got it from our emails. So if you think that there is value in discussing IPAM today, then we can do it.
A
What you say is true, Kobi. The thing is, the fact that the CNF proposal also includes, let's say, a subtopic around IPAM does not necessarily mean that we should not discuss IPAM until we hear this proposal.
A
So I'm considering... probably, Kobi, if that's okay with you, you can introduce the progress that you've made. I mean, not the progress in terms of coding, of course, but the progress in terms of design, the ideas that you've had so far. That should not take long, and then maybe we can let Marcus introduce his topic.
C
Okay, so the initial reason the whole discussion around IPAM began is that on EKS and IKS we cannot rely on the controller to run the node IPAM service, so we would have to find some other means of supplying IPs to the nodes. As of today, each node gets its CIDR from node IPAM when it initializes, and from there on it uses host-local to allocate IPs from this IP block.
C
However, we had several requests for an upgraded IPAM mechanism in Antrea. One of them was to use smaller blocks and allocate more IP blocks to a node whenever it needs them. The other one was to do something different, such as allocating a CIDR to a namespace. Other CNIs have that functionality, such as Calico: they have their own IPAM, which is quite capable. Yet building such a mechanism would affect the whole Antrea design.
C
We won't be able to allocate a gateway on the node itself, as the CIDR doesn't belong to a node; it could be spread around several nodes, and we kind of have to consider how to handle this.
C
I came up with two different proposals. We kind of decided to focus on allocation of smaller CIDRs to the node, meaning initially we allocate a small block, like less than the /24 which we allocate today by default, and then, once the node is running out of addresses, we would allocate more IPs to that node. Any questions so far?
C
Three, two, one... okay, I'll go ahead. So Salvatore and I spent some time considering several ways to implement this. We decided to leave aside for now the per-namespace CIDR allocation, and focus on something which would be possible to implement in the next release, or maybe two releases from now.
C
So obviously, unlike now, where the controller, either the kube controller or the Antrea controller, allocates a CIDR for a node whenever it initializes, now it will have to happen multiple times in the lifetime of the node.
C
So the CIDR would still have to be allocated by the controller, the Antrea controller, as it does today.
C
We would have to do some work on the agent code to be able to cope with multiple CIDRs. The only effect that we concluded so far was that for each new CIDR which would be allocated, we would have to configure another gateway IP on the gateway port, the gateway interface on the Open vSwitch bridge. Meaning, if we had CIDR X and we then add CIDR Y, we have to add another gateway IP.
C
It would be better if we allocate more CIDRs before the IP pool exhausts. Meaning, if we have the node consuming more IPs only once its IP pool is empty, we might be in a situation where it is unable to spin up new pods due to a connectivity issue or anything like this; plus it would cause some latency.
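The exhaustion concern above suggests a simple watermark scheme: the agent requests the next CIDR while it still has spare addresses, so pod creation never blocks on a round-trip to the controller. A minimal sketch in Python for illustration (Antrea itself is written in Go); `NodePool`, `request_cidr`, and `low_watermark` are hypothetical names, not Antrea APIs.

```python
# Illustrative sketch of the pre-allocation idea: a node-local pool that
# asks the controller for a new CIDR *before* the current one is exhausted.
# All names here (NodePool, request_cidr) are hypothetical, not Antrea APIs.
import ipaddress

class NodePool:
    def __init__(self, request_cidr, low_watermark=4):
        self.request_cidr = request_cidr    # callback to the controller
        self.low_watermark = low_watermark  # top up when free IPs drop below this
        self.free = []                      # free IPs across all owned CIDRs
        self._grow()

    def _grow(self):
        cidr = ipaddress.ip_network(self.request_cidr())
        hosts = list(cidr.hosts())
        # Reserve the first host of each CIDR as its gateway IP.
        self.free.extend(hosts[1:])

    def allocate(self):
        ip = self.free.pop(0)
        if len(self.free) < self.low_watermark:
            self._grow()  # pre-allocate early, before the pool is empty
        return ip

# Toy controller handing out /28 blocks carved from 10.10.0.0/24.
blocks = ipaddress.ip_network("10.10.0.0/24").subnets(new_prefix=28)
pool = NodePool(lambda: str(next(blocks)))
ips = [pool.allocate() for _ in range(20)]  # more than one /28 provides
```

A real agent would also have to persist which CIDRs it owns, so the pool survives an agent restart.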
C
As for the whole design itself, we considered two different options. One of them is that the controller has a big block of IPs, pretty much as it has today. It would divide it into sub-CIDRs, also as it does today: today it takes, for example, a /16 and divides it into /24s, which it spreads around the nodes. So it would divide it into smaller sizes and would expose them via a custom CIDR resource.
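The split described here, a large cluster block carved into fixed-size per-node sub-CIDRs, is plain prefix arithmetic. For illustration, here is how it looks with Python's standard `ipaddress` module (the cluster CIDR value is just an example):

```python
# The controller-side split described above: take a big block (e.g. a /16)
# and carve it into fixed-size sub-CIDRs that can be handed out to nodes.
import ipaddress

cluster_cidr = ipaddress.ip_network("10.0.0.0/16")

# Today: a /24 per node -> 256 blocks of 254 usable pod IPs each.
per_node_24 = list(cluster_cidr.subnets(new_prefix=24))

# Smaller blocks, e.g. /26 -> 4x as many blocks, 62 usable IPs each,
# so nodes running few pods waste far less address space.
per_node_26 = list(cluster_cidr.subnets(new_prefix=26))

print(len(per_node_24), per_node_24[0])  # 256 10.0.0.0/24
print(len(per_node_26), per_node_26[0])  # 1024 10.0.0.0/26
```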
C
Each agent which requires IPs would mark one of those CIDRs as its own, as owned by it. Obviously, we would have to have some concurrency-handling mechanism here, so if two different nodes claim the same CIDR, one of them should fail and retry to take something else.
C
When an agent concludes that a CIDR is not required anymore, that it doesn't have any pods which have IPs from that CIDR, and it has some spare IPs on a different CIDR, then it would be the agent's responsibility to release this CIDR back to the controller.
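In Antrea the claim step would presumably be a CRD update protected by Kubernetes optimistic concurrency, so a lost race surfaces as a conflict error and the agent retries with a different CIDR. A toy Python sketch of that claim/retry/release protocol; all names (`CidrStore`, `try_claim`, `claim_any`) are hypothetical, and a plain dict stands in for the API server:

```python
# Sketch of the claim/release protocol described above. In Kubernetes this
# would be a CRD update protected by optimistic concurrency (resourceVersion);
# here a compare-and-set on a dict stands in for that.

class CidrStore:
    """Stands in for the API server: CIDR -> owning node (None = free)."""
    def __init__(self, cidrs):
        self.owner = {c: None for c in cidrs}

    def try_claim(self, cidr, node):
        # Compare-and-set: succeeds only if the CIDR is still unowned,
        # mirroring a conflict error on a stale CRD update.
        if self.owner[cidr] is None:
            self.owner[cidr] = node
            return True
        return False

    def release(self, cidr, node):
        # The agent's responsibility: hand an empty CIDR back to the pool.
        if self.owner[cidr] == node:
            self.owner[cidr] = None

def claim_any(store, node):
    # Retry loop: on a lost race, move on and try the next free CIDR.
    for cidr in store.owner:
        if store.try_claim(cidr, node):
            return cidr
    raise RuntimeError("pool exhausted")

store = CidrStore(["10.0.0.0/26", "10.0.0.64/26"])
a = claim_any(store, "node-a")  # takes the first free CIDR
b = claim_any(store, "node-b")  # first is taken, falls through to the second
store.release(a, "node-a")      # no pods left on it -> back to the pool
```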
C
Where are the difficulties in this proposal? When the node goes down. As the node pretty much manages the IP consumption, when it goes down the controller is unaware of what IPs it is holding. I mean, it could query it, but it has to be managed by the agent, like a node failure: it should be decided when... whether this node is not coming back, and those IPs released.
C
The agent should know its IP status. Today it is solely managed by the host-local IPAM plugin, which has its own block, its own state management, and the agent is unaware of the IPs, of the IP address pool status. So the agent should have visibility into this. And I think that covers everything.
B
C
...a reason to create multiple ports? I think that assigning multiple IPs to a single port is good enough, right?
B
A
I mean, thanks, Kobi, for your introduction. I think there are three follow-ups to what you just discussed. One is regarding the design ideas themselves: how to allocate additional subnets to nodes upon request, how to collect them, how to, you know, manage the IP space for the nodes.
A
The second is about what Zhenjun was just asking: the impact on Antrea forwarding, and not only forwarding, whether there could be impacts also on other Antrea functionalities.
A
I believe the best next step should be to put pretty much the same design ideas that you have expressed here on GitHub, so that, you know, everybody can comment on those. And the third aspect, the third area of discussion around this proposal, is around the requirements.
A
As you said, together with me... that was probably my idea, to make things a little bit simpler: to stick to the idea of the subnet-per-node IPAM, which I believe fits nicely into Antrea and the Antrea network model. Any idea of flexible IPAM, as long as we keep the subnet-per-node allocation idea, is going to be relatively easy to implement, in my opinion. At the moment, a subnet-per-namespace approach is something that would require more profound changes in the Antrea architecture.
A
My own experience with cloud-native networking tells me that IP allocation is a sort of... I cannot say secondary problem, but service discovery, for instance, is way more important. So, in my opinion, I don't think that having each namespace with a distinct IP allocation range is a very important requirement. But we know that opinions are very different on this topic, so I would like to hear your feedback as well.
B
For example, when running some CLI commands, when you ping from a node you don't specify any source; it will assume the IP will be the gateway-0 IP, for instance. If you have multiple IPs on gateway-0, then we probably need to think about what the default IP will be. And that is just one example. I think another change we probably need to consider is the forwarding for the local pod part. Because today, I think, only for the traffic to the remote node do we assume it goes through routing, and we, for example, decrement the TTL for the packet; for the local pod it is always L2 forwarding. Since you would have multiple subnets per node, even local traffic is actually routed, and maybe we also need to at least decrement the TTL. That is what I'm saying.
C
Actually, the TTL management is something which is missing in my... okay, noted.
B
C
Yeah, but if we kind of go towards a design where we allocate an IP pool which is not related to a node, like per namespace or per anything else, then we can have much larger challenges, as we don't have a gateway port which is local to the node, unless we kind of allocate an IP for each node which would be a gateway out of the CIDR block. But I don't know if we want to do that.
B
C
We don't want to route one node via another node; that doesn't make much sense either. So we kind of have to come up with something creative, I don't know.
B
Right. I think if we do this flexible IPAM, probably we should first consider what we want to do: do we still want to do layer-2 forwarding, or do we just do layer 3, like just install some /32 routes for every pod, no matter if it is local or remote?
E
I think in general... I know OpenShift used to do some kind of a... I think they used to do a CIDR for namespaces, but it seems to me like that's really hard, and I agree: I'm not sure why people want to do it like that. It may have been something they carried along from the OpenShift 2.whatever days, yeah.
B
D
Yeah, I definitely think with routable pods it's a very valid use case, one that we see a lot with NCP as well: the per-namespace CIDR, when it's routable, especially in hybrid environments where we do have legacy applications. I think it's a very big use case there. But the number of use cases for routable pods is also limited, meaning that the separation of the CIDRs is definitely more important for smaller CIDRs. But the per-namespace CIDR... I have multiple environments where that is a valid use case today, and they are all routable environments. I don't see that as a valid use case, or I don't see the reasons for it, in a non-routable pod environment.
A
If we do that, should we have a sort of distributed-router approach in Antrea, where you don't really have local gateways at all? Or, for instance, should every namespace have a designated gateway node, maybe an active and a backup node for providing redundancy? And then we need to provide traffic steering to make sure that every namespace will always have a predefined node as a gateway.
B
A
Yes, yes, that would make sense. I was considering, for instance, other solutions that provide container networking based on Open vSwitch, like OVN, for instance, and at the moment they don't support a CIDR per namespace; they stick with the idea of a subnet per node. As for OpenShift, which Jay was mentioning, I am fairly sure, like 90 percent sure, that the idea of supporting a CIDR per namespace has been dropped in openshift-sdn for OpenShift 4.
A
So at the moment, even there, there is just a subnet per node. But I think that, as a follow-up from this meeting, it will be interesting, apart from the, let's say, forwarding problem, just for the IPAM problem...
A
Otherwise, the alternative would be, in my opinion, from an architectural perspective, to have an IPAM controller, an Antrea IPAM controller, which is potentially separate from the Antrea controller and only serves IPAM; where you can have one controller whose strategy is distributing subnets to nodes, and another controller whose strategy is distributing subnets to namespaces, in the likely case that we will not find a design that can accommodate both scenarios. Kobi, what do you think of this? Is that something that makes sense to you?
C
One of the names which was dropped on that email thread regarding IPAM was Whereabouts IPAM, which distributes IPs from, like, a big CIDR block to everywhere.
C
As far as I gathered from my brief look at it, it's kind of the same case. So if Whereabouts IPAM is relevant, then yeah, we kind of have to find some general solution for traffic management, for forwarding, when IPAM doesn't associate an IP block with the node.
A
Waiting, as usual... 30 seconds.
A
All right. So, Kobi, do you think that we have enough on this to publish the proposal on GitHub?
C
Yeah, I believe so. We have a basis for reference for a wider discussion, maybe.
A
Perfect, so we should definitely do that, in my opinion. Perfect; I think that is a conclusive discussion on IPAM for today. So the next topic on the agenda will be Marcus, introducing his, or their, work. Marcus, are you still online, still with us?
F
Yep, that was great, yeah. Yeah, thanks for the invitation. I'm Marcus from the company Glasnostic. So the status is that we have a working containerized network function that is just working with a raw socket. We set that up with Antrea 0.11 on a Kubernetes 1.18 cluster.
F
So what we're basically doing is we're sending, via OpenFlow, basically all the traffic between pods to another pod that is controlling the traffic, for two reasons: either to disallow a lot of the traffic, like a network policy, or to slow down the traffic, for rate limiting. And now, as we're running with a raw socket, the problem that we're obviously having is the performance, so we're very interested in the DPDK support. And the other problem that we're having is that the customer will want to run this using Kubernetes 1.15, and, as far as I can see, Antrea is now supporting only 1.16. So I'm not sure what's the best approach to deal with these two problems. That's why I basically joined and want to discuss it with you guys.
A
Perfect. So I'm probably not in the best position to comment on your legacy about Kubernetes 1.15, but let's say that what is supported by Antrea is the support matrix that is actively tested in the Antrea CI environment.
A
As you perfectly know, there is no reason why Antrea would actually not support Kubernetes 1.15; it's just that Antrea declares as supported what is actually tested within the CI pipeline. Now, whatever packaging is made for Antrea, that, in my opinion, depends on the packager, whether they want to extend or restrict the support matrix.
A
So let's say, for instance, company Foo decides to make a commercial product based on Antrea. If in their environment they have the ability to test Antrea with Kubernetes 1.15, then they should be able to declare support for Antrea on Kubernetes 1.15.
A
I think that your question, however, and one which deserves discussion here, is: let's assume that you find a bug in Antrea which is specific to the Kubernetes 1.15 integration. Will you get an upstream fix? Will you be able to propose an upstream fix?
A
Or will you be told: sorry, we don't fix it, because we don't support Kubernetes 1.15? In my opinion, any fix, as long as it does not break support for existing releases, and as long as it comes with proper testing, can be accepted. But I would also like to hear the opinion here from the other maintainers on the call.
G
Yeah, I agree. And I think currently even Antrea 0.11 can support Kubernetes 1.15; just some YAML file needs to be updated. We are using some resources, some APIs, that were introduced with Kubernetes 1.16.
F
Yeah, Jen, I think that was actually the problem, with some update: you updated those resources. So what would be the strategy then? Let's say we install with 1.15 and we send you guys, as a fix, those 1.15-compliant resources.
G
I think it's possible. Maybe we could add... we could publish a legacy YAML file for older Kubernetes versions. But maybe we cannot guarantee that: we cannot test, or always test, all the old versions in our CI, because there are too many versions.
A
Yes. I mean, personally, I think DPDK support would be welcome in Antrea. The follow-up question for me is: is this something that you're already working on, or planning to work on?
F
The thing is, we just finished making this CNF work with raw sockets. So what we tested is... we tested Multus, because we need that for DPDK, so we also made that run with a new version of Antrea; that didn't run with 1.15, that's why I'm asking. So that's the current status. But we didn't continue with the DPDK support yet, and we are very happy to help with that. So if you point us in the right direction, that will be the next thing we will be working on.
A
Yes. And I don't know what the feedback is here, especially from Zhenjun, but in my opinion the next step would be to sync up with the other team that was supposed to present today about CNFs with Antrea, because I do believe they also have a proposal around DPDK.
B
I think they call it multiple network support. Basically, they propose to use some CRDs; we're still discussing whether we use the Multus NetworkAttachmentDefinition spec or some other CRD to define a network. And then you can put an annotation on a pod to say: I want a secondary interface on a specific network. It's very similar to Multus's way of defining a secondary interface on separate networks. And in doing that, I think we also said we want to support OVS-DPDK for the secondary network.
B
Zhenjun, sure... probably we can start some design work on this GitHub issue, right? So Arun and the other guys, the other reviewers, can also comment on that. Yes.
A
And also, since all the interested parties are on the Slack channel, perhaps the conversation can go ahead there. I noticed, Marcus, that you're there; the other people working on this proposal are also on the Slack channel. Perhaps we can move the conversation there, to make sure that all your requirements are either already covered by the proposal that's being prepared, or whether we might want to extend that proposal to make sure that your requirements are covered too.
B
A
F
True. Okay, sorry, I just didn't get that 100 percent. So the discussion will be done on the Slack channel, on Antrea's Slack channel, about that feature?
A
So let's say that we have two places where we can continue the discussion. I would say that for topics strictly related to design and requirements, I would use GitHub, because there is already a proposal being discussed on GitHub for this. For, let's say, more human interaction, I would consider using Slack.
A
F
A
That's for sure. All right, and is there anything else to add on this topic?
B
A
E
Currently, when I run it... what I'm wondering is... I mean, at a high level, it doesn't support certain types of node traffic, so for that we fall back to the userspace kube-proxy.
E
So does that mean that, if that's the case (feel free to interrupt me if I'm going the wrong way here), but if that's the case, does that mean we have some kind of redundant kube-proxy rules? Like, we have some OVS rules that are programmed that also exist in the userspace kube-proxy?
E
And then what about the other routing rules, right? Because if you're running the userspace kube-proxy, presumably isn't it writing rules for all service-to-pod traffic? So does that mean we have redundant rules there? I mean, I don't even know if redundant rules are bad or anything; I'm not saying it's bad, I'm just curious whether that's something we should expect.
B
F
H
Okay, okay, let me repeat your question. So you are asking if there are redundant rules between the Antrea proxy and the kube-proxy, right?
H
Yes. So if you enable both the kube-proxy and the Antrea proxy, there will be some redundant rules. It's because we still depend on the userspace kube-proxy at startup. So if you disable the Antrea proxy, there will be no redundant rules.
E
Okay, and if you disable Antrea proxy on Windows, what's the... I mean, so I guess the question behind the question is: I'm just trying to think in terms of production Windows deployments, like, what's the right thing to do, right? So, selfishly, all I'm really thinking is: well, okay, of course, on our end, on the Cluster API side, we're going to be supporting Antrea on Windows, and I just want to have a neat...
E
I want to be able to tell a networking story that's as simple as possible, that makes sense. So should I be using kube-proxy, or should I be using Antrea proxy, just because, I mean, it's one less component?
B
So I think, for now, you need to use both the kube-proxy and the Antrea proxy, okay?
B
It's like... you can say it's simpler for users. I am not sure we should say duplicated rules, though. You are right that both the kube-proxy and the Antrea proxy are trying to implement load balancing for service traffic, yeah. But we don't really... since the kube-proxy handles the NodePort, for example, and we skip some... sorry, let me put it another way.
E
B
NodePort is now being handled by the kube-proxy in the userspace, for the NodePort traffic specifically, since it enters the node from the node's default namespace.
B
So that's the kind of situation right now. For the longer term, I think we will try to implement NodePort with Antrea proxy too, and then, when that is done, you can stop the kube-proxy, at least on the Windows node.
E
Okay, cool. So then my next follow-up I wanted to ask was... you know, this is more of an idea that I was just wondering whether it fits in or not. I mean, in SIG Windows, some of you may have heard of csi-proxy, which is... it's like wins, right, except that what they do is they have specific calls, specific API calls, that have gRPC wrappers.
E
So what that means is that only very specific Windows functions can be called, right? So what I was wondering was: in Antrea, we currently start up the kube-proxy in userspace. If we're not running the kube-proxy in userspace, then what other requirements do we have on wins, on rancher wins? Like, do we need it?
H
Oh, I'm repeating your question, so, yeah: you're asking if Antrea needs more functions or requirements from rancher wins, right?
H
Yeah, yeah, we indeed have some more requirements, because, you know, the current implementation of wins has many limits. For example, we can just run a binary from the host, but we cannot run a script from the host.
E
H
We just need it to run the antrea-agent and the kube-proxy binaries.
E
Yeah, exactly, yeah. Okay, cool, yeah. No, I think that's fine. I was just trying to see whether, at some point, since (and this is more of an idea than anything else) there's precedent in SIG Windows for csi-proxy, which is something that does what wins does but does not completely open up a socket where you can just do anything you want...
E
You know, like rancher wins does. When we start going into more production deployments, you know, people may start asking questions about, well, do we really want to run rancher wins? And in that situation I was just wondering whether maybe a cni-proxy, you know what I mean, something that constrained the number of API calls that you could make, that basically copied what rancher wins did but only worked specifically for the stuff that Antrea wants to do and nothing else...
E
You know what I mean? So, in other words, literally copy-paste the socket stuff that we're doing, and literally have that thing launch the antrea-agent and nothing else, you know what I mean, via gRPC. That would kind of reduce the...
E
What's the buzzword that they always say... the blast radius, or, you know, the surface area, right? It would reduce our surface area, because rancher wins is a pretty big security hole as is, and the way they've dealt with that, again, in SIG Windows, is by creating csi-proxy, which is a constrained version of what rancher wins does. So I was just kind of wondering whether maybe having a cni-proxy type thing might be an analog that we might want to consider. But no strong opinions.
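The csi-proxy pattern Jay is describing boils down to replacing an arbitrary command socket with a fixed allowlist of named operations. A toy Python sketch of that idea (csi-proxy itself is Go plus gRPC; the operation names and commands below are made up for illustration):

```python
# Sketch of the constrained-proxy idea: instead of exposing a socket that can
# run arbitrary host commands (the rancher/wins model), expose only a fixed
# allowlist of named operations, the way csi-proxy wraps specific Windows
# calls behind gRPC. Everything here is hypothetical illustration.

ALLOWED_OPS = {
    # op name -> the exact host command it maps to; nothing else is reachable
    "start-agent": ["antrea-agent", "--config", "antrea-agent.conf"],
    "start-kube-proxy": ["kube-proxy", "--proxy-mode", "userspace"],
}

def handle_request(op):
    """What the proxy would do for each incoming (gRPC) request."""
    if op not in ALLOWED_OPS:
        # Arbitrary commands are rejected: this is the surface-area reduction.
        raise PermissionError(f"operation {op!r} not in allowlist")
    return ALLOWED_OPS[op]  # a real proxy would subprocess.run(...) this

cmd = handle_request("start-agent")
try:
    handle_request("powershell -c anything")
    blocked = False
except PermissionError:
    blocked = True
```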
B
Yeah, I think it's an interesting idea. Are you saying something generic for any CNI, or just for Antrea?
E
I mean, I guess it really would be specific to Antrea, but there may be some way that you might be able to think of something being generic. I see... you know, like, maybe if you assumed that every CNI had an agent that it needed to run on every node as an agent process, maybe you could make it generic in that sense, right? You know, so there may be something you could do there to make it...
E
You know, but I mean, at that point it wouldn't really be a cni-proxy; it would be more like a... you know, like in csi-proxy, the idea is there are just generic commands that every CSI needs to run, that are PowerShell commands, but every CSI doesn't necessarily need to run arbitrary Windows commands, right? And so I think it's a similar thing for CNIs: there's probably a small space of PowerShell commands you might want to run, and they're probably netsh commands, but then there's the agent-restarting stuff that, you know...
B
Okay, yeah, I think I got your idea. I'm not very familiar with the csi-proxy implementation, maybe it's a wins-like small thing. How about we do some more discovery on this one? Yes, that'd be something I'm interested in, interesting to learn more about, actually.
E
Yeah, definitely, yeah. What's the timeline for how long we should keep the Windows kube-proxy in userspace? Are we going to have that... are we going to require that for another year, or another six months?
B
Definitely not. I think we are already working on NodePort support by Antrea proxy; I think we already have a pull request here. Do you guys know when we plan to enable NodePort with Antrea proxy? Then we can remove the kube-proxy, yeah.
I
Yes, we have a plan; we want to totally replace the kube-proxy with Antrea proxy. I think maybe the roadmap is about two releases, I mean maybe 0.13 or 0.14: in Antrea 0.13 or 0.14 we want to totally remove the kube-proxy.
I
E
All right. I recently ran the SIG Windows tests, the conformance tests, and I ran them on a Windows cluster, and only five of them passed. But I possibly have some other things wrong with it.
A
Okay, cool, awesome. All right, thanks, Jay. So we are slightly over time. I have one 30-second announcement to make: some community members are asking us to reconsider meeting times, to make them more friendly to attendees in the United States, especially those in the central and eastern areas of the United States.
A
As a matter of fact, we have kept this schedule now for over six months, and, as you know, it is probably more friendly to, let's say, Asian and maybe European countries. So, since there is no way that we can have a single meeting time that will make everyone happy, we are considering again a proposal for alternating meeting slots. We don't have to make a call now, of course. According to the last survey that we did, about six months ago, there was a split between, you know...
A
I mean, the slot that we have now was likely preferred because most community members prefer not to have a rotating schedule, mostly because every week you have to wonder: is this the AM one, the mid-AM one, or the PM one?
So
that's
why
most
people
prefer
a
single
a
single
time,
but
anyway,
since
we
many
several
community
members
are
not
able
to
join
the
meeting,
we
will
now
reconsider.
It
probably
reopen
the
poll,
and
the
discussion
will
happen
on
slack.
A
So if you are interested in providing your feedback, please let us know. With that said, I think that is all for today. We are unfortunately already three minutes over time, and I would like to apologize for that. Is there any final question or comment? Waiting 15 seconds.