From YouTube: Antrea Community Meeting 08/10/2020
Description: Antrea Community Meeting, August 10th 2020
C: Yeah, I think people agreed that talking about the progress on the cloud support side was a good idea, and Rahul, who has been working on this a lot, agreed to join the call.
B: That's right, that's right! I was looking at the Slack channel; that was Pooja suggesting it. So, we have quite a few people on the call, 20 people right now, and the recording has started. I guess we can start with the first topic on the agenda, which would be Rahul providing an update on Antrea cloud support, if I'm not mistaken.
C: Yes, I only let Rahul know about this like 30 minutes ago, so this is just an informal discussion about what we have so far and the recent changes.
E: Okay, all right. With respect to cloud, we started the effort with aks-engine. aks-engine is an open-source project from Microsoft that helps deploy a Kubernetes cluster in Azure.
E: As part of that effort, we integrated Antrea as a supported CNI, including a CNI chaining mode, in which Azure CNI is chained with Antrea: Antrea enforces network policy only, and Azure takes care of the networking part.
E: We support the regular encap mode in aks-engine too. So we started with the regular encap mode in aks-engine, and then we did CNI chaining in aks-engine with Azure CNI, where Antrea is used just to enforce network policy. The advantage of doing the chaining is that the networking is taken care of by the underlying cloud CNI, like Azure CNI, so the pod actually gets an IP from the VNet rather than an overlay IP.
E: Azure also provides a managed service, AKS (Azure Kubernetes Service, sorry), where you can use the Azure portal or the Azure CLI to deploy a cluster, and the cluster is managed by Microsoft.
E: So it is a managed service. Starting with the 0.9 release, which is happening this week I think, we support CNI chaining with Azure CNI. Basically, you deploy a cluster using the Azure managed service, AKS, and choose Azure CNI as the networking plugin. Today they support Calico and Azure as the policy plugins in their managed service. Note that aks-engine is one project and the AKS managed service is a different project.
E: aks-engine is open source, so we can contribute and add our CNI to that project, but the AKS managed service is managed by the Microsoft Azure folks, so they only support two network policy CNIs: Azure CNI itself and Calico. Cody can probably share more details, but we plan to add Antrea as one of the supported CNIs in the AKS managed service as well, which requires Microsoft's support.
E: But what we did is: if you deploy a cluster using Azure CNI and don't select any network policy, you can apply the Antrea YAML out of band once the cluster is up and running, and Antrea will enforce network policies only; it will operate in network-policy-only mode.
E: So that's where we are with the 0.9 release: we will support the AKS managed service, which requires an out-of-band installation.
A: If somebody in the community wanted to do this today, is it possible? Can you show them where they would go to use Antrea with aks-engine, for example?
E: Sure. This support will be enabled in the 0.9 release; right now it is in the master branch. Basically, there are some prerequisites: you install the Azure CLI, create a resource group, then create an AKS cluster and specify just the network plugin as azure.
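The steps he describes can be sketched with the Azure CLI (a sketch only; the resource-group and cluster names are made up, and the exact flags should be checked against the `az` documentation):

```shell
#!/usr/bin/env bash
# AZ can be overridden (e.g. AZ="echo az") to dry-run the commands.
AZ="${AZ:-az}"

create_aks_cluster() {
  # Create a resource group, then an AKS cluster with Azure CNI as the
  # network plugin and *no* network policy selected, so that Antrea can
  # be applied out of band later.
  $AZ group create --name antrea-rg --location westus
  $AZ aks create \
    --resource-group antrea-rg \
    --name antrea-aks \
    --network-plugin azure
}
```

After the cluster is up, `az aks get-credentials --resource-group antrea-rg --name antrea-aks` fetches the kubeconfig.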
E: This deploys an AKS Kubernetes cluster on the Azure cloud; you fetch the credentials and see that the cluster is up and running. Then, to enable Antrea in CNI chaining mode, we expect the customer or user to apply the Antrea node-init YAML, which basically preps the node in such a way that the Antrea CNI can operate in CNI chaining mode. We modify some settings on the worker node, like changing Azure CNI to operate in transparent mode.
E: Azure CNI supports two modes, bridge mode and transparent mode, and for the Antrea CNI to work with Azure CNI it has to operate in transparent mode. That is what this node-init YAML does: it is basically a script that toggles the mode. Then, with the 0.9.0 release, once this is done you can apply the Antrea YAML, and once applied it will operate in CNI chaining mode, because in the antrea-aks YAML we have set the traffic mode to network policy only.
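For reference, the setting he is describing lives in the antrea-agent configuration embedded in that manifest; a minimal sketch of the relevant fragment (assuming the option name from the Antrea configuration of that era; check the shipped antrea-aks.yml):

```yaml
# Fragment of antrea-agent.conf as embedded in the antrea-aks manifest.
# networkPolicyOnly tells the agent to leave pod networking to the cloud
# CNI it is chained with, and only enforce NetworkPolicies.
trafficEncapMode: networkPolicyOnly
```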
E: Since that is set in the YAML, the user doesn't have to update any parameters or config in the YAML file. This just shows that there is the node-init, and the Antrea agent and controller running on this AKS cluster. One last thing we need to do: once you bring up the AKS cluster, there are a few pods, like CoreDNS, the dashboard and kube-proxy, already running.
E: They are up and running with Azure CNI itself, and once you insert the Antrea CNI, Antrea is not aware of these pods; there could also be some deployments that Antrea is not aware of.
E: So what we recommend is: once you have installed Antrea, you should restart the pods in all namespaces, be it kube-system or any deployment in any namespace, so that the pods can be managed by Antrea and Antrea can enforce network policy. Right now it's a manual step; we expect the user to delete the pods that were already running without Antrea.
H: So Cody, how far have we gotten with getting official support from Microsoft? Because if, at the initial stage itself, you can specify that Antrea will be the official CNI for the cluster, then we don't need to restart these pods, because Antrea would be there from the beginning.
A: We have a similar, maybe not exactly related, situation: any managed Kubernetes service provider needs to ensure that all the components of the CNI have been installed before any pods start, so that we can make sure they get plumbed with the right CNI, and we are aware of those pods and can apply network policy.
E: Okay, that was with respect to the AKS managed service. There is something similar called aks-engine; there we can contribute, and it supports a lot of CNIs.
E: Let me show you that. With respect to network policy, they support Calico, Cilium and Antrea; we added Antrea as a supported network policy CNI in the aks-engine open-source project. The way you deploy a cluster is you specify a JSON file in which you select your network policy plugin and network plugin, like Antrea or Azure or whatever.
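The JSON cluster definition he mentions (the aks-engine "API model") might look roughly like this (a sketch from memory of the aks-engine schema; the field names and values should be checked against the aks-engine examples):

```shell
#!/usr/bin/env bash
# Write a minimal aks-engine API model that selects Azure CNI for
# networking and Antrea for network policy (the chaining setup above).
APIMODEL="$(mktemp)"
cat > "$APIMODEL" <<'EOF'
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPlugin": "azure",
        "networkPolicy": "antrea"
      }
    }
  }
}
EOF
# The cluster would then be deployed with something like:
#   aks-engine deploy --api-model "$APIMODEL" ...
echo "wrote $APIMODEL"
```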
F: A quick question here: did you modify this page to add Antrea?
E: Yeah. So this is how you deploy it. As part of another update, we are planning to enable Windows support in aks-engine too; a couple of colleagues are working with me on that, and hopefully we can get Windows support in aks-engine.
F: For aks-engine, remind me: do we support only the policy-only mode already, or do we support both encap and policy-only?
F: I assume there will be some validation; for example, you cannot select Antrea as the network policy plugin if the network plugin is not compatible.
E: Okay, so for policy-only mode, if you are trying to do it in a bring-your-own cluster, then first the Azure CNI needs to be installed, and I'm not sure if Azure CNI can be... let me check.
F: So Cody, for your question: at least we don't support anything for a DIY cluster for the policy-only mode.
F: I don't know; hybrid mode cannot work in that case, because Azure is an L3 cloud: within the same subnet, the packets are always sent to the default gateway. Okay.
E: By the way, there were a few caveats with respect to supporting that; I would have to go into the details of it.
E: Okay, was there a question?
C: We can't hear you anymore, Cody, but I think he's saying that if there are no more questions on Azure, we can probably move on.
E: Okay, so this completes the Azure cloud support, which is via aks-engine and the AKS managed service. Next is EKS, on the AWS cloud, where we supported CNI chaining.
E: Yeah, 0.5. For the AWS cloud, EKS is the managed Kubernetes service, and we supported the network-policy-only mode, wherein we do CNI chaining with the AWS CNI and Antrea; this support was added in the 0.5.0 release itself.
E: The process of doing the CNI chaining is similar to what I discussed for AKS. Cody, do you want to talk about our engagement with the AWS folks? Then I'll go into what was supported earlier and what kind of enhancements we are doing for the 0.9.0 release.
A: Absolutely. In AWS, there are two different approaches that we've been pursuing.
A: We have also been investigating, in EKS, the capability of running as a primary CNI, and I'll discuss why we're pursuing that, in terms of the advantages it would have for a user, in just a second.
A: Additionally, we wanted to make sure that if you built your own cluster in Amazon on EC2, we could support encap mode and potentially some other modes as well. So let's start with the managed Kubernetes, or EKS, in Amazon.
E: Yeah, you are breaking up.
A: Something must be up with my internet; let me move to a different room, maybe it's my wireless or something. Is that any better? Yep?
A: Okay, let's try this. We're going to start with managed EKS, in policy-only mode.
A: We wanted to be able to support chaining with the native ENI CNI, and initial tests showed we had a race condition. That's not necessarily a problem within Antrea; it's actually something common to probably all the CNIs that need to chain in any situation. Basically, the issue is that we, like many other CNIs, install the actual CNI binaries that the kubelet calls using Kubernetes as a deployment mechanism: we have a DaemonSet that deploys our agent.
A: As part of that deployment, an init container deploys additional binaries that are used for the CNI interface.
A: If those binaries aren't present and registered correctly, then when a cluster comes up, pods can get scheduled and started without being registered with Antrea; Antrea won't be responsible for some of the provisioning that takes place. So once Antrea comes up, things like network policy wouldn't be enforced until those pods are restarted and plumbed with the appropriate network interfaces, at which point Antrea can enforce things like network policy. We've called this out to Amazon.
A: This is potentially something that both Amazon and the Antrea community need to take upstream, to discuss as a potential issue with the way CNI is used in Kubernetes. We need a way for the Kubernetes framework, when multiple CNIs are specified in a chaining mode, to ensure that all of the initialization has been done for all the CNIs before pods actually begin scheduling.
A: So it's almost an update to whether or not nodes are ready for scheduling. In the interim, we've worked out a plan with Amazon where we restart any pods that were scheduled before the CNI initialization completed, so that we can avoid this race condition, if you will, where a pod could have been scheduled before Antrea had a chance to initialize all of its components and be able to enforce policy.
A: What we want to ensure is that, if we're making a guarantee that policy is being enforced by Antrea, pods never have an active network connection to the cluster or to the world for any period of time without that protection. So this is a mitigation we've put in place for now, so that we can begin using the CNI with Amazon, with plans to fix this upstream later on.
A: Engineers on the call, please correct me if I misstated any of this, or add additional details about what we're doing in terms of policy-only mode with EKS.
E: Yeah, I can talk about it. Let me go over the initial support and the enhancements we are doing for EKS. With the 0.5.0 release, you could just apply the antrea-eks YAML file, and we expected the user to modify a few parameters in the Antrea config file, namely the default MTU and the service CIDR. This page is not updated yet.
E: In the 0.9 release we have removed this extra configuration before applying the antrea-eks YAML: the default MTU is now auto-discovered. Once you apply Antrea, it figures out the MTU of the primary NIC and adjusts the default MTU accordingly, so the user doesn't have to modify that argument. And the service CIDR parameter is not needed with AntreaProxy, which we are enabling in the 0.9 release.
F: Just as Jianjun said, in 0.9 we will switch noEncap mode, hybrid mode and policy-only mode to use AntreaProxy, so there is no more need for the service CIDR parameter for those modes. And because GKE runs in noEncap mode, and the policy-only mode covers AKS and EKS, all of them will use AntreaProxy.
F: Then we should fix that; I think the recent changes actually removed that support for noEncap.
E: Let's take it offline. Okay, so we removed these two limitations, and then we can operate in CNI chaining mode with the AWS CNI, with the Antrea CNI operating in just the network-policy-only mode. There is documentation that talks about the CNI chaining, which is the policy-only mode; if you are interested in looking at how we did the CNI chaining and what we call the L3 mode, this documentation is a good start.
E: Let's go back to EKS. There were the two things we removed, and the third thing is the race condition Cody was mentioning: if you have two CNIs, like the AWS CNI and the Antrea CNI, chained together, we have an existing issue in which, if there are some pods running before Antrea is deployed, you need to restart them manually; otherwise Antrea will not manage those pods, and network policies won't be applied.
E: That is one issue, and we documented that once you install Antrea, you delete the existing pods in all namespaces. But the problem is a little trickier: say you are scaling up your worker nodes. It may happen that a worker node comes up with the AWS CNI, a pod gets scheduled as soon as it comes up, and then, when Antrea is applied after the pod was scheduled, the pod may not be managed by Antrea.
E: This is the race condition Cody was talking about, and here we can't expect the user to monitor for scale-out events and then kill the pods running on that worker node, and so on. So, as a temporary fix (the proper fix has to come from upstream Kubernetes), for EKS we are adding a node-init script.
E: Yes, okay, perfect. What we're doing is adding a node-init script that waits for the Antrea CNI to be present in the CNI config file. Once the Antrea CNI is present, we fetch the existing pods running on the worker node from the Kubernetes agent on the node, and using that call we kill the containers running on that worker node.
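The wait step of that node-init script might look roughly like this (a sketch; the real script ships in the Antrea manifests, and the conflist path and plugin name here are assumptions):

```shell
#!/usr/bin/env bash
# Poll the CNI config until the Antrea plugin shows up in the conflist,
# i.e. until Antrea has finished inserting itself into the chain.
wait_for_antrea_cni() {
  local conflist="${1:-/etc/cni/net.d/10-aws.conflist}"
  local tries="${2:-60}"
  for _ in $(seq "$tries"); do
    if grep -q '"antrea"' "$conflist" 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```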
A: Is that generally applicable, such that we could take it upstream as part of the conformance tests, so we could determine whether other CNIs have the same problem? As part of our effort to drive this upstream for a fix, is there any way we can use what we know now to determine if there are problems in, say, other CNIs that run against those conformance tests?
E: Sure. At least we know from Cilium that if you don't restart the existing pods, they will not be managed by Cilium; they also recommend restarting the existing pods. But for Calico I have not seen any documentation where they talk about this problem; they don't mention that you would have to restart any pods.
A: Sure, what I'm suggesting is that the information we have here, coupled with some of the work we've already collaborated on for the conformance tests, would be a great addition to help support fixing this upstream.
E: Yeah, sure, we can use those. The race condition would be a little difficult for us to reproduce using a conformance test, but we can mention these problems: when we do CNI chaining, we have to restart the existing pods.
E: Let me continue for another two minutes; I just want to give an update.
C: Sorry, it's twice a day... yeah, not twice, right?
E: We have a Jenkins job in which we run the conformance tests on EKS, GKE and AKS, and as you can see they are all passing. For these tests we operate in network-policy-only mode. Maybe Antonin or Yang can talk about the few tests we skip: we run mostly the network policy tests, and for the basic conformance tests there were some things we skipped.
E: I won't go into the details, but this is the page you can refer to to check on the cloud CI and what is supported. We are running the EKS CI on the Amazon Linux 2 OS, for GKE we run it on Ubuntu, and for AKS we run it on Ubuntu 16.04.
E: One other thing: if you want to deploy a cluster in AKS, EKS or GKE, you can also use this standalone script. If you pass in your credentials, like the Azure app ID, tenant and password, it can help you deploy the cluster and set up Antrea too.
G: I think the only pitfall right now is that for GKE and EKS we still modify the MTU and the service CIDR, because I didn't know we could omit those in 0.9, so this probably needs a little bit of modification.
E: Yang, as part of EKS we expected the default MTU and the service CIDR, but, like I mentioned, with the 0.9 release we will not need that configuration.
A: I was going to give a quick update on what we're doing in the network policy v2 working group, but I can defer mine to the very end. So let's continue and try to finish out all the good work that Rahul is talking about here on cloud, and then I'll save the last couple of minutes for a quick update, and we can take any questions from there. Perfect, so, Rahul.
A: The other thing I was going to mention: we talked about a policy-only mode, but we could also be a primary CNI, both in EKS and in a build-your-own cluster on EC2. The reason you might want to do that is that you may want to use IP spaces that are orthogonal to what was provided in the VPC (or, in the case of AKS, in the Azure VNet).
A
What
was
provided
in
azure
net
talk
to
us
a
little
bit
about
how
we
would
go
about
you
know,
setting
setting
that
up
and
how
it
would
differ
with
with
policy
only
mode,
and
maybe
what
some
of
the
the
caveats
we're
running
into
there
as
genji.
F: Hello, can you hear me? Okay. For Antrea to be the primary CNI, we require node IPAM to be enabled in the kube-controller-manager, but that's not the case in EKS; I'm not sure about AKS and GKE.
F
Well,
it's
not
the
case
for
eks
and
we
have
no
way
to
enable
load
and
pan
by
our
own
script
and
according
to
our
discussion
with
amazon
folks,
they
don't
have
plans
to
enable
load
plan
for
eks
either
that
is
starting
this
year,
according
to
them
so
sam's
the
only
choice
for
us
to
do
ipam
by
ourselves.
E: So yeah, this limitation is only for EKS, which is Amazon-managed.
A
So
what
rahul
was
I
had
some
audio
problems.
Hearing
your
whole,
I
think
what
you
were
trying
to
say
is:
if
you
build
your
own
cluster
right,
you
have
control
over
enabling
this
capability
in
kubernetes
right
to
turn
on
node
ipam,
and
so
in
that
case
we
can
be
a
primary
cni
right
now.
I've
also.
A: If I'm running my own cluster on EC2 and I decide to put nodes in different AZs, which effectively puts them in different subnets, I could run in noEncap, that is, without encapsulation between the nodes in the same subnet, but encapsulate as traffic crosses from a node in one subnet to a node in another subnet.
A
So
therefore,
we
get,
I
think,
more
efficient.
We
don't
have
all
we
don't
have
the
overhead
of
encapsulation
for
for
every
for
every
traffic
path,
only
those
that
are
crossing
subnets,
yep
now
jinjin.
If,
if
we
turn
on
something
like
hybrid
mode,
it
does
disable
some
of
the
diagnostic
features
correct,
because
some
of
the
diagnostic
features
rely
on
underneath.
F
Sure,
I
think
you
mean
truthfully
beautiful,
trace
flow
in
the
contemplation
up
to
zero
downline.
We
require
to
leave
tunnels,
and
so,
if
you
do
low
in
cap
or
you
do
police
only
mode
or
if
you
do
hybrid
mode.
A: Thanks for clarifying that. One other question I had on that, Jianjun: do our command-line utilities, and the different things that might be using those tools, inform the user that they can't work because of that, or is that something the user just needs to know?
F
Let
me
see
what
I
I
think
we
at
least
we
we
don't
feedback
anything
into
the
idea.
I
think
there.
A: Great, that's good information to know: if you're trying to run Traceflow and wondering why it's not working, you might check what mode you're running in. I think we've covered all the different CNI modes we support and the ways of running in Amazon; the only one we haven't gotten to is GKE. So does someone want to talk about what we're doing with noEncap in GKE?
H: Before we move to GKE, I have a question on EKS. I guess some of you are already aware that Amazon is working on the next-generation VPC CNI. Apart from the other drawbacks they have, around pod density and so on, the thing relevant to this discussion is that I think they're also trying to introduce a network policy controller, along with the ability to apply security groups directly on the pods with some CRDs.
H: So what do you think the strategy is there, in terms of collaboration, and how does it affect our plans for the roadmap?
A: We're definitely looking at that, in terms of their CNI implementation. Right now, if we go in as a primary CNI, we do have an advantage in terms of pod density: we don't have to scale hardware to match the number of ENIs needed for additional pods. If a user wants to run a lot of small pods, for example, there's definitely a cost advantage to running something like Antrea as a primary CNI.
A: Security group affiliation, as something we can enforce with network policy, is definitely on our roadmap, so we will be addressing that. I do think there are still going to be some additional advantages with AntreaProxy, for example, and the way we handle VIPs, et cetera, which will give some performance advantages.
A: But it's certainly something that's healthy in the ecosystem. We fully expect that everybody, either directly from the cloud providers or from other CNIs, is going to keep pushing this envelope forward.
H: I see a lot of people waiting for that; at least, I'm following that issue, and I see a lot of people following it and waiting for these developments to happen directly within the Amazon VPC CNI, instead of relying on Calico or Antrea for policy-only mode. So I just wanted to share that with the team.
A: That's good insight. The other question is going to be how much of the extensions beyond upstream Kubernetes network policy they're going to support. There's a lot of work this team and this community are doing right now on extending the capabilities, with things like cluster-scoped network policies, that may still be very different from what's offered natively by Amazon, for example, in the policy provider engine they're working on.
H
Yeah
that
was
going
to
be
my
next
question.
So
how
much
effort
do
you
guys
think
that
is?
It
will
be
to
enable
the
cluster
network
policy
crds
and
like
the
native
ntr
network
policy,
crd
that
we
are
trying
to
implement,
or
we
have
already
done
that
as
an
alpha
feature
in
0.9.
H
So
how
much
effort
do
you
guys
think
that
it
will
take
for
us
to
enable
them
across
all
the
clouds,
not
just
amazon,.
F: Usually we enable our policy CRDs, at least for AKS, EKS and GKE, by simply changing the YAML file to enable the feature gate for the Antrea policies, and then you can have the Antrea-native policies. If your question is about aks-engine, it's a bit different, because there it's a supported thing.
F
If
we
want
to
enable
trucks
policy,
we
need
to
add
option
to
to
aks
engineer.
E: Okay, just to let you know, this is where you set the traffic encapsulation mode, and these are the supported modes, like encap and noEncap.
E
And
this
is
present
in
the
yaml,
so
this
is
how
you
can
in
which
mode
the
fjse-
and
I
would
get
going
back
from
gke.
E
So
gk
is
for
the
google
cloud
and
the
support
is
pretty
limited
here
in
a
way
that
we
support
it
only
on
ubuntu
nodes.
They
have
container
optimized
images,
also
like
cos
where
we
don't
support
it.
We
support
it
on
ubuntu
and
they
have
concept
of
vpc
native
enabled
and
disabled.
E: If you disable VPC-native, we can operate in both modes. VPC-native enabled versus disabled is a GKE-specific concept; you can read about it in the Google documentation.
E
You
know
that
we
can
operate
in
both
templates.
This
is
the
prerequisite
that
this
is
how
you
set
up
the
gke
node,
and
then
you
basically
create
a
gk
cluster
using
this
command.
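The cluster-creation command he is pointing at is along these lines (a sketch; the flags shown are assumptions to illustrate the Ubuntu-node requirement, and `GCLOUD` can be overridden, e.g. `GCLOUD="echo gcloud"`, to dry-run it):

```shell
#!/usr/bin/env bash
# GCLOUD can be overridden (e.g. GCLOUD="echo gcloud") for a dry run.
GCLOUD="${GCLOUD:-gcloud}"

create_gke_cluster() {
  # Ubuntu node image, since Antrea is not supported on COS nodes.
  $GCLOUD container clusters create antrea-gke \
    --image-type UBUNTU \
    --zone us-west1-a
}
```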
E: I think when we enable AntreaProxy, this documentation needs to be updated so that this parameter is no longer mandatory; I'll talk to you offline. So that's basically how you deploy a cluster on GKE. Then, similar to the node-init we have for AKS and EKS, we also have a node-init script here.
E
The
way
the
clusters
are
deployed
in
gke
is
the
cubelet
parameter
for
the
cni
actually
points
to
cubenet,
okay,
and
even
if
you
have
some
config
file
in
the
cni
directory,
it
won't
be
honored
because
the
cni
is
pointing
to
cubenet.
E
So
what
we
do
with
this
node
in
it
is
we
modify
the
cubelet
parameter
from
cubenet
to
cni
so
that,
if
we
add
our
cni
entries
cni
in
the
config
file,
the
cubelet
can
honor
it.
So
that's
what
we
do
that
this
node
in
it
does
it
basically
updates
the
qr
parameter
from
cubenet
to
cni,
so
that
entryway
can
be
inserted.
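That node-init step might be sketched like this (a sketch; the real script ships with Antrea, and the path of the kubelet flags file on GKE nodes is an assumption):

```shell
#!/usr/bin/env bash
# Rewrite the kubelet's --network-plugin flag from kubenet to cni, so the
# kubelet starts honoring the CNI conf directory (where Antrea installs
# its config). The node-init would then restart the kubelet.
switch_kubelet_to_cni() {
  local flags_file="$1"   # e.g. /etc/default/kubelet on the node
  sed -i 's/--network-plugin=kubenet/--network-plugin=cni/' "$flags_file"
}
```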
E: And once we enable AntreaProxy for GKE, we can remove this too. Lastly, the common problem we discussed for all the modes: once you have Antrea up and running, you need to kill or restart the existing pods so that they can be managed by Antrea.
E
One
note
is,
we
are
actually
not
doing
any
cni
chaining
for
gk,
okay,
like
for
aks,
we
change
it
with
azure
for
switching
it
with
aws
for
gk
is
there
is
no
chaining?
What
we
do
is
we
just
intrigue,
becomes
the
primary
cni
and
then,
and
we
operate
in
no
end
cap
mode,
which
means
there
is
no
encapsulation
required.
A: In this mode, how is IPAM working with GKE? Can you talk a little bit about that? I know we're at the top of the hour, so we may have to make this pretty quick.
E
Let's
say
if
you
are
operating
in
a
b
c,
disable
mode,
so
they
allocate
extra
cider
for
that
vpc.
What
I
mean
by
that
is,
it
is
all
gke
magic.
Maybe
that
let's
say
you,
you
deploy
a
vpc
in
10.0.0.
E: Right, and once you have VPC-native disabled, they allocate a secondary CIDR, say 20.0.0.0, and carve out subnets from it for each node. The way it works with Antrea: as part of the node IPAM controller, whatever subnet GKE has allocated for a particular worker node, that information is passed down to the worker node through the node object itself.
E: Yeah, I think Google tweaked it in such a way that if they allocate some subnet for a particular worker node, that information is propagated down to the Kubernetes construct itself. You don't have to call any cloud APIs to figure out which subnet a worker node is on; you can fetch this information from Kubernetes, from the node spec.
A
Excellent
well,
since
we're
at
the
top
of
the
hour
salvatore,
I
will
hold
and
and
and
bring
a
update
from
the
working
group
for
our
next
session
and
let
you
know
what
we're
working
on.
But
if
you're
interested
in
participating,
please
check
out
sig
network
github,
there's
actually
a
link
to
what
we're
doing
in
the
network
policy
v2
working
group.
If
you'd
like
to
participate
between
now
and
then.
H: Actually, I wanted more than an update, because I also wanted to share the user stories and the spreadsheet with the group before the next meeting.
H
From
the
community
from
our
community
as
well,
so
that
we
have
a
fair
idea
of
what
to
emphasize
in
the
next
meetings.
B: Yeah, makes sense. I guess you can share them on the Slack channel so we can have a look, and I guess there will also be more conformance tests coming as part of this activity. Is that correct?
A: To determine what makes sense for a v2 iteration.
B: I see, perfect. It seems then that we are done for today's meeting, and we already have an agenda for the next meeting, which should be a conversation on network policy v2 and a discussion about a potential Antrea operator. In light of what we discussed today for public cloud, that becomes a little bit interesting, because there is going to be some overlap; anyway, that's something we can discuss in two weeks' time. I think that is all for today.
B: Is there any last topic the community would like to bring up? Going five, four, three, two, one, zero. All right, I'm going to stop the recording right now, and I would like to thank everyone for attending this instance of the Antrea community meeting. Have a nice one, everyone, and goodbye.