From YouTube: Antrea Community Meeting 06/29/2020
Description
Antrea Community Meeting, June 29th 2020
A: Three, two, one, go. So good morning, good evening, good afternoon, or good whatever. This is the Antrea community meeting for Monday, June 29th if you are in the U.S., or Tuesday, June 30th if you are everywhere else in the world. The agenda for today is the following: we will start with a release update provided by Antonin. Then we have two topics for today. One is the NodePort Local proposal, where Sudipta should give us a presentation, and then there was a request from Jay to discuss a little bit of the architectural details behind how Calico and Antrea differ. From what I gathered from the Slack conversation, it was more a discussion on the pros and cons of the routed approach versus the overlay approach that Antrea is adopting, and of course I would also add the other differences in the implementation, for instance when it comes to network policies.
B: Like last time, I think the main items we can mention are: the service proxy implementation in OVS, which is going to be enabled by default on Windows, because it's required to enforce network policies for Services correctly, and it's going to be an alpha feature for Linux, so it will be disabled by default, but people can enable it by updating the manifest and try it for themselves. Another feature which is going to be in alpha is cluster network policy. This is the first time we introduce Antrea-specific CRDs to define policies which go beyond the standard Kubernetes network policy; in this case, cluster network policies are policies which apply to the entire cluster, to all the namespaces in the cluster. So kudos to Abhishek and especially Yang, who worked on that feature, which was presented, I think, at the last community meeting. And the last feature is Traceflow, on which a team has also been actively working.
B: So I encourage everyone to contribute reviews to those pull requests. I think at least two of them, Traceflow and the Antrea proxy, are pretty close to being able to be merged. So those are the new features in the upcoming release, and I think one announcement that's worth making here is that Jianjun has opened a patch to change the default tunnel type in Antrea, which is used for the overlay network: we're changing it from VXLAN to Geneve.
B: You're going to have some disruption of communications between Pods across different nodes. So that's something worth taking into account for users doing the upgrade; of course, we're going to include the information about this in the release notes for Antrea. For people running large clusters in production who do not want to experience this downtime but still want to pick up the new release, it's very easy to just update the YAML manifests that we release with Antrea and set the tunnel type explicitly back to VXLAN.
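As a sketch of what that override looks like (the tunnelType option exists in the antrea-agent configuration; the surrounding ConfigMap structure is abbreviated here):

```yaml
# Excerpt from antrea-agent.conf inside the ConfigMap of the released
# deployment manifest. Setting tunnelType explicitly overrides the new
# Geneve default and keeps the pre-upgrade VXLAN encapsulation, so
# cross-node Pod traffic is not disrupted during the upgrade.
tunnelType: vxlan
```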
B: All right, so Jianjun also has another patch maybe worth mentioning, to change the default name of the gateway interface created by Antrea. We figured it would make more sense for those interfaces we create for Antrea networking to have an "antrea" prefix in them, and so I think the new default name is going to be antrea-gw0. I think Jianjun can confirm, but I think he wrote the patch in such a way that if Antrea was already running on the node, and there is already a gateway interface that was created previously and that is stored in the OVS database, then we will log a warning, but we will not change the existing gateway to the new name. Is that correct, Jianjun?

D: That's correct.
B: It will only affect new clusters, which will see the new name, or new nodes that you join to an existing cluster. Yeah, so I think that's all I had for the release announcements. The most important change is the change in the default tunnel type, and that will be, of course, documented for users in the release notes.
E: Okay, so can I share my screen? ("Go ahead, please.") Okay, so we wrote up a document for this. Essentially, what we're trying to say here is that the present NodePort implementation, which is done through kube-proxy, has some limitations. I think we're all aware of the limitations, but for the sake of it, let me just go through them. NodePort presently is about exposing a set of ports, reserving them, and then, whenever a Service is created with type NodePort, kube-proxy basically takes one of the ports out of that pool of ports it has reserved as node ports and exposes it on each of the Kubernetes nodes. Now, if a certain node did not have that port free, there is no easy way by which you can diagnose that, and the same port is exposed for a single Service on all the Kubernetes nodes.
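For reference, this is what a standard NodePort Service manifest looks like (names and port numbers here are illustrative); kube-proxy opens the same nodePort on every node in the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app               # Pods backing the Service
  ports:
  - port: 80                  # ClusterIP port
    targetPort: 8080          # container port
    nodePort: 30080           # taken from the reserved node port range,
                              # opened on every node by kube-proxy
```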
For external load balancers which are trying to send traffic into the Kubernetes nodes, into the Kubernetes applications, they would typically send traffic to a NodePort, and then they are at the mercy of kube-proxy to actually take care of the load balancing, which happens after the traffic lands on the node. So if the external load balancers would like to do something like session persistence, it breaks, because once the traffic arrives on the node, it is in the hands of kube-proxy to deal with the session from there on.
It also has the drawback, of course, that a port range has to be reserved, and the reason is that, as you grow your Kubernetes cluster and you add more and more nodes, you would typically like to add a bigger range, because as the number of services grows, you need to equally reserve the range across all the nodes. So actually what I'm trying to say is that with NodePort, you would be requiring your port range to also grow as your Kubernetes deployment increases in size, whether in the number of nodes or in the number of services.
The proposed solution that we're trying to present here is called NodePort Local, but essentially what it means is that you're able to program each of these services: you're basically taking the Pods constituting a given Service and then exposing the individual Pods on the node on which they are residing. So you basically figure out which node a Pod is scheduled to, and then you ensure that that Pod can be accessed via a port forwarding rule on that node.
E
So,
essentially,
that's
what
the
premise
or
the
basic
idea
is
and
and
and
and
then
this
this
basically
is
proposing
that
this
can
be
done
in
an
tria
and
and
the
code
code
code
can
recycle
early
in
the
entry
and
node
agent.
There
is
at
this
point
in
time
the
design
doesn't
consider
anything
that
is
required
in
the
entry,
a
controller
and
and
just
wanted
to
go
through
some
of
the
options
for
that
there
there
can
be.
There
can
be
few
options
given
to
the
user.
E
One
of
the
options
could
be
that
the
in
risk
class
is
used.
So
if
you
let's
say
had
a
particular
ingress
class,
the
entry
agent
would
listen
to
the
ingress
objects
and
figure
out
a
the
constituent
services
which
are
being
pointed
to
by
that
impress,
object
and
then
and
then
take
the
corresponding
pods.
E
And
if
the
agent
finds
out
that
one
of
the
pods
resides
on
on
the
node
where
it's
running,
then
it
could
program
the
denied
rule
for
that
pod
and
correspondingly
exposed
to
that
on
the
node
for
external
connectivity
using
the
node
IP
they.
If,
if
that
is
not
an
option,
that
is
if
there
is
a
broader
option,
there
is
another
option
where
we
could
have
the
user
annotate
a
given
service
with
a
predefined
label.
E
You
could
say
that
a
given
service
with
this
annotation
would
be
exposed
as
no
food,
local
and,
in
which
case
the
the
entry
ID
would
be
able
to
read
that
service
label
and
figure
out.
If
a
constituent
part
of
that
service
is
residing
on
the
on
the
agent
or
not,
and
if
it
happens
to
find
that
particular
pod,
then
it
would
just
go
ahead
and
program
in
that
particular
part.
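The selection logic just described can be sketched roughly as follows. This is a simplified illustration of the idea, not Antrea's actual code; the data model here is hypothetical.

```python
def pods_to_expose(service_selector, pods, node_name):
    """Return the Pods that match the Service's label selector AND are
    scheduled on this agent's node; only those get a local forwarding rule.
    Each Pod is modeled as a dict with 'name', 'labels' and 'node' keys."""
    return [
        p for p in pods
        if p["node"] == node_name
        and all(p["labels"].get(k) == v for k, v in service_selector.items())
    ]
```

An agent running this on node-1 would expose only the matching Pods that actually live on node-1, leaving Pods on other nodes to those nodes' own agents.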
There is also a broader-level option, based on the Ingress object, which I don't know if it's really something that we would like to explore. But this is like a blanket approach: it basically means that whenever you have an Ingress object created, you just parse the Ingress object to figure out what Services are being used in it, and the Antrea agent takes action to figure out how to expose the Pods of those Services. Each of the agents would actually populate a CRD, whose schema looks like this.
E
Essentially,
the
CID
schema
takes
an
inspiration
from
the
infant's
life
object
and
basically
each
of
the
node
agents
will
control
the
destiny
of
the
the
CRD
so
that
they
don't
really
have
an
overlap
with
other
other
node
agents.
But,
for
example,
in
this
case,
if
you
have
a
node
ID
of
a
node
and
reagent,
which
is
running
on
this
particular
node,
and
if
it
figures
out
that
for
service,
which
is
my
service
here,
there
are
certain
pods
which
are
on
it,
which
runs
which,
which
are
essentially
running
on
that
particular
node.
E
Then
it's
going
to
populate
the
pod
name
and
the
curse
pan,
and
it
will
also
program
a
corresponding
port
on
the
host
on
which
this
pod
will
get
exposed
and
yeah.
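Put together, an instance of such a CRD might look something like the sketch below. Note that the API group, kind name, and field names here are hypothetical illustrations of the proposal as described, not a finalized schema:

```yaml
# Hypothetical sketch of the per-node object an agent would own and populate.
apiVersion: antrea.io/v1alpha1    # assumed group/version
kind: NodePortLocal               # assumed kind name
metadata:
  name: node-1                    # one object per node, written only by
                                  # that node's agent, to avoid overlap
endpoints:
- serviceName: my-service
  podName: my-service-pod-1
  podPort: 8080                   # port the container listens on
  nodePort: 40001                 # host port programmed by the agent
  nodeIP: 10.0.0.11               # address external clients would use
```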
So if the user wants to query, let's say the user has "my-service", the name of a Kubernetes Service, and they want to query where they can find the node and port combinations on which the Service is exposed, they could run a query like this, and that would output something like this, where each of the constituent EndpointSlice-like entries would display the node IP and the port information on which these Pods are exposed.
In order to do this, the document discusses more details of how we would like to do it.
E
But
if
folks
are
interested,
we
can
actually
go
through
how
the
IP
table
rules
are
configured,
how
we
want
to
envision
a
agent
reboot
if
an
agent
reboot
happens,
and
then,
let's
say
the
part,
so
let's
say
you
know
if,
due
to
some
reason,
the
host
is
evacuated
and
the
pods
are
evacuated
out
of
it.
How
do
we
tackle
those
kind
of
scenarios?
And if the bind fails, then basically it moves on to the next port in the range. Because the number of Pods on a given node is usually constant and not high (the highest I saw was probably around 250 Pods, and it could be even a thousand, if at all), it would certainly mean that the port range doesn't need to be as exhaustive as is expected in the case of NodePort. You could have port ranges which are limited in size as well, allocated to a given node. So that's what the proposal essentially is. I probably did not follow a certain flow, I was jumping around a little here and there, but I hope I was able to convey what we're trying to propose here.
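The bind-and-fall-through allocation described above can be sketched like this. It is a simplified illustration under stated assumptions (a fixed per-node range, TCP only), not Antrea's actual implementation:

```python
import socket

def allocate_local_port(start, end):
    """Try to bind each host port in [start, end] and return (port, sock)
    for the first successful bind, or None if the range is exhausted.
    Keeping the socket bound reserves the port until a forwarding rule
    (e.g. an iptables DNAT rule) takes over."""
    for port in range(start, end + 1):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("0.0.0.0", port))
            return port, sock
        except OSError:
            sock.close()  # port already in use: fall through to the next one
    return None
```

Since a node rarely hosts more than a few hundred Pods, a per-node range of a few thousand ports is ample, unlike a cluster-wide NodePort range that has to grow with the whole deployment.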
F: General question, this is Jay. You know, I'm not at all up on all the latest stuff going on in Antrea all the time, but I was just curious. Everything you said sort of makes sense; I'm kind of trying to understand what the overarching use case was.
E: I mean, there are quite a few ingress controllers which would like to do load balancing while sitting outside the Kubernetes cluster, and currently the only way for them to actually control the service traffic is via NodePort, and that has some disadvantages at this point in time: it does not really guarantee persistence and consistent hashing down to the applications, because the NodePort is an abstraction which is controlled by kube-proxy. So the moment the traffic is sent to the node, kube-proxy basically takes over, redirecting it to the appropriate application Pod, and that application Pod might not be residing on the node on which the traffic landed via the NodePort. So there is extra traffic redirection that happens after that, and for ingress controllers like that, this is going to be helpful in that they could deterministically figure out where the Pod resides and send traffic to it. So that's one of the primary drivers for this proposal.
E: Endpoints objects have these scalability limits, and that's why you have the EndpointSlice object replacing them, or rather being recommended. We also wanted to not have the controller annotate the Pods, so that we don't touch Kubernetes objects; I don't know if that would require extra permissions which Antrea doesn't have today, and we didn't want to take that extra privilege. We just wanted to construct a CRD, an EndpointSlice-like CRD, where each Antrea agent should be able to control it and populate the data which is required, and then we would purely work on labels. So a kubectl query on the Service label should actually tell you which the slices are and where they are all exposed, so that we remove the need of annotating the Pods.
E: So let's say that you find merit in this, in terms of what exactly the differentiators are between externalTrafficPolicy: Local and this. What would you say about how we proceed? Do we go ahead and file a proposal in the Antrea repository, and then go from there?
D: [partly inaudible] If we want to have, let's say, some external ingress controller sending traffic to the Pods behind an Ingress... I think for most people this is the first time hearing about these ideas. Maybe we give people some time to look at your doc, and then we can continue with this discussion.
E: So, Jay, are you suggesting that we do an analysis? Let's say, if I may put it this way, if some external ingress controller is doing NodePort-based load balancing and then switches over to this, which is more like the localized NodePort Local approach, what are the performance benefits, and we document that. Is that what you're asking for?

F: Yeah.
F: And so on, so yeah. I'm sure it's crossed your mind, and I know you all do some performance testing already, but this might be a really nice use case to publish some of those numbers; if that could be synergized with other performance work that's already being done, maybe that might be a way to do it without creating extra work.
E: Take session affinity for Services, let's say to a client IP. The problem with that is, if an external load balancer is trying to use it, and the external load balancer wants to SNAT the traffic, what would happen is that a certain virtual IP ends up being shared across multiple clients. So even if you're an external client sending a request to the virtual IP, you wouldn't have a direct way to get persistence with the application Pod, because when it reaches kube-proxy, kube-proxy thinks that the virtual IP is essentially the one which has the session affinity, but the virtual IP is actually being accessed by multiple clients simultaneously. So it won't give you consistency.
E
If
you
have
one
client
and
a
bit
or
a
virtual
IP,
and
then
there
is
a
session
affinity
back
in
to
the
pods,
then
you
would
you
would
guarantee
it,
but
because
there
isn't,
because
there
you
would,
you
would
typically
not
have
that
you
have
multiple
clients
accessing.
So
if
the
load
balancer
could
control
where
it
wants
to
route
the
traffic
and
and
and
maintain
sticky
sessions,
that
would
be
more
useful
in
terms
of
making.
B: I understand, yeah; it's not in control of spreading the requests across all the Pods. I guess I had two questions, or rather requests. The first one is: when you open the proposal on GitHub and you point to this document, do you think you can add a small section (I think that's on our issue template for proposals) about how you plan to test this within Antrea as part of the end-to-end test suite?
E: I mean, from an initial standpoint, I don't know if there is a requirement to have an ingress controller to really test this functionality, in the sense that you probably don't need a full-fledged ingress controller. You probably need the Ingress objects, and then, if we are able to parse the Ingress objects and do the right thing for the Service which is being pointed to in a particular Ingress...
E: I need to give it more thought as well, so it would be good to put it in writing, but yeah, it's possible that we can keep it simple and just monitor for the CRDs created by the agent and validate that they exist.

B: Okay. Basically, I think the test could just be: create a Service object with the right label, or whatever the label is, then verify that the Antrea agent receives it, and create a few Pods...
A: And done. So, basically, we have less than 20 minutes left on the call, and I was thinking that perhaps we can move on to the next topic. It was brought up as an interest in a conversation: let's see the merits of a BGP-based approach versus the approach that Antrea is based on. I don't have any prepared slide deck or whatever, and I don't know if anyone else has a slide deck. I believe the topic was brought up by Jay.
F: Sure. So I just was curious, because I've never really done anything with OVS, and you know, I wanted to start playing around with it as we're starting to see more people using Antrea. When I run Calico, I'm used to just troubleshooting:
F
If
something's
wrong,
like
you
know,
I
I
run
IP,
trou
and
then
I
look
at
the
V
devices
and
I
look
at
the
Cali
v
devices
and
stuff,
and
there's
a
pretty
linear
workflow
for
just
trying
to
figure
out
where
things
are
and
what
they're
doing
and
then
you
can.
You
know
you
see
the
BGP
stuff
being
installed
and
you
can
sort
of
poke
around
at
that
and
I
was
just
thinking.
Maybe
what
I
ultimately
wanted
is
obviously
I.
F
That's
kind
of
what
I
was
looking
for
right,
some
kind
of
nerdy
explanations
of
what's
going
on,
I
I
felt
like
there's
so
much
of
yes
stuff
but
OVS.
Isn't
the
I
couldn't
find
a
lot
of
like
beginner
here's
some
stuff?
You
can
do
with
OVS
to
get
started
and
like
stuff
on
the
internet
when
I
was
googling
for
stuff,
and
so
that's
pretty
much
where
I
that's
pretty
much
where
I
was
I,
I,
don't
think
I
don't
have
any
specific
questions
like
if
I
did
I
would
just
ask
them
but
yeah.
A
One
idea
could
be
to
probably
just
focus
on
maybe
on
open
with
vision
and
three
also
because
I
don't
believe
at
least
for
myself.
I,
don't
have
enough
knowledge
about
calico,
and
you
know
routed.
How
do
you
perform
routing
with
vgp
to
be
able
to
make
a
comparison?
I
don't
know
antonin
or
genuine
if
you
are
probably
more
up-to-date
on
the
subject,
I.
B: Well, you put us on the spot; it's hard for me to do an in-depth comparison of the merits of BGP. But Jay, if you want to get up to date, I think early on, like the second or third community meeting, we had an OVS presentation, a presentation of the OVS pipeline for Antrea, and there is also a document under docs in the Antrea repository which shows the different match-action tables of Antrea, and pretty much the life of a packet for different types of packets as they go through the pipeline.
I: You know, maybe we could either blog about this or just add some additional information that could help users that are essentially transitioning from another CNI to give Antrea a try. I think it's definitely a valid question, and I appreciate you asking it, Jay. There are some very different approaches to how we handle things like moving traffic from one Pod to another Pod compared to the way that Calico does it, so I think there are several things here.
Another thing: I think your question is somewhat tactical, as in how does the CNI actually perform these functions. And I think part of your question was maybe also, as Salvatore was saying, about the merits, right: like, why do we choose the different overlays that we choose, or maybe what is the use case for using an overlay versus not using an overlay? Those are some really good things too that I think we could provide some additional documentation on.
F: Yeah, it could be an article style, like, if I could link to anything. I would say, there's a "Calico the hard way, configuring BGP peering" kind of article that you can just google, and it kind of walks you through the stuff that you could do to look at
what's going on with the BGP routes and the peers and all that. Something like that, where you could get in there and hack around with it; I think you could create some good buzz that way and get some people involved. You know, I've got some pretty good insight into how Calico is working, and obviously the maintainers and committers to this project have a ton of insight into how OVS works. So maybe we'll take this offline, and for those that are interested, I can throw a doc onto the channel, and we can toss around some ideas on maybe creating some documentation or a blog entry, or something like that, with some of these details. I think it'd be great to educate the part of the community that may not have as much deep insight into networking, bring them up to speed on some of the intricacies, and dive into the weeds of the networking a little bit, where they may not typically do that. So I like that. Looks like Steven's got something for us too.
C: I just wanted to bring up a couple of things. First of all, I'll second Jay, because I'm curious about how this works, and I'm even willing to help. I'm a newbie, so I'm not authoritative enough to write it, but I'm willing to help, and maybe it helps to have a newbie there to make sure that the end result is understandable.
So if we do that blog or whatever, I'll volunteer to help do some of the work. Also, over in the Slack channel, I did discover an old deck (by today's standards I'd call it ancient, because it was written in 2017), but it was talking about the use of Open vSwitch with Kubernetes. And Jay, if you could take a look at that. I don't expect you to do it in this meeting, because it's a long deck, but I suspect that, due to its age (it obviously predates Antrea), it might not be accurate.
B: It's pretty accurate, actually; I just took a quick look. They don't talk about network policies, which is a big part of Antrea, obviously, but basically it's the same idea of a Pod network, and they even talk about implementing Service load balancing inside of OVS. So it's pretty accurate, I believe.
A: I mean, I had a look at the deck during the meeting. I believe that it must be some custom-written CNI solution; I don't know if it's for a demo or for some particular use case. But, generally speaking, it is true that there are many OVS-based CNI implementations, the most known of which is obviously OVN, which is one of the foundations for OpenShift networking.
I don't think there are many other well-known OVS-based implementations of CNI plugins. You can probably find an OVS CNI plugin upstream which is probably not used anymore, barely used, or only used for demos and other kinds of PoC work. There are probably some custom implementations that leverage OVS; I think, but I'm not entirely sure, that DigitalOcean, for instance, provides a CNI solution based on OVS for their Kubernetes customers.
A
Anyway,
say:
let's
say
that
in
terms
of
documentation,
I
believe
that
everything
that
it's
needed
for
a
full
walkthrough
to
understand,
at
least
all
the
concepts
and
the
nitty-gritty
details
behind
in
the
entry.
The
way
and
three
are
leverage
is
obvious-
is
available
in
the
source
code
repository
starting
with
the
architecture,
dot,
MD
document
I.
Believe
that's
the
starting
point
and
from
there
one
can
work
into
all
the
details
and
in
particular
she
was
mentioning
into
the
ODS
pipeline.
Details
which
is
probably
I
will
say.
The
most
important
documents
to
understand.
I
want
rare
works.
F: That would be, for me, the coolest thing to have: something that was somewhat parallel to some other existing artifact that was already out there. So yeah, let's do that. Stephen, I appreciate it, and Cody, and everyone else, and Salvatore. So maybe
we can run with this, and we can all just kind of huddle offline about it. And should I file an issue in upstream Antrea about this too, so we have a rallying point? ("Yeah, that sounds like a good idea.") Certainly; okay, I'll do that, if that makes sense to you all, and then that way we have a place. Yeah.
A: Thank you. I don't think... do we have anything else to bring up for today's meeting?
I: I was going to mention one thing for the next meeting, Salvatore: there's been a working group that's been looking at Network Policy v2, and I think it would be advantageous for our community to at least look at a summary of some of that work that's being done and give some feedback on it.
One other point before we terminate the call: we will be taking some steps to hopefully prevent what happened tonight in terms of Zoom bombing. I apologize to all the attendees; we're going to have to beef up the security a little bit, in terms of a password and probably some type of meeting lock or restricted entry, and registering on the Slack channel.
C: What we were advised to do (I'm a chair on one of the Kubernetes SIGs) is that when you post the recording, unfortunately it's some work, but edit out the Zoom bombing. Because it's just like graffiti: the person wants to leave their mark and get it uploaded to YouTube to brag about it later, and you don't want to give them that opportunity.
I: So, okay. Well, I appreciate everyone joining tonight, and great presentations from all. One other thing that had been mentioned to me, Salvatore (she wasn't able to join tonight): there are some additional members coming in wanting to commit, I believe from Intel, and she had wanted me to raise the topic of potentially again...