From YouTube: Kubernetes SIG Network meeting for 20230413
A
This meeting is being recorded. Hello, and thank you, everyone, for joining the April 13th edition of the SIG Network sync. Just a reminder that this meeting is under the Kubernetes code of conduct, which boils down to: please be nice to one another.
A
We have a fairly full agenda for today, so we will be keeping an eye on time and keeping each section down to a reasonable amount of time, so we can hopefully get through everything. And with that, I think we'll just go ahead and get started, since we've got a lot to do.
G
Yeah, yeah, okay. I think, you know, I've looked at this just briefly this morning. I'm happy to help triage this, but it is troubling that they're unable to reproduce the issue.
A
All right, thank you, Rob. And then, yeah, I don't know if this is what you were just asking a second ago, but they did link that other thing, where they're having more conversation about this, in their other repo. Right? Cool. All right, then the last one that doesn't have anybody triaging it: "kube-proxy in IPVS mode, TCP connection lost when node recovers to Ready."
A
All right, let's not waste any time, then; let's get right into the KEP review. Jordan, you have the first topic here, and you seem eager to share.
H
I am... I'm so excited. Can I share my screen, is that possible, Mike?
H
All right, can everybody see that? Awesome. All right, so: I am spinning up conversations about expanding, very slightly right now, the skew that we support between control planes and nodes.
H
We document that we support nodes being two versions older than the control plane. When this policy was originally created, it covered the oldest supported version to the newest supported version, so the oldest node would work with the newest control plane, because we supported three minor versions at a time. When we switched to an annual support cadence, an annual support policy, we realized as part of that that users actually need time to qualify and upgrade. People don't upgrade one second after we release a new minor version; they actually try it out, and it takes, you know, a month or two for them to roll it out. So we actually now support four minor versions for a period of time following a new minor release, and we just released 1.27.
H
So now we support 1.27, 1.26, 1.25, and, for a couple of months, 1.24. And so what this means is that the n-2 skew that we currently have no longer covers oldest node to newest control plane during that window. And one of the big motivations for the annual support cadence was so that people could do an annual upgrade if they didn't really care about new features.
H
I have a diagram that shows what that would look like. So imagine a future where they're on 1.40, and they run 1.40 for a year; they're super happy. And then we release 1.41, 1.42, 1.43, and now they want to get everything onto the newest version. Today, because we only support node-to-control-plane skew of two versions, they actually have to hop their nodes twice: they upgrade the control planes a couple of versions, but then, to stay in skew, they upgrade their nodes; then they upgrade their control plane that last version, and then they have to do all their nodes again to get onto the newest version. And so the goal of this proposal is to get us to a point where someone who is happy with what we're currently giving them feature-wise, and just wants to stay on their versions to get security fixes and stay supported, can do that with a single annual pass.
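To make the skew arithmetic concrete, here is a minimal sketch in Go of the node/control-plane check being discussed. The n-3 limit below is an assumption for illustration of the proposal (the documented limit today is two minor versions); it is not merged policy.

```go
// Minimal sketch of the proposed node/control-plane version skew check.
// The constant is an illustrative assumption: the proposal under discussion
// would widen the allowed skew from 2 to 3 minor versions.
package main

import "fmt"

const maxNodeSkew = 3 // proposed n-3 policy; today the documented limit is 2

// nodeSupported reports whether a node at minor version nodeMinor is within
// the supported skew of a control plane at minor version cpMinor.
func nodeSupported(cpMinor, nodeMinor int) bool {
	return nodeMinor <= cpMinor && cpMinor-nodeMinor <= maxNodeSkew
}

func main() {
	// With n-3, nodes on 1.40 stay supported all the way to a 1.43 control
	// plane, so the double node hop in the example above goes away.
	for cp := 40; cp <= 43; cp++ {
		fmt.Printf("control plane 1.%d, node 1.40 supported: %v\n", cp, nodeSupported(cp, 40))
	}
}
```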
H
That's the motivation. I'm here at SIG Network because SIG Network owns kube-proxy, which is a node component, and so the questions I have for SIG Network are: what types of things would this change impact? Originally I was looking at what we document for supported skew, and what we actually document is that kube-proxy must be the same minor version as the kubelet on the node. So I thought: oh, well, that's awesome, that's simplifying; then the only thing that really matters is the APIs that kube-proxy is using, and we kind of got those to v1 when EndpointSlice went to v1, and I think we're pretty good now. But then I started digging and noticed that we mention things in KEPs about skew between kube-proxy and kubelet, talking about things like iptables ownership. And so, as is often the case, this proposal raises other questions, like: is this accurate, or are we actually expecting kube-proxy to be versioned with the control plane and skewed relative to the kubelet?
H
So I have more questions now than when I started, but that's the context. And the questions are: what would it cost SIG Network to be able to handle one newer version of control plane, and what would it cost SIG Network to handle one more version of skew between kube-proxy and kubelet?
F
I think the reason why we used to claim that the kubelet and kube-proxy had to be the same version was because there were just these weird interactions between them, with creating the same iptables rules and expecting the other one to create exactly the same rule with exactly the same value. And after KEP 3178, that no longer happens, so I think kube-proxy no longer cares about the kubelet at all.
F
And then, you know, for control plane stuff: yeah, if we add new APIs, we may have to deal with skew stuff again, but that's just the same as every other component. Okay.
H
I mean, yeah, I think accommodating both methods of deployment makes sense: immutable nodes, where you put the stuff on the node, you have the kubelet and kube-proxy, and they just sit there at that version until the node goes away; that makes sense. DaemonSets, I think, can also make sense. We probably want to update the doc that currently says you have to freeze it.
H
Okay. In terms of APIs, there are a bunch of hands up; I just want to mention one thing about the APIs. When we got all the required APIs to GA, I think in the 1.19-ish time frame, we actually paid a lot of attention to avoiding taking new hard dependencies on not-yet-stable APIs. So there are jobs that run, you know, a GA-only cluster and make sure that all the components can function properly without beta APIs turned on. And so I think...
J
I just wanted to say that we also have to have a rule that there are no more dependencies between kube-proxy and kubelet, because the situation now is that we had a dependency, and we broke the dependency. So, just to ensure that everybody who looks at KEPs knows: anything that comes between kube-proxy and kubelet is a no-go. And right now we don't have that documented anywhere, so we need to have that. I completely agree with what has been said so far; I just wanted to add that point.
J
The point that also came to mind is beta APIs and alpha APIs. Will we ever run into a situation where kube-proxy depends on some alpha API that does not exist in the control plane? And by "does not exist" I don't mean it's just disabled; I mean it completely does not exist. Think of an imaginary situation where, let's say, endpoint slices in the early days were shipped in kube-proxy, and kube-proxy got updated but the control plane did not.
E
We do. Antonio just said in the chat, talking about beta APIs: you know, feature gates govern everything right now. So if you were to turn on a gate in kube-proxy and not turn that gate on in the API server...
E
Then again, it's sort of a misconfiguration. I think we don't have any examples, I think, where we taste-test to see if an API is supported and, if not, go around it; although we could, we just haven't really had any need for it. So...
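As a concrete illustration of that kind of taste test, here is a minimal Go sketch (not existing kube-proxy code) that asks the API server, via the standard client-go discovery client, whether a group/version is served before relying on it:

```go
// Sketch: probe the API server for a group/version before using it.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// If discovery.k8s.io/v1 is not served (say, the server is too old or a
	// gate is off), a component could fall back instead of failing at watch time.
	if _, err := dc.ServerResourcesForGroupVersion("discovery.k8s.io/v1"); err != nil {
		fmt.Println("EndpointSlice v1 not served; would fall back:", err)
		return
	}
	fmt.Println("EndpointSlice v1 is served")
}
```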
A
I actually have a thing, but it was mostly just asking a little bit about tests, and we're running low on time; and I did find the section in the KEP where we're talking about e2e tests, so I'll follow up there. Go ahead, Rob.
G
Yeah, my main question was on the same thing. So, yeah, I'm glad that there's a test plan for n-minus-three nodes. You know, I just think I'll have to follow up on the KEP, but I've just been trying to think of little things we could get tripped up with, with API changes.
G
You know, I think the most recent thing would be the terminating condition on endpoint slices. I think we did everything correctly there, such that it wouldn't have been an issue, but, yeah, I'll follow up on the KEP. I just want to make sure we're not missing anything, and I'm sure, Jordan, you would know; but I just want to make sure there's no small little detail in how we've done previous changes that would get us into trouble here. Yeah.
H
I did sweep back a couple of years. I mean, we do a lot of feature rollouts, and usually we don't wait for the oldest node to support a feature before we enable a new feature; usually it's an opt-in kind of thing: if you want the new feature, you've got to upgrade your nodes, and old nodes just won't be aware of it and you won't get the behavior. That's how we typically design things, and they fail safe. In the cases I could find where they didn't fail safe, we did wait until the oldest node we supported understood the feature, but there were actually very few of those that I found. I also linked to a KEP that Daniel Smith is working on that proposes improving how we roll out features, instead of just wait a release, then enable, then wait another release, then wait another release: a more active, intelligent (hopefully) way of managing feature rollout. I see that as complementary to this. This is really saying: issues we already have with n-minus-two nodes not knowing about features, we would also have with n-minus-three nodes. But if you know of cases where we were waiting for the oldest node to support a thing, and this would actually delay feature enablement, that would be helpful to know. The ones I found were mostly around policy, security, and authorization-type things, where it was dangerous to relax too soon.
E
You know, one of the things we have talked about doing, but never really instituted because it seems like a ton of work, is setting up our own e2e regimen, like node e2e, that just exercises kube-proxy against different API server versions: actually run the tests against n minus one, n minus two, n minus three, or plus two, plus three. Maybe we should come back to revisiting that idea.
G
Yeah, the other, last question; I know we're past time here. You know, I know there has been some pressure to avoid perma-beta APIs; we need to get these APIs moving. Is there a chance that could conflict with this, if there's an API that is somehow dependent on, you know, all nodes understanding...
G
One tiny nit there: for endpoint slices, we had to do some extra delay, because, when you upgrade, you have to wait for the endpoint slice controller to write using the new API version. I think it all mostly lines up, but there had to be... you couldn't just... I'll follow up on the KEP.
A
Okay, so, good conversation. We hit about 15 minutes with this, so we should probably move it more into the KEP, but we have several action items that look like things we can follow up on in the KEP, which is linked at the top of the doc; that is issue number 3935 in enhancements.
A
All right: Akhil, KEP for review.
D
Hey, yeah, I just wanted to draw some attention to this KEP. I know we discussed this in the previous SIG Network meeting, but essentially it's to move the endpoint slice reconciler, or parts of the controller, into a staging library. So, yeah, I'd just appreciate some more reviews.
D
I think Rob already took a look at it and approved it, but, yeah, that's it. Cool.
E
Okay. Next time I scan through KEPs, which should probably be next week, I'll give it a quick pass; but if Rob's already approved it, in my opinion that's approved. Cool, thanks.
B
Yep, okay, cool. Yeah, so I'm gonna time-box myself; I'm gonna go really fast, because we have a lot of stuff to talk about, and so I'm gonna start my timer now: no longer than 15 minutes. So, yeah, I just wanted to come here today and present a project I've been working on for a little bit called bpfd. Basically, I'm gonna give a quick overview of what it is, how it applies to Kubernetes, and kind of how it applies to this group.
B
So if you could hold questions till the end, that'd probably be best. Awesome. So, BPF: many folks here know what BPF as a technology is, and I'm not going to dive deeply into it; there's a bunch of resources out there, and a lot of folks here have expressed interest, or expressed wanting to use it in their applications. Basically, at a high level, it's just a powerful framework built into the kernel that allows you to run sandboxed programs in the kernel at kernel-native speed, albeit with a lot of restrictions. And it has a wide array of uses: those include networking, monitoring, tracing, and security, among others. This presentation is actually one I'm planning on giving to SIG Node and SIG Security at some point too, and we're seeing the demand for BPF increasing steadily. Obviously, everyone in this group knows about Cilium and Calico, two CNIs using BPF as a technology; they have been for a long time, and I'd say they're really active in their downstream development of BPF on Kubernetes.
B
So what are some of the challenges for BPF on Kubernetes? These are kind of long-standing. Essentially, BPF requires extreme privileges: for a pod to load BPF programs and interact with BPF maps, it has to be privileged, and specifically it needs CAP_BPF. As an even more acute example, the NetObserv operator requires all of these permissions, CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, and CAP_SYS_RESOURCE, just to get its stack up and running.
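For reference, here is a sketch of the kind of securityContext a BPF-loading pod ends up carrying today, built from the capabilities just listed; this is illustrative, not a manifest from NetObserv or any other project:

```go
// Sketch: the capability set a BPF-loading container typically has to request.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	sc := &corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			// All of these must be granted just to load and attach programs
			// and read perf/ringbuf data; bpfd's pitch is to concentrate them
			// in one daemon instead of granting them to every application pod.
			Add: []corev1.Capability{"BPF", "PERFMON", "NET_ADMIN", "SYS_RESOURCE"},
		},
	}
	fmt.Printf("container would request capabilities: %v\n", sc.Capabilities.Add)
}
```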
B
There is also no program cooperation: if you run Cilium on your cluster and it deploys an XDP program, and then you have a customer who wants to deploy an app with an XDP program, well, you're out of luck; something's going to break, either your customer's app or Cilium, and the behavior is very indeterminate. That interference is really undefined in the kernel at this point in time. Debugging and preventing problems when you deploy BPF on Kubernetes is really hard.
B
There are no great tools to help you do it in Kubernetes, and you're kind of picking through things on a node-by-node basis. And then, also, today each BPF-enabled Kubernetes application is duplicating a lot of functionality: they're single-handedly compiling their BPF program into their application, loading and managing different maps, and doing all of that on a per-application basis. When I say application, I also mean infrastructure components, again like Cilium. So these are some of the challenges for BPF, especially on Kubernetes.
B
So that brings us to: what is bpfd? It's basically a project we've been working on in Red Hat's Emerging Technology group. It's a system daemon for managing BPF programs and their lifecycle; specifically, we manage loading and unloading the BPF programs across the Kubernetes cluster, and it allows us to separate our privileged-capability concerns, i.e., our daemon is privileged, but you, as a user of bpfd, do not have to be. We also leverage the libxdp protocol to allow multiple XDP and TC programs to cooperate on a single interface.
B
So this is really important for the network-specific BPF functionality that you all talk about here a lot. And, lastly, we're starting to focus a lot on policy and security. So, as a cluster admin: today an application could get a privileged pod and break the whole cluster. I mean, BPF is a hot knife; you can break everything very easily if you don't know what you're doing. And so bpfd is going to help cluster admins distribute policy around who can and who cannot load BPF programs.
B
bpfd itself is written in Rust; we benefit a lot from the memory-safety guarantees of that language. And it's built on top of a Rust BPF library called Aya, which is, again, just using raw system calls in order to interact with the BPF subsystem in the kernel. It includes a Kubernetes operator, and all of that is written in Go and built with the Operator SDK.
B
So most folks here are going to be really familiar with that. And then users of bpfd can still use whatever they've been using, cilium/ebpf, Aya, etc., to get their apps off the ground and to deploy them on Kubernetes; they just use it in a little bit different flow, and we provide some client-side libraries to make that a lot easier. So, I know I'm going really fast.
B
This is kind of just a pictorial (I like pictures) overview of what BPF deployment on Kubernetes looks like today. As I explained before, you have each application or infrastructure component maintaining their whole stack: they're calling BPF libraries in order to manage their programs, manage their pins, manage their maps, and they're doing all that on their own, on an individual, per-stack basis. And all of these agents require CAP_BPF and other really dangerous privileges, right? It's not really segmented. I think that's all I wanted on this slide.
B
It's bpfd, because that's the implementation we've made; but via the APIs that we've designed, the BpfProgramConfig CRD and the BpfProgram CRD, you could plug and play implementations at your leisure one day. But in this deployment model, your applications don't need CAP_BPF; they can run unprivileged and deploy their BPF programs with the blessing of the cluster admin. And it also opens up the door for users, not just infrastructure components, to deploy applications safely without breaking the entire control plane of the cluster.
B
And then, as you can see on the right, you can still use whatever map-management library you were already using before; we're just sitting in the load and management path, we're not really sitting in how applications interact with BPF maps. Okay, time-checking myself here: I'm at 7 minutes 30 seconds, nice. Okay, so: running bpfd on Kubernetes. These are some design details; I'm going to go really quickly over them. As I mentioned before, we have a bpfd operator.
B
If you want to test it out, go check out our repo; there are links in this slide deck. You can spin up a local kind cluster by cd-ing into the bpfd-operator directory and running make run on kind, and it should all come up if everything's working. We've added some new APIs: like I mentioned before, the BpfProgramConfig CRD is used to express BPF intent across the entire cluster, so it's what you as a user will create, and the BpfProgram CRD is actually kind of an internal API that we use to store per-node state. And then, lastly, we configure our bpfd deployment with just a same-old ConfigMap; one day that might turn into another API object.
One other thing I really wanted to note that makes bpfd special is that we've written a bytecode image specification for packaging BPF programs in OCI container images.
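As a rough picture of what expressing that intent could look like, here is a hypothetical Go sketch. The type and field names are guesses assembled from what is described in this discussion (a cluster-wide program type, an attach point, an ordering priority, and a bytecode OCI image reference); they are not bpfd's actual CRD schema, so check the bpfd-operator repo for the real API.

```go
// Hypothetical sketch of the shape of a BpfProgramConfig spec; field names
// are illustrative guesses, not bpfd's real schema.
package main

import "fmt"

type BpfProgramConfigSpec struct {
	ProgramType   string            // e.g. "xdp" or "tc"
	Interface     string            // attach point on each node
	Priority      int               // ordering among cooperating programs
	BytecodeImage string            // OCI image holding the BPF bytecode
	NodeSelector  map[string]string // which nodes should run the program
}

func main() {
	counter := BpfProgramConfigSpec{
		ProgramType:   "xdp",
		Interface:     "eth0",                               // hypothetical
		Priority:      50,                                   // hypothetical
		BytecodeImage: "quay.io/example/xdp-counter:latest", // hypothetical image
	}
	fmt.Printf("cluster-wide BPF intent: %+v\n", counter)
}
```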
B
Now, this is really important for Kubernetes, because, like I mentioned before, applications traditionally have their bytecode compiled directly into their application, so you can't have fine-grained version control of your user-space applications against your bytecode; they're all kind of packaged together.
B
What we've done is allow you to reference a BPF program with just a normal container image, which everyone here is really used to, and so bpfd essentially acts as a container runtime explicitly for these types of container images, to make sure we can extract the programs and run them correctly on each node. It also opens the door for us to sign these in the future and use that existing image infrastructure. Cool. So let me just share; this is going to be a really quick demo.
B
Here is the Kubernetes cluster with bpfd up and running. What we're going to do is deploy two BPF programs with our BpfProgramConfig objects. The first one, over here on the left, is just a counter: it'll count all the packets going by on an interface. And the one on the right is actually just a pass program, but we've specified what happens after this pass program executes, and that is that it drops packets. So what we're going to do first is deploy these two programs, and the counter program... oops, sorry.
B
Can you all see that? There are more details on the slides here about the application you just saw and how to run it. And then, for everyone here: what does the future look like for BPF, Kubernetes, and bpfd? Some of the broader questions we had for the SIG: we really want to pioneer what BPF on Kubernetes looks like as a technology in an upstream sense, but we don't know where to do that. Obviously, the technology is starting to become prolific in the ecosystem.
B
We want to ask you all: where do we think this should live? Should it be a new SIG? Should it be a subgroup under one of these SIGs? And, as I said, we're going to present to some other SIGs, because this kind of reaches across different boundaries. How does SIG Network want to play a role in it? And then, could we one day see Kubernetes APIs being endorsed by one or multiple SIGs, so that we could plug and play different implementations?
B
So, instead of bpfd, someone could plug in their own thing. And then the last thing is a short roadmap: we're on the way to the 0.2.0 release, we are moving bpfd out of Red Hat ET into its own independent org, and then we're looking to stabilize and refactor some of our core APIs. So, sorry, that was as quick as I possibly could be; there's a lot of information. Maybe we take, like, two minutes for questions, and, yeah, I really appreciate it. Cool.
J
I can't help but think you're demoing this to the wrong crowd. I mean, the crowd that would be mostly interested in this is the folks who are building BPF programs, and I'd ask them to have some sort of compliance with a certain unified framework, where users can just install some sort of a watchdog or uber-controller that controls all of this, right? I also can't help but think: there are two things to this, how it works, right, and the API.
J
The API comment you made, around, hey, having a singular, unified API might make sense so people can start building common tools and then plug in an implementation, that I agree on. But I can't help but think: what if I want something else? What if I just want to install... Because the way I look at it, the customers, at least the ones I talk with, usually have one or two tools; nobody is in the business of...
B
Right, I think I fully agree with you. I think there's a lot of interest around bpfd, or BPF as a technology on Kubernetes, and everyone's doing their own thing. So I'm not saying we here need to do anything, but I would like to rally all those folks who are interested in a central, Kubernetes-esque place, so we can talk further, to be honest.
A
To add on to that, and just so everybody's aware, I'm a contributor to this project. I said that I thought this would be an okay place to bring it up, because it affects people that do networking and are writing eBPF programs.
A
It is obviously very big in the networking space today, but it is not meant to be a specifically networking thing, necessarily; that's why he's going to be looking at a couple of different SIGs and stuff like that. But I think, over time, especially since we are doing things like, at least in kpng, having an eBPF backend, eBPF is going to be a part of SIG Network; it kind of already is, but it could be a bigger part in the future.
A
We don't have a ton of time, so go ahead, Tim.
E
I think it's great. I love BPF, except that I don't get to play with it very much. I don't think this is exclusively a SIG Network problem, because eBPF is not exclusively a network technology. I am just glancing at the APIs, without digging really deeply, and I'm a little worried that they're still very flexible; it seems like there are a lot of opportunities for people to blow their own feet off. It's not a foot gun, it's a foot cannon. So, proceeding here, one of the things I would ask us to look for is: how do we provide a potentially more opinionated, more rigorous guardrail system that lets people do interesting things while at least lowering the caliber of the gun pointing at their own feet? Agreed. And I don't know who would really own this.
E
I think the appetite for net-new SIGs is pretty low, and so, in order to justify a new SIG, we'd have to show some critical mass, right? So I wouldn't have a problem with SIG Network sort of being the parent SIG for exploring this.
B
And that's the viewpoint we wanted from you, I think. We have to start somewhere, and this group is the best group at making Kubernetes APIs, and we need all the help: I've built an API here, and I've built this API, and it's just my brain; it needs to be other people's brains.
A
Go ahead.
J
I just want to leave another comment for context. I know BPF is big, I know it's important, and I know it's prolific, but I just want to bring to your attention that, in terms of networking, it's not the only cool kid out there.
DPDK is still thriving, right? As a matter of fact, most of the things I have seen are DPDK-based. So I just want us to think about this, if you want to think about it, in terms of an API.
E
With iptables, we never laid out an extensibility framework for people who wanted to do some other thing with iptables: where to drop their thing in, and what semantics they could assume. And this was a pain point for lots of people. So, as eBPF becomes a bigger thing, let's not make that mistake. That's one of my biggest concerns here.
A
Makes sense. All right, thanks again, but we are running low on time, so let's move on. Maciej, do you want to go ahead and talk?
I
Sorry. In our discussions, two questions came up, and we would like to know: do we have to design for those cases? So, basically, the two cases we were thinking of. The first one is creating a cluster, because this is possible when you manually create a cluster, that has a different CNI per node. Is that something that we have to design for in our multi-network work? And then the other question is similar; what I'd like to know is what the percentage of usage for such cases is, and whether that is something we need to design for. So the first one would be what I mentioned: each node having a completely different CNI, because today that is possible, right? If those CNIs, let's say, somehow integrate with each other, or maybe I don't care about that, the cross-node communication works in each of them individually. Do I have to handle a case where an address for that...
E
I think that is behind the curtain, and anything that's operating in the cluster scope really ought not to be aware of what's going on behind the curtain.
I
All right, and I see Dan's thing down there as a response; this is very helpful. And then, probably similarly, the next one is probably connected to the previous one, and I see your response probably answers my question, right: are we supporting a manual CNI config change? And I see that yes, we do. So the one thing is... yes.
I
What I want to say is: I don't have, unless I write some automation, but, on my own, I then go to the node and change my CNI config, right, to something completely else, and, let's say, I do some other operations, maybe like draining the node, and do the whole operation. Should I restart the node to have this new CNI stuff set? That's what I meant here.
E
So this is a recurring topic; we were talking about it just this morning: this distinction between the cluster provider and the cluster operator. And, you know, for self-managed clusters that's probably the same person, but for, like, cloud providers and for managed clusters, it may not be the same people.
E
If, in your statement, Maciej, the "you" who is making this change is the cluster provider, it seems clearly in bounds. If it's the cluster operator, they may be running afoul of assumptions that the cluster provider is making. Like, I know, on the GKE side, if you log into a GKE node and change the CNI config, it might technically work; but if something goes wrong and you call our support people, they are not going to help you, or they're not going to be able to help you.
E
They'll try. And, that said, I also know that it has been an important thing for people to be able to try things out. But I think we, as a project, need to be clearer about what the guarantees are and who owns which set of APIs.
I
And then you do, like, a very, very well-designed, well-tested-out approach for it. So, basically, your answer, Dan, is to the other part of my question, right: for your use case, you don't create, per se, a cluster with different CNIs. You definitely want to change the CNI during the life cycle, but you don't create, on purpose, a cluster with different CNIs. Is that true?
I
Basically, you don't have a use case with different CNIs; you definitely want to change it, with a fully tested case. But, okay, but still, even from that team, you're saying that even different CNIs per node is something that we would have to address?
C
I don't know; I have black boxes, I don't know what they are, right? They all have their own: one is, let's say, macvlan; the other one is a veth interface; the third one is Calico, all right? And those three, let's assume they all can collaborate, and they can communicate between them; they satisfy the condition that all the pods can talk to each other, right? Can I...
E
I think we should not build things that assume that those are the same when they don't need to be. So, the simplest example: I have one node that's using a bridge and one node that's just using ptp. Those both emit IPs; they're compatible at the network level, but they're different CNI implementations. Is that allowed? I think it has to be.
E
Yeah, no, I'm not saying that if your CNI does an encapsulation format, you have to be compatible with the CNI that doesn't. But I'm saying we shouldn't assume that they are the same implementation, or the same version, or the same anything, inside the node.
A
All right, so we are at four minutes, so I don't think we're really gonna get through everything, but, Antonio, yeah.
F
Okay, and then my item after that: so, a bunch of us have been talking about refactoring the kube-proxy API into something nicer and eventually supportable by other people, and, since it's all written out, this is mostly just sort of a roadmap to a bunch of PRs that I'm going to be filing. Some of them, I know, Antonio is going to be looking at and being like, "Why are you doing that?" So I sort of wrote out the whole plan, so that...
A
Cool. So, yeah, Dan's working on a refactor of kube-proxy, and, if you are interested, the doc is here in the meeting notes; please check it out. All right, and we did actually get through everything just in time, so, cool. Well, for those of you who will be at KubeCon: see you there. We will get together; check the SIG Network channel, and we'll probably make some noise about when we're doing a SIG Network lunch. Otherwise, we will see everyone in a couple of weeks, after KubeCon.
E
Have a great time, everyone who's going to be there. Serious fear of missing out; don't post back about how much fun it is, because I don't want to know.