Description
OpenShift 4.8 delivers many important new networking features and enhancements to existing ones. This presentation explains them with a dive into each, including IPv6 support, new router configuration settings, NetFlow support for tracking and monitoring OVS flows, and enhancements to existing features such as CoreDNS and HAProxy upgrades, OVN migration tooling, SR-IOV NIC support, Network Policy, and more.
Karina: Hello everyone, and welcome again to another OpenShift Commons. Today we have a very popular topic, and I know we all love networking. This continues our series of what's new in OpenShift 4.8, and we have lots of great new networking enhancements, with Mark Curry here to give us the update. Please get your questions ready: put them in the Q&A, or put them in chat if you're joining us through OpenShift.tv. We have people watching those channels, so put your questions there as well. Let's go ahead and get started. Mark Curry is one of our consulting product managers, and if you know networking and OpenShift, you've already heard his name, so he doesn't need much of an introduction. Mark, I'm so excited; thank you for joining us today. Do you want to go ahead and dive right in?
Mark: All right. Hopefully, if that has not already popped up on your screens, it will very shortly. First, thank you, Karina, and thank you, everybody, for your time today to talk a little bit about some of the new enhancements and features that you're going to find in our forthcoming OpenShift 4.8 release.
So let's first take a quick, big-picture look at what I'll cover today. Some of the things captured here: you can see IPv6; I'm going to talk a little bit about new router configuration settings for HAProxy; some of our observability effort in the form of NetFlow support; upgrades to some of our core components, namely CoreDNS and HAProxy, and some of the advantages those upgrades bring; a little bit about the OVN migration tooling and an enhancement to it; a little bit about SR-IOV NIC support, plus one big thing that's technically not 4.8 but 4.9, yet still worth knowing for those of you that do SR-IOV networking. Then we'll talk a little bit about network policy: audit logging of events, and network policy on macvlan interfaces. A couple of things that are not in this topic list, which I threw in fairly last minute: I'm also going to talk a little bit about a couple of new CNI plug-ins that we support.
There are many networking developments that we did during the 4.8 time frame, but these are probably the ones that are most requested, or the most visible changes to our customers, so that's where we'll focus the topics of the presentation. Everything is fair game for questions, however. So, first up, let's go ahead and talk about one of the bigger things to have happened to Kubernetes networking recently, and that is IPv6 single- and dual-stack enablement.
We've supported IPv6 on secondary data-plane interfaces for a while now, but as of 4.8, which is built on Kube 1.21, where IPv6 dual stack first went beta and the API finally stabilized, OpenShift now fully supports IPv6 end to end on the control plane for OVN networking on bare-metal deployments. We'll add other IPv6-capable platforms as we can in the future. Backing up a little bit to make sure we're all on the same page: single stack, of course, means that the pod interface has either a v4 or a v6 address on it. You choose one of those, and all of OpenShift networking is going to be 100 percent aligned to that choice.
The latter configuration represents the vast majority of our customer use cases, mostly because single stack is, as I said, 100 percent one family: 100 percent v4 or 100 percent v6. If you need to connect to even one host of the other IP family, then you need dual stack, and we've had several customers affected by that scenario. So most of the time, unless there's a very strong need for single stack, customers will choose dual stack.
So how do you enable v6? It's actually very simple. At install time, you do that typical pause of the install to modify the install-config.yaml file: you specify the IPv6 subnets in addition to the IPv4 subnets, and then just continue with your install.
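As a rough sketch of what that install-config.yaml networking stanza can look like for dual stack (the subnet values here are only illustrative, not prescriptive):

    networking:
      networkType: OVNKubernetes
      machineNetwork:
      - cidr: 192.168.10.0/24   # existing IPv4 machine network (example)
      - cidr: fd00:10::/64      # added IPv6 machine network (example)
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      - cidr: fd01::/48         # added IPv6 pod network (example)
        hostPrefix: 64
      serviceNetwork:
      - 172.30.0.0/16
      - fd02::/112              # added IPv6 service network (example)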
If you're trying to do this post-install, you can still do it; it's still pretty straightforward. You simply edit the network config resource to add the secondary machine network, cluster network, and service network values you're going to need, and those will get rolled out by the cluster operator correctly. So it's pretty straightforward to do.
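A rough sketch of the shape of that post-install change, assuming the cluster Network configuration resource accepts the additional entries (the IPv6 ranges are illustrative, and the exact procedure is in the documentation):

    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      - cidr: fd01::/48          # added IPv6 pod network (example)
        hostPrefix: 64
      serviceNetwork:
      - 172.30.0.0/16
      - fd02::/112               # added IPv6 service network (example)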
A common question I get is: what about other platforms? I mentioned OVN on bare metal, and we're definitely working on some other on-prem platforms that support IPv6, for example OpenStack, but there are no near-term plans to bring v6 to most, if not all, of the cloud providers, at least end to end, because of current issues on those various platforms where we're blocked. For example, some of them only offer their APIs and their DNS over v4, so you can't do dual stack.
At least one of them has a load balancer that accepts v6 but translates it to v4 internally, so it's not really doing v6, and so forth. So until and unless some of those issues are resolved, this will likely be the status quo for the platforms we support today. The next thing I want to talk about, and there's a lot of content on this one slide: there are many new features and enhancements to OpenShift networking in 4.8.
As I mentioned, I'm going to cover some of them here, and then there's even more I'll cover on the next slide, a very similar slide, as a sort of representative cross-section, roughly divided into ingress and egress enhancements and general networking enhancements. This first slide is roughly across the ingress and egress enhancements.
We get all the new features that are detailed in a link that I believe is down in the notes section of this slide, which will be shared with you, including some of the major ones listed in this bulleted section. There's increased performance; there are all kinds of enhancements for security hardening and related options; there are health checks that have been added; and there's new and improved observability into the traffic and the workloads, so that you can do better debugging. Another big one: one example of that is Amazon's ELB, and we have a number of customers that have asked for that functionality.
Another one is how the router backend processes endpoints. This is critical to shuffling endpoints for proper distribution of requests when you're running multiple routers with a load balancer in front, for example an F5, so this will help balance out that load internally on those endpoints a little bit better. A couple more options: tune.maxrewrite and tune.bufsize.
Some customers have the use case of very large header data, and I'm talking about on the order of 48K, somewhere in the 48K to 64K neighborhood, which is very large, and maybe a bit unexpected on our part. But if HAProxy's buffer for that header data is not large enough, then that data gets dropped. So we now support the configurability of these parameters to allow for that. We don't limit the value that can be used; however, keep in mind that the larger the value, the more memory the cluster is going to consume as that configured value goes up.
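A minimal sketch of what that tuning can look like on the default IngressController, assuming the 4.8 tuningOptions fields that expose these HAProxy buffer settings (the byte values here are only illustrative):

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      tuningOptions:
        headerBufferBytes: 65536           # total buffer available for header data (example value)
        headerBufferMaxRewriteBytes: 8192  # portion reserved for header rewrites (example value)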
Another one: a customizable number of router threads, also known as the nbthread parameter. Since 4.1 we've supported the nbthread parameter that defines the number of threads used by HAProxy. What we did was an engineering analysis, and we determined a best-practices value for that to be four threads for many, but not all, workloads, so right out of the gate we assigned a fixed value of four threads to nbthread.
So what we've done is make this customizable now in 4.8.
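A minimal sketch of that, again on the default IngressController and assuming the 4.8 tuningOptions thread-count field (the value shown is only an example):

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      tuningOptions:
        threadCount: 8   # overrides the previously fixed default of four HAProxy threads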
Switching gears a little bit: IP failover, keepalived support. The keepalived image is interesting; it's been in the product for a long while. We dropped the documentation for it, but we never dropped the keepalived image, and so it became a bit of a hidden secret within the product.
Customers still wanted to use it, and they wanted our formal support for it. So, of course, what we did in OpenShift was formalize support for the use of that image to provide high availability in OpenShift, and as part of that support we of course needed to introduce documentation and a procedure for implementing it on cluster nodes; not brand new, but rather a current procedure for implementing it. So that's what we've done in 4.8. Another thing: Gateway API. In OpenShift 4.8 we will present a developer preview of Gateway API.
This was formerly known by several names: it was known as Ingress v2 in the very beginning, then it changed names to Service APIs, and now it's changed to Gateway API. Hopefully that sticks; I think it's the most appropriate name for it.
Gateway API represents a unifying technology for ingress, so we're targeting integration of it with the likes of another project that is Envoy-friendly, like Contour, as the primary ingress controller for traffic alongside HAProxy. That would represent an enhanced integration with, for example, OpenShift Service Mesh, with its Envoy project involvement.
Of course, there's going to have to be a great deal of performance and scale testing, and enterprise hardening, to make sure it is going to serve our customers in a positive way, but that is underway currently. Another thing: a global access option for GCP's ingress internal load balancer. Without this particular option, traffic originating between projects in a shared VPC network must be in the same region as the load balancer being used.
This facilitates cross-region communication for shared VPC deployments.
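As a sketch of how that option is surfaced, assuming the 4.8 GCP provider parameters on the IngressController (treat the exact values as illustrative):

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal
          providerParameters:
            type: GCP
            gcp:
              clientAccess: Global   # allow clients in any region of the shared VPC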
And finally, another one: an egress IP load-balancing enhancement. This is for our current OpenShift SDN customers, to spread traffic more evenly across cluster nodes. What this does is remove the single-node chokepoint of egress IP, where the IP address is assigned to one node in the cluster and therefore all traffic must go out that node to be assigned that egress IP.
You remove that chokepoint, and you can spread the load across multiple nodes, with multiple IP assignments across cluster hosts. A similar OVN enhancement is forthcoming.
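For context, egress IPs on OpenShift SDN are configured by patching fields onto the existing NetNamespace (per project) and HostSubnet (per node) objects; a minimal sketch of the automatic-assignment style that this enhancement builds on (the addresses are illustrative):

    # patched onto the project's NetNamespace
    egressIPs:
    - 192.0.2.10
    # patched onto one or more nodes' HostSubnet objects
    egressCIDRs:
    - 192.0.2.0/24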
All right, let's switch gears a bit and talk a little more about some general networking enhancements. Again, this is a small sliver of a much larger effort that's going on, but these are some of the things that are often requested or asked about. A big one is network observability.
We have a rather large effort underway across all of OpenShift to increase observability of everything, and specifically for networking. Networking complexities make cluster administration really difficult, even for experienced administrators, so the goal of networking observability is to improve the quality and visibility of OpenShift networking metrics.
Some of the important networking correlations are presented inside the UI, or can be drawn for you, for an improved operational experience.
One of the big things we've done specifically in 4.8 toward this goal is to provide network flow tracking and monitoring for network analytics. This will give us a supported way to monitor traffic into and out of the cluster. This specific enablement is a first step in preparation for adding a NetFlow, sFlow, or IPFIX collector to OVN-Kubernetes in a follow-on release, so this is going to be really helpful for troubleshooting performance issues, capacity planning, security audits, and similar. Look for much more from our observability efforts in follow-on releases.
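A minimal sketch of what enabling flow export can look like on the cluster Network operator configuration, assuming the 4.8 exportNetworkFlows API (the collector address is illustrative):

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      exportNetworkFlows:
        netFlow:
          collectors:
          - 192.0.2.50:2056   # external NetFlow collector (sFlow and IPFIX stanzas are analogous)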
Also in 4.8, we've added some key SR-IOV-capable NIC hardware support for our customers. A lot of these requests came from customers and partner organizations, and represented here is the short list of what we did specifically in 4.8.
We've added the Mellanox CX-5 family, the Intel Columbiaville family, and a specific HP Ethernet adapter. I know this presentation is really about what's new specifically in 4.8, but I do have to mention that in our follow-on release, 4.9, we've integrated with RHEL's Fast Datapath team and their test harnesses.
So from 4.9 onward, any NIC that RHEL supports for SR-IOV, OpenShift will support as well. With this list of three different adapters that I presented to you, hopefully I won't have to keep presenting adapter lists going forward. In 4.9 we do have some specific ones we added in parallel to this harness integration with RHEL Fast Datapath, but hopefully in the future this will just be something you look up in our documentation to see what RHEL supports.
Look for more on that in a future release. We've also added network policy support for macvlan interfaces for our customers. As a quick background: when you're using macvlan, a virtual network interface gets created, and it is then exposed to the local switch with a new MAC address.
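The mechanism here mirrors the familiar NetworkPolicy shape but is scoped to a secondary network. A rough sketch, assuming the MultiNetworkPolicy kind and the k8s.v1.cni.cncf.io/policy-for annotation used to bind a policy to a macvlan network-attachment-definition named macvlan-net (all names are hypothetical):

    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: allow-same-namespace                     # hypothetical policy name
      annotations:
        k8s.v1.cni.cncf.io/policy-for: macvlan-net   # the secondary network this policy applies to
    spec:
      podSelector: {}                                # all pods attached to macvlan-net in this namespace
      ingress:
      - from:
        - podSelector: {}                            # only allow traffic from pods in the same namespace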
This will be just part of the larger plan and vision for that. This next one I'm going to mention quickly here, because I'm going to talk about it in more depth on a following slide: we've enhanced the OpenShift SDN to OVN-Kubernetes CNI migration that we support to include UPI deployments as well, which pretty much completes our picture. I'll come back to that in a moment.
For security and compliance reasons, our customers have also asked us to enable audit logging of network policy events. What I mean by that is, for example, when a packet is either accepted or denied; that went entirely missing from any sort of logging or auditing events, so what we've done is add that back into the product.
This information is presented to the built-in logging stack and custom Kibana dashboards, and it is useful to support our customers' compliance and security policies, where activity on the firewall needs to be inspected at run time to monitor suspicious behaviors. It basically acts in the role of an IDS, an intrusion detection system, or can even be used for post-mortem analysis.
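A minimal sketch of what turning this on per namespace can look like with OVN-Kubernetes, assuming the ACL-logging annotation (the severity levels shown are illustrative):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-project                               # hypothetical namespace
      annotations:
        k8s.ovn.org/acl-logging: '{"deny": "alert", "allow": "notice"}'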
Lastly, there have been a couple of component updates within OpenShift networking. CoreDNS: we updated it to version 1.8, and this includes a number of feature enhancements and bug fixes. Along with 1.8, we also provide the ability to control OpenShift DNS pod placement; this is for those customers with extreme workloads.
They need to be able to control where exactly DNS runs: either so that it's locally available to where it's needed for optimal functionality, or maybe to isolate it so that DNS itself isn't bothered by the workloads, or vice versa. This is a big part of the CoreDNS 1.8 work and some of the functionality we've had.
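A minimal sketch of that placement control, assuming the 4.8 nodePlacement field on the default DNS operator configuration (the selector and toleration here are illustrative):

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      nodePlacement:
        nodeSelector:
          node-role.kubernetes.io/infra: ""      # pin DNS pods to infra nodes (example)
        tolerations:
        - key: node-role.kubernetes.io/infra     # tolerate a matching taint (example)
          operator: Exists
          effect: NoSchedule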
All right, I mentioned that I was going to come back and talk about the OVN migration tooling. As I previously mentioned, the migration tool is now supported on all platforms that we support, and this includes platforms in both UPI and IPI installation modes.
I don't want to presume too much here: UPI is user-provisioned infrastructure and IPI is installer-provisioned infrastructure. Whichever mode of installation you chose, no matter the platform, we support the OVN migration tool on that platform. OVN is supported as of OpenShift 4.6, but it's not the default out-of-the-box networking.
Customers want to take advantage of the latest feature enablements and enhancements, and those are really being done in OVN.
However, for customers where that is not an option and who still want to use OVN and just convert their existing clusters to it, we want to make that as painless as possible. Understand that swapping the networking on a cluster is a non-trivial process, and there is likely going to be some amount of service disruption. We try to minimize that wherever possible, and typically what people will also do to mitigate it is to schedule this during a maintenance period.
The graphic on the right outlines the procedure for the migration in a little more detail, and this is also going to be in the 4.8 documentation. Also recognize that this is for going from OpenShift SDN to OVN-Kubernetes, but there is a similar (different, but similar) process to roll back if you ever needed to do so.
Specifically, this is migration, and I should point out that it is not migration from just any SDN or CNI plug-in to another; this is specifically supported and fully tested between OpenShift SDN and OVN-Kubernetes.
This particular migration is logically split into two phases: one where the Machine Config Operator prepares the nodes for the new networking.
That first phase involves a serial reboot of all the nodes in the cluster, so larger clusters will take longer to migrate. This is the current way that it's done; we're looking for a way to optimize it so it is not done serially by the Machine Config Operator, so look for that in a forthcoming release. In the second phase, the cluster network operator... I'm sorry, let me back up.
The nodes come up with a cohesive understanding of the new networking. If there's a timing issue with one of the nodes and the networking didn't quite come up properly, just reboot it again; it'll come up fine. So that's the process, and we've tried to make it as painless as possible.
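For reference, the migration is driven through the networking configuration objects. A rough sketch of the two key spec changes, assuming the migration field on the operator configuration and applied as patches (consult the 4.8 documentation for the full, ordered procedure):

    # Phase 1: ask the operator to prepare the migration (Network.operator.openshift.io, name: cluster)
    spec:
      migration:
        networkType: OVNKubernetes

    # Phase 2, after the Machine Config Operator finishes: switch the cluster network type
    # (Network.config.openshift.io, name: cluster)
    spec:
      networkType: OVNKubernetes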
Keep in mind that, as I said before, you're basically doing open-heart surgery on yourself when you're doing a migration, so plan accordingly.
The last slide I wanted to include here was to talk a little bit about some new plug-ins. This is not really tied to an actual development effort in the 4.8 release, but these certifications did happen during the development cycles of 4.8, so I think it's worthwhile to bring up and discuss. The two new Kubernetes CNI plug-ins for OpenShift that are certified and supported are Isovalent's Cilium, as an option, as well as VMware's Antrea plug-in.
Understand that Red Hat does not provide support directly for these plug-ins; that's never been the case. The only ones that we provide direct support for are, obviously, OpenShift SDN, our current default; OVN, our next-generation networking; and, on the far right, Kuryr-Kubernetes, for those people who were looking for the equivalent of a pass-through of networking down to the underlying OpenStack Neutron networking layer.
So we don't provide support directly for any of the other ones, but we partner with the organizations responsible for those CNI plug-ins to provide support. Because of that, just recognize that different third-party plug-ins have different levels of testing and support with our other layered products, such as OpenShift Virtualization and OpenShift Service Mesh, so consult your account team for more information.
If you have any questions about that, reach out; but we welcome these new plug-ins on board, and hopefully they will be useful to our customers going forward. And so that brings us to the end of what's new, at a high level, in networking in OpenShift 4.8.
Karina: Thank you, Mark; you have this timing down perfectly. Okay, we do have a few questions. First of all, can you put it up, or maybe stop presenting?

Mark: Sure.

Karina: I love the thank-you slide, but...
Mark: If you have something other than that, we do have several other mechanisms to get that traffic in, including node ports and exposing external IPs. Those are probably the two most popular ways.
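As a minimal sketch of those two approaches on an ordinary Service (names, ports, and the address are all illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-tcp-app          # hypothetical service
    spec:
      type: NodePort            # expose the service on a port of every node
      selector:
        app: my-tcp-app
      ports:
      - port: 5060
        targetPort: 5060
        nodePort: 30060
      externalIPs:              # alternatively, accept traffic sent to this external address
      - 192.0.2.20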
But one of the things that I talked about during this presentation, you may recall, was Gateway API and how we're trying to unify ingress.
I fully recognize that I'm telling you to do something a different way simply because you're using a different protocol, and so one of the goals I have for a longer-term, actually more of a mid-term, vision for OpenShift networking is to unify the way we do ingress, so that you shouldn't have to care what protocol you're using. You shouldn't have to care if you're using, say, SCTP because you're doing voice-over-IP applications; it really shouldn't matter.
Whatever you're doing, you should be able to configure your ingress the same way. That's our goal; it's not the reality today. You can do it, and our documentation should cover that as a starting point; as I mentioned, check out exposing node ports and external IPs with services. But in the future we want to bring that together, and in fact, as part of that unification, you may say, "Hey Mark, I don't want you to interact with my traffic at all."
Karina: Awesome, thank you. If that raises a follow-up, please put it in the Q&A, along with any new questions, because we have Mark here and he's so in demand; it's not often that we get time with Mark. All right: is it possible to turn off the network observability features? I assume they will introduce additional CPU cost to the cluster, and some people may be sensitive to that. What do you think?
Mark: Great question. Yes. We've gone through several iterations, and this is not specifically limited to networking observability, but we've gone through several iterations of how we would deploy this, and in the beginning we had several different layers of verbosity that could be configured.
Ultimately, it turns out that our users are really asking for one of three settings for some of this observability, in particular for any observability elements that are going to add overhead to the cluster, like additional logging. The first of the three settings is to just turn it off altogether, "I don't need that information," and you can do this per observability function; we're setting this up to be more of a per-function control.
This one is definitely going to incur the most overhead, so why on earth would anybody want it? Generally speaking, it's for really, really in-depth debugging sessions, but probably more likely for those customers who have regulatory compliance requirements that they have to meet.
Those requirements say you need to capture everything that happened, so that if, for example, there was ever a break-in, you can go back and do some retrospection to understand exactly what happened, how they were potentially able to break in, what they did once they were in, and so forth. So I expect that one of those three settings will be applicable to most, if not all, of our users; I'm happy to entertain additional options, but that's the target for these kinds of things that might incur overhead.
Yes, great question. Load balancing the API today is kind of "load balancing" in double quotes, and that's with the keepalived IP failover mechanism; it's really more of a high-availability kind of thing. What our customers do today, and one of the things that we support with our partners, is the ability to use something like a commercial product, say an F5.
You put that in front, you use it to do the load balancing, and if you consult our knowledge base articles there are documented procedures for doing that. The support for something like that, as you might imagine, falls onto the external load balancer, like the F5, but we do provide a procedure for doing it, and if you have F5s, presumably you have support for those, so work with that provider.
One thing that's really interesting that's coming in 4.9 (I know this talk was about 4.8, but I don't think I can get away with not mentioning it): starting in 4.9 we're going to be fully supporting MetalLB.
So if you have a bare-metal deployment, in 4.9 we're going to be supporting MetalLB layer 2, and you can use that. In 4.10 we're targeting MetalLB with BGP support; the BGP support will mostly be for advertising routes to Kubernetes services using BGP, but I think some form of MetalLB may be useful for this as well, and we will provide more information about how to do that as we get closer to the release of MetalLB.
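For bare-metal clusters, the point of MetalLB is that an ordinary LoadBalancer Service finally gets an external address. A minimal sketch of the consuming side (the Service itself; the MetalLB address-pool configuration is separate and version-specific):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app               # hypothetical service
    spec:
      type: LoadBalancer         # MetalLB assigns an address from its configured pool
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080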
Yeah, the specific details about that I probably would not be able to answer in a different way than I already have. The advanced, or rather the custom, networking configurations that you can do for VMware installs are fully documented in our product documentation.
Anything you're doing beyond what's specified in there, like I said before, essentially would involve integration with a third-party product, and so some of the support for that would fall onto that provider.
We are discussing that right now. It's a pretty difficult problem to solve in a universal way for our customers, but it is on our backlog for feature enhancements, and to date we are in the design and architecture review of that possibility.
Really top of mind lately has been our customers. Not only has the product matured; so have our customers' workloads and how they use the Kubernetes platform to deliver on those workloads. That sophistication is growing at a pretty rapid pace, and so is the sophistication of the networking. Some of the advanced networking features being asked for are things that three years ago would have been unheard of to ask for, or rather would have raised a few eyebrows if somebody had asked. But with the addition of the telco market to the Kubernetes platform scene, and all of the requests they bring, it turns out that a lot of those advanced networking features we always thought were pretty much limited to telco organizations are actually super useful to some of our higher-end enterprise organizations as well. So I'm seeing increased adoption of, and requests for enhancements to, some of those features: "Hey, I saw you did this; can you also support this, or make this configurable for us?" Look for a lot more of that; it's not going away, it's only expanding. Another really big thing is scale and performance.
Again, as our customers' deployments grow in sophistication and size and scale, a really big thing that I'm seeing is that people are essentially redlining the capabilities of the networking. So what we're trying to do is approach that from several angles. The first is better documentation.
When I look at our documentation, I somewhat agree with some of the sentiments I've heard recently: it doesn't really fully explain things, it doesn't explain best practices, and it maybe doesn't provide really great guidance for how to do that. We're improving that for sure right now. But more importantly, to a certain extent, these customers shouldn't have to worry about that.
There should be some sort of built-in intelligence and automation that says it's time to start sharding, time to break this workload up into multiple shards, spread them out across nodes, and have all of this done sort of automatically for our customers.
A very recent request for an enhancement that I got really brought that to a head, and there are many customers out there that would benefit from this and have now said, "Yes, please, we would love to see that." So look for more around that in a future release.
I think a very recent thing is this: you heard me talk earlier about some of the new CNI plug-ins we have, and one of those was Cilium. One thing that, and maybe "surprise" is the wrong word for this, but one thing that I'm paying close attention to here is the adoption of, and asks around, eBPF for OpenShift networking.
Cilium has been certified now for one, maybe two months, probably closer to two, and I would have thought I'd see greater adoption, but I have a feeling customers might be waiting for us to more natively support eBPF in our networking deployments; please understand that that is forthcoming. When you provide greater control of eBPF to the users, along with that come some additional questions about how we support it.
If we give you so much more control, you now have the ability to do a lot more things, so how do we support that? There are questions around that that still need to be resolved, but this is something for you to look forward to.
Karina: That sounds like we have a lot to look forward to, at least from the last 45 minutes of talking networking and OpenShift; there are lots of great new features and enhancements. Is there one particular thing that you're especially excited about? I know networking is really exciting, but what about you, Mark?
Mark: I'm very excited about the forthcoming BGP capabilities. We have a number of customers that have been asking for BGP, and in fact they've chosen a different CNI plug-in specifically, and only, because it happens to provide BGP capabilities, and our default out-of-the-box networking has not supported that in the past. So look for that starting in 4.10, as I mentioned earlier. This BGP capability is really going to open up a lot of doors, especially for some very large customers.
We have customers that prefer a sort of flat network with routable service IP addresses, and the advertisement of those routable service IP addresses is something they are very, very much looking forward to, so I'm pretty excited about the future of that. I'm also excited about some of the new avenues we have with eBPF, as I mentioned, and how eBPF will provide enhanced observability in the product, and about the integration we have with some new layered products, like our Advanced Cluster Security (ACS), and multi-cluster networking being provided, viewed, and managed by something along the lines of Advanced Cluster Management (ACM).
The vision here is that you define how your network is configured on a particular platform, using, for example, the particular cloud-native load balancers of that platform, and you have the ability to make a few simple modifications to that API to pick up your deployment and place it onto just another platform.
If you want to move from one cloud provider to another cloud provider, or to bare metal, or whichever way you're moving, you change that outer layer that defines the API surface for that provider, and underneath it unlocks, sort of quote-unquote, additional API capabilities to configure some of the specifics of that particular cloud provider. For example, if you were on AWS, you would unlock one of three load balancers: ELB, NLB, or ALB. If you happened to choose ALB, it would unlock additional configurations and capabilities around the ALB's WAF, the web application firewall functionality, and how you could configure it. Having this global API to do all of those things is absolutely the direction we're going, for improved operational and ongoing management, especially as our customers continue to grow into multi-cluster deployments.
Karina: I was just thinking... all right, we'll talk offline about all the other exciting stuff. But since you mentioned the different workloads: we have a question on whether there are plans to segregate traffic onto different uplinks based on the types of workloads, and to make OpenShift workload-aware.
Mark: Yeah, that's a really great question. There is a huge amount of momentum within OpenShift engineering, and on the business side of things, to add increased, enhanced integration with host-level networking; we're an operating-system company as well, right?
We have this phenomenal enterprise operating system, Red Hat Enterprise Linux (RHEL), and it has an amazing array of capabilities for how you can configure networking: how you configure the different NICs in your hosts, how you align interfaces to those hosts, how you do bonding across those interfaces, all those things. Our customers are asking us to bring that level of configurability that's in RHEL networking into OpenShift.
Today, you install OpenShift and, by default, it uses whatever the primary NIC is in the box, and it binds the primary gateway interface to that NIC, so all your Kubernetes control-plane traffic flows in and out of that primary interface.
Now what they're asking for is: well, we don't necessarily want all of our control-plane traffic going out that primary interface. We'd like to instead maybe carve it up and send it out specifically to a secondary interface, and use that primary interface for, maybe, data-plane traffic or a specific application's traffic; and maybe we want to tie that to a very specific NIC, or a set of NICs, maybe two NICs that are tied to a couple of top-of-rack switches with bonding between them, and all those kinds of settings.
It's not just interface configurations but host-level networking in general. From that perspective, some of the telco use cases require this, especially with their real-time component requirements; they require things like Precision Time Protocol, and that's a host-level thing, that's in the kernel. So how do we enable that in a meaningful way for OpenShift workloads?
By the way, we do support PTP with OpenShift today, but that's an example of something where we needed to increase or enhance the integration with, involvement in, and manageability of host-level networking from within OpenShift to serve the workloads that people have. So that's a pretty long answer; I hope I answered your question, but maybe the TL;DR (too long, didn't read) on that is: we are trying to bring host-level networking in and create a tighter integration with OpenShift networking, so that you can carve up your networking at the host level.
And this becomes especially important in bare-metal deployments, right? Because with some cloud providers I don't necessarily have access to some of the hardware-level capabilities that those instances are running on, but for bare-metal deployments I presumably have full access, or the user has full access or knows somebody who has full access, to the host, to be able to configure things the way they want. So providing that for something like bare-metal deployments and OpenStack deployments is super important at that level.
Yeah, absolutely. I was just on a phone call with our bare-metal product manager before this, to talk about how we're going to align.
Karina: Awesome. All right, another question, since we're getting down to the wire: considering networking inside OCP is more mature than before, do we have native network diagnostic tools and route topology? Could you expose it in some sort of operator dashboard which gives log analysis, packet drops, and detailed information on flows? What do you think?
Mark: Yes, good question. As part of our much larger OpenShift observability project, we have work specific to networking observability. As I mentioned before, this was really catalyzed by our telco customer base. For telco developers, networking is their business; whereas some developers don't want to have to think about networking, they just want their application to work, for a lot of telco developers networking is the business, and they need visibility into exactly what's happening.
Some of the things you're going to see that are forthcoming, that are in the works: for example, today with OpenShift Service Mesh we incorporated the Jaeger project to provide tracing capability for traffic, and there's no reason why we can't add that to OpenShift at large. So we're looking to incorporate some packet-tracing capabilities and OVS flows that will further enhance this, and bring a really nice interface into the actual OpenShift console to present much deeper inspection of packets: packets that were dropped, traffic, throughput, flows that are active, and so forth.
This will help, as a starting point, provide that kind of information and those metrics that developers, and even cluster administrators or project administrators, require to understand what's happening with their particular projects. And I could just go on and on: there's also improved visibility into network policy, being able to view it better.
For example, today, if I said, "Hey, how do you have network policy configured on your cluster?", you could spit out some YAML for me.
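For a sense of what that YAML looks like in practice, here is a small, generic NetworkPolicy of the kind he is describing (names and selectors are illustrative):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-frontend      # hypothetical policy
      namespace: my-project
    spec:
      podSelector:
        matchLabels:
          app: backend               # pods this policy protects
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend          # only frontend pods may connect
        ports:
        - protocol: TCP
          port: 8080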
I've been doing this for a long time, and when I look at that YAML my eyes are going to glaze over a little bit, because it's not that straightforward to read. It is readable, but that's not the way security should be answerable. So what we've done is start making some of the network policy configuration more digestible by the people who need to digest it: for example, network administrators and security administrators, who are used to talking in the language of ACLs and zone rules, right?
They work on these things on a daily basis on their switches and everything else they interact with, so why not present network policy in that format? Maybe present it in a graphical format, so that a picture truly is worth a thousand words: you say, "Hey, that red line, or that line that doesn't exist between these two objects in my project, that's a problem," or maybe it's by design, and you use that to confirm you've established proper network policy to limit who can see what within your project.
So all of these things are coming to bear. I would say 4.8 was really where the gears started to turn to produce some of these enhancements to observability, but we're picking up speed, so look forward to 4.9 and 4.10 to see some big jumps in that capability.
Karina: Awesome. There are not many people who can make networking sound as exciting. All right, we have a final question: is it possible for a CSI driver to mark traffic with IP DSCP values? These DSCP settings would be used on ACI switches for delivering quality of service; storage traffic shall be prioritized and not dropped if marked with the correct DSCP settings.
Mark: So, when I hear CSI I immediately think storage, but if you're talking about IP storage traffic, and I think what I heard was IP storage traffic and specifically the QoS of that traffic: today we have some QoS capabilities on SR-IOV traffic, but we do not have it upstream in the larger Kubernetes ecosystem.
So one of the things that we have started is to upstream some end-to-end QoS capabilities within Kubernetes, to allow for some of the kinds of rate limiting you would expect from QoS, and it's not just storage IP traffic. There are a lot of customers that want to be able to put caps on basically any kind of labeled traffic from their applications: maybe they want to rate-limit backups, maybe they want to rate-limit a specific application.
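One narrow piece of this already exists today: per-pod ingress and egress bandwidth caps via annotations. A minimal sketch (the pod, image, and values are illustrative), which is much narrower than the end-to-end QoS being described:

    apiVersion: v1
    kind: Pod
    metadata:
      name: backup-job                        # hypothetical pod to rate-limit
      annotations:
        kubernetes.io/ingress-bandwidth: 10M  # cap traffic into the pod
        kubernetes.io/egress-bandwidth: 10M   # cap traffic out of the pod
    spec:
      containers:
      - name: backup
        image: registry.example.com/backup:latest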
Or, another one I've heard a lot lately: the ability to rate-limit incoming traffic, for a couple of reasons: to mitigate distributed denial-of-service attacks, and also because some administrators have asked me for the ability to rate-limit the streaming of images from, say, a Quay registry into new projects, because technically somebody could deny the service from within.
So they've asked for these kinds of capabilities. Now, there are different ways to attack this; maybe we build it out in Quay itself for that kind of use case. But still, there are many other use cases in the larger scheme of things that could benefit from end-to-end QoS within Kubernetes, and of course that's where we always work: upstream first.
Karina: And I love that we are ending on you saying we do upstream first. So if you're wondering what's happening with networking as well, just go look upstream, right?
Mark: We are customer-centric in our roadmap, so if you heard something today that you would like to see done differently, or if you didn't hear something today that you wish were in Kubernetes or OpenShift, please reach out. We are customer-driven, and your requests of today could be product realities tomorrow.