From YouTube: Kubernetes Networking Innovations Marc Curry @Red Hat @OpenShiftCommon Telco SIG Full Meeting April
A
On the mailing list for transparency. Today we have a guest speaker from the product management group of the OpenShift team, Marc Curry, whom many of you may already know. We've asked him to come and talk about Kubernetes network transformation. Myself and Paul Lancaster are the co-chairs for this event, and before we get going too far, I wanted to mention two upcoming events: there's a telco panel at the OpenShift Commons Gathering, and there are a number of interesting folks on it.
A
However, if you don't already have a ticket for it, it is sold out. But the good news is that we will record all of the sessions there and upload them to YouTube, and we'll also be live-streaming the event. So if you are so inclined and have Facebook, you can watch along with us and see how the day goes. There is also, a week from now at Summit, a telco day on that same day, May 6th, in another room; I will post the link to that too.
B
No, I just look forward to seeing people there. I will be, as Diane mentioned, moderating a panel of customers who are using OpenShift, or containers and Kubernetes in some way, within their telecommunications service provider business, mostly on the network side of their business.
B
So I will be chairing that panel. For the part when I'm not at OpenShift Commons, I will be over at the telco partner day. And then, of course, if anybody would like to meet during Red Hat Summit, I'm available at various times; you can always reach me, get in touch with me. I'll put my name and contact information in the chat window for those who would like to email me and get a chance to speak with me.
A
The other thing I forgot to mention is that there will be an OpenShift Commons booth in Community Central at Red Hat Summit, and I'm going to force Paul to come and hang out with me there for an hour. We'll post the times for that, so the telco folks can have a sort of meet-and-greet there as well. I've pretty much scheduled the whole thing so that there will be lots going on there, and anyone else who's an OpenShift Commons member and is interested in hosting is welcome to come.
A
It's kind of my trick way of staffing the booth: getting members to come and hang out with me for an hour and talk about whatever they're into with people who come to the booth. So there's that opportunity. Without further ado, because I want to make sure we use Marc's time wisely: Marc, how about you take over the screen sharing and let's kick off your talk now.
C
Okay, so thank you for joining me today. What I'd like to present is something that I worked on in collaboration with Dana Nehama from Intel. We've been collaborating quite heavily, Red Hat and Intel together, on a number of different solutions that are helping to enable telcos in their transformation to next-generation technologies and infrastructure deployments.
C
This is actually a reprisal of something we presented at Open Networking Summit about a month ago, so I'd like to talk to you a little bit about what we're doing to help drive some of that network transformation that has to happen, specific to the work that's going on in Kubernetes.
C
Our customers are familiar with the value of cloud computing and Kubernetes. I'm a product manager for OpenShift, and I focus a lot on the container infrastructure for it, which especially includes networking as well as other components. While OpenShift does a tremendous job of harnessing those tools, our customers are also looking to us to help them with the next steps.
C
Beyond the way we've always done things, this can roughly be broken down into three different transformations that have to happen. First, we have to transform our business: how we can plan for and sustain the expected scale and growth opportunities of an increasingly complex and populous market that's really global in scope.
C
We require a network transformation, so we have to change the way we think about our networking. It has to be flexible enough to adapt to new requirements quickly and to new technologies as they become available, and agile enough to adopt these new just-in-time technologies, especially given the increasingly adopted world of open-source solutions and the accelerated pace that entails. We also need to be able to scale to some massive numbers. And then, on the cloud side of things, we also have to transform our cloud approach.
C
So we also need to adopt cloud-native practices. What this means is that we want best-in-class solutions on each public and private cloud provider's infrastructure, and we want to avoid vendor lock-in. We also want to optimize the performance of our solutions by providing the ability not only to place our workloads, but to move some of them to infrastructure where it makes sense: move them closer to where they're needed geographically, move them to hardware that has the technology that's required, and so on.
C
But what people are telling us is that they don't want to have to learn a new tool that is highly customized and highly specific to each one of those cloud providers. So the goal here is to provide a seamless, single-pane-of-glass development and delivery solution, covering all the tooling and delivery mechanisms, across the different clouds. This is exactly what we're doing with OpenShift, and we're partnering with Intel to deliver it.
C
Obviously there's hardware technology involved, and I'll give you some more details in a few slides on the roadmap of exactly how we're doing that. As for how we, as an industry, move through this transformation: as we've modernized, the development of our applications is accelerating. Where we are today is in this yellow box area here. We've essentially mastered bare-metal, single-node, single-data-center server delivery.
C
We've done a great job with NFV on virtualized infrastructure, but now what we're doing is adding to, and in some cases migrating completely to, cloud-ready and cloud-native infrastructure. So, moving from or working in conjunction with virtualized network functions, we're now adding container network functions to deliver much greater scale and, in some cases, performance. The next step after that is breaking things down still further into microservices, for even greater adaptability and performance.
C
So, to get there, what do we need to do? There are some key challenges for this transformation, which can be roughly broken down into six different segments. The first one is automation. Delivering at the scale that we're talking about, and across different cloud providers, is beyond a human being sitting at a computer and handling it through their day-2 operations.
C
If we don't get to a place of full automation, not only for deployment but for the day-2 operations and management of the infrastructure, we're not going to be able to meet this challenge. So automation is very, very important. Next, standardized interfaces: as the technology changes, we can't be in a position where what we developed yesterday no longer applies tomorrow. We need to be able to build on standardized interfaces and APIs so that we can grow.
C
That way we avoid vendor lock-in, so that the next new technology will work in the same way you've always expected it to alongside your other applications. Then resource management: we need new ways of understanding what it is we're deploying to. As an operator, for example, I need to know which nodes in my clusters have the capabilities that I require: which ones have a certain amount of memory, which ones have a certain amount of CPU power, which ones have SR-IOV-capable NICs.
C
We need that level of understanding in order to build an intelligent, automated system that we can deploy to and manage. We need data plane acceleration: our needs are only going to grow, and the amount of performance we require is not static, so it has to grow with the capabilities of the underlying technology platform. And security: we absolutely cannot compromise on that while we work on any of these other challenges. We need an enterprise-quality solution where platform security is not something you disable to get something to work.
C
We must continue to deliver on the promise of production-quality applications. The other challenge is migrating to containers. Containers are quite simply more agile than a virtual machine; there are a number of benefits, which I won't expand upon here since it's easy enough to Google what all those advantages are. But suffice it to say that a number of the virtualized network functions we have today run far more efficiently when broken up into appropriately sized and developed containers.
C
So those are the challenges to be addressed. We have to match them to the platform technologies, and this is where we work with partners like Intel, Mellanox and so forth, to make sure that we are developing for the hardware that's most common and most prevalent in cloud provider data centers and in on-premises data centers today.
C
We continue to work with our partners to make sure that we align closely with them. We also need to account for an accelerated ecosystem of solutions. Red Hat is obviously all in on open source, and open source is a far more accelerated mechanism for delivering the kind of solutions that we require. There are lots of options, so we need to be able, in an agile way, to flex to include the best technologies available today, as well as to adopt the best-in-class that's going to be available tomorrow. Open source is really the way to do that: full transparency on how things work and on APIs. I think we have general agreement in the industry on that, so I won't spend too much time there. Now, specific to containers, here's where we get a little bit closer to what I do on a day-to-day basis. What exactly are the challenges in containers?
C
Why are containers not ready for telco NFV solutions today, and what are we doing about that? We can roughly break those challenges down into four groups. The first is Kubernetes networking. Kubernetes is the platform for orchestration and management of containers that the industry has standardized on, and what we, as a leading contributor to Kubernetes, want to do is make sure that it meets the use cases for telco NFV. So what are those challenges?
C
It's networking in general; it's data plane acceleration. We need to make sure that containers can take advantage of the types of kernel-bypass technologies that we've grown accustomed to in the data center, and some of those, quite frankly, just have not existed. But I'm going to talk about our progress there and where we're going to develop them. Then resource management: again, we need this enhanced platform awareness.
C
If we're going to build out intelligent day-2 operations and management infrastructure capabilities, we need to know, when you are using multiple clusters in potentially different geographic locations and you instantiate a new application, where it should be placed. We need to know the valid platforms to which it could potentially be migrated, for reasons of disaster recovery and high availability, or to move it closer to where it's needed geographically, among other reasons.
C
We have to have this deeper understanding of what's happening in order to meet those needs. And then finally, telemetry: telemetry at the scale we're talking about, and across multiple clusters, is something new that we're working on at Red Hat and with Kubernetes, so we can talk a little more about that. Now let's break the key challenges down still further. You saw the enabling technologies in that middle column, but what exactly are those four?
C
Let's talk a little bit about those. Multus is the first one. One of the key things that our telco NFV customers have told us they require is the ability to have multiple network interfaces, for their traditional VNFs as well as for the ones deployed as containers, or CNFs. In Kubernetes, there has only ever been a single interface per pod, because quite frankly there hasn't been a need for anything more than a single interface.
C
But for these new use cases we had to essentially enable this in upstream Kubernetes. This isn't a Red Hat-only thing; we always do upstream first. We developed it in conjunction with some of our partners, especially including Intel, to get this into upstream Kubernetes, and in approximately three or four weeks we're actually going to deliver on this promise in a fully supported way.
C
If you want to achieve, for example, line-rate performance, you can't have your application bouncing from core to core; you take a hit every time you do that. The same is true a little bit further down the list: NUMA awareness, which we need in order to keep our memory localized to our applications, and dynamic huge-page allocation.
C
Some of these we've actually already solved in upstream Kubernetes, one of those being huge pages, so we're well on our way on some of these. And then finally, platform telemetry information with collectd and so forth. Some details about where we are on those are on the right-hand side. Regarding the work that we're doing: again, we're collaborating pretty heavily with Intel, but understand that Red Hat is a top-two contributor to upstream Kubernetes.
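The huge-page support mentioned here landed in upstream Kubernetes as a first-class, schedulable resource. As a minimal illustrative sketch (the image name is a placeholder, and the node must already have 2 MiB huge pages pre-allocated), a pod can request them like this; note that huge-page requests and limits must be equal, and the integer CPU request with matching limit gives the pod Guaranteed QoS, which the static CPU Manager policy uses for core pinning:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dataplane-pod          # hypothetical example
spec:
  containers:
  - name: vnf
    image: example/vnf:latest  # placeholder image
    resources:
      requests:
        memory: "1Gi"
        cpu: "4"               # whole CPUs: eligible for pinning
        hugepages-2Mi: "512Mi"
      limits:
        memory: "1Gi"
        cpu: "4"
        hugepages-2Mi: "512Mi"
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages        # backs the mount with huge pages
```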
C
It was a very long process to get to where we are. One of our big successes was producing a version 1 spec for how we're going to do multiple networks, multiple interfaces per pod, inside of a cluster. We finally released that approved version 1 spec at the end of last year at KubeCon, and in our next release of OpenShift we're actually going to provide that capability, more than one interface per pod, in an enterprise, fully supported way.
C
So what is Multus, and how does it achieve that? Let me back up for a moment and explain a little bit about the way Kubernetes works. Kubernetes uses the Container Network Interface, or CNI. It's really nothing more than a specification and a set of libraries for writing plugins that configure network interfaces in Linux containers and pods. By the way, you've heard me use "containers" and "pods" somewhat interchangeably, for those of you who aren't comfortable with that difference.
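For readers unfamiliar with CNI, each plugin is driven by a small JSON network configuration document. A minimal illustrative example for the standard bridge plugin (the network name, bridge name, and subnet here are arbitrary placeholders, not values from the talk):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "br0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
```

The runtime hands this document to the named plugin binary, which creates the interface inside the pod's network namespace and returns the resulting IP configuration.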
C
Pods are really just a logically grouped collection of containers that happen to share a common namespace. So, for example, if you had multiple containers in a pod, they would all share that same collection of interfaces. The way it works is that Kubernetes traditionally has only ever recognized the first interface plugged into the CNI, and this is where the SDN would traditionally live. The SDN gets plugged in there, and whenever a new pod is instantiated, the pod says: hey, how should my networking be configured?
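That shared namespace is easy to see in a pod manifest. A hypothetical two-container pod, in which both containers see the same network interfaces and can reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo   # illustrative name
spec:
  containers:
  - name: web
    image: nginx            # serves on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox          # can reach the web container at localhost:80
    command: ["sh", "-c", "sleep 3600"]
```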
C
It reaches up to Kubernetes for the answer to that question. Kubernetes queries whatever is plugged into the CNI for that networking configuration information, and it is then delivered to the pod on that primary interface, the one called eth0 here. You can probably come up with your own reasons why that's a problem, but a major one is that all of the control and data plane traffic is flowing over that single interface; there's not a whole lot of flexibility there.
C
So what we did, again working with Intel upstream, was enable Multus as a meta-plugin for this Kubernetes CNI interface. Multus enables the ability to create multiple network interfaces per pod and to assign a CNI plugin to each of the interfaces created. On the right-hand side here you see there is now a second interface; you can have a third, a fourth, however many you require.
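Concretely, under the Network Plumbing Working Group v1 spec that Multus implements, each secondary network is described by a NetworkAttachmentDefinition object, and pods opt in via an annotation. A sketch, in which the macvlan master interface, subnet, and names are assumptions for illustration:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net            # placeholder network name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-nic-pod
  annotations:
    # attach one extra interface backed by the definition above
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

The pod still gets its usual eth0 from the default SDN; the annotation adds the macvlan interface alongside it.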
C
In this case we just have two plugins. Notice how they actually plug into Multus and are then multiplexed up to Kubernetes, so Kubernetes didn't have to change: it still thinks there's essentially only one interface, one CNI plugin, but under the covers there are actually multiple being configured. Here, the secondary interface is a macvlan plugin. The name is whatever you want it to be.
C
In this case it happens to be set to net0. We have a number of plugins, which I'll talk about in a moment, that we are going to support in the next release of OpenShift, and that list of plugins is going to grow; the use cases for the types of plugins that are required are plentiful.
C
Let's talk a little bit more about the near-term Multus roadmap. This really opens up a lot of doors for us; it's a fundamental piece that is going to enable a lot of new technology that is required for, or fundamental to, telco, plus a few other use cases. The first plugins that we are going to fully support with the release of OpenShift 4.1 at the end of May are, of course, our default SDN.
C
We are initially targeting two very select cards, the most popular ones from Mellanox and Intel, and we expect the set of stable hardware that we function with and have tested against to grow over time. In the next version of OpenShift, 4.2, which is targeting the September timeframe of this year, we're going to add at least a couple more plugins, including a VLAN-enabled bridge plugin as well as ipvlan.
C
So, where do we go from here? You've heard me mention that having the ability to add new plugins really opens a lot of doors, and you now know a little bit about the technology behind Multus and how it does that. But let me tell you, from the perspective of a product manager, there are a number of use cases that this really helps set the stage for, or in fact might even solve altogether right out of the gate.
C
Some of those are listed here; I stopped when I got to the bottom of the screen, which does not mean there aren't plenty more, but these are some of the ones that are top of mind. For example, functional separation of control and data planes: as you saw, Kubernetes with Multus allows you to have more than one interface, while Kubernetes itself still only really recognizes that first primary interface for all of its control plane traffic.
C
If you use a technology like SR-IOV on the secondary interface, which can be tied directly to a NIC, you can bypass everything in one fell swoop and achieve that line-rate performance. There are other things that need to be solved, as I mentioned, like NUMA awareness and CPU pinning and so forth, in order to get to full line rate, but we can get close to it even without them.
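To make the SR-IOV flow concrete, here is a hedged sketch: it assumes an SR-IOV device plugin advertising virtual functions under the resource name `intel.com/sriov_net`, and a matching NetworkAttachmentDefinition named `sriov-net`; the resource name, network name, and image are all placeholders, not values from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fastpath-pod
  annotations:
    # secondary interface backed by an SR-IOV virtual function
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: dataplane
    image: example/dpdk-app:latest   # placeholder image
    resources:
      requests:
        intel.com/sriov_net: "1"     # one VF from the device plugin
      limits:
        intel.com/sriov_net: "1"
```

The scheduler only places the pod on a node that has a free virtual function, which is exactly the kind of platform awareness described earlier.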
C
So that's one really huge thing. Another thing our customers are telling us they want is greater control over who's allowed to access the hardware on the box, so we're working with upstream Kubernetes on admission controller capabilities that would allow us to define who is allowed to use which hardware interfaces on the hosts themselves. We're also working on dynamic runtime enablement capabilities: the ability to say, on the fly, "I would like to add this secondary or tertiary interface".
C
And to do that in my cluster without essentially a network restart, which is where we're at today, or a greenfield deployment, depending on the technology; we are adding those capabilities upstream. Then link aggregation for network redundancy: a lot of our customers are asking us for additional data plane interfaces that they can tie to NICs, which are in turn tied directly to top-of-rack switches, and they use this for aggregation and redundancy. We're solving that with the Multus technology.
C
This is going to let us deploy different types of network protocol stacks, different SLAs, different capabilities. In combination with the SDN that's being used, we're also going to be able to offer increased traffic isolation, segregation and security through additional plugins, and we'll be able to work on future things such as user-space networking, for example VPP and OVS-DPDK.
C
We're also going to be able to enable things like QoS across the different interfaces. You might want to use one of those secondary data plane interfaces as a direct tie to storage, maybe for backups. We've got a number of customers concerned about that storage or backup process consuming so much of the bandwidth that it potentially slows down or causes issues with the control plane.
C
This is also going to help us enable a forthcoming Red Hat technology: container-native virtualization, basically building on KubeVirt networking. If you're unfamiliar with that, it's the ability to run virtual machines in a container cluster, with the virtual machines being treated just like any other container and orchestrated by Kubernetes. Specifically, one of those plugins, the VLAN-enabled bridge plugin, is going to help get us there.
C
The idea is that if our customers want to move from one SDN to another, they could use Multus to instantiate a secondary interface, plug their next-generation SDN into that second interface, and then establish routes to move the workloads, as they see fit, from one SDN to the other. That's very, very tricky, but we are working on it. We are seeing some early successes, especially for simple deployments, and of course we're not going to rest on our laurels.
C
We got to a version 1.0 specification for Multus in upstream through the Network Plumbing Working Group, a collaboration of, I believe, upwards of around 20 different vendors now contributing. It's an excellent working group if you have not already checked it out. We're building towards specification version 2.0, which will add a lot of functionality to help us get to the things we can't quite get to yet with 1.0.
C
So, in summary, there is quite a bit of network transformation going on, and we're trying to enable the parts and pieces that are required for that transformation. In the new container world, we're trying to enable those things that have become an expectation, if not a requirement, in traditional data centers and deployments for telco NFV.
C
Real quickly on that: we've always focused OpenShift on the developer experience, and we excel at that. When we acquired CoreOS, we actually got a lot of new technologies that are bringing an enhanced operational experience to OpenShift as well, so we've essentially now made day-2 operations a focus too.
D
Yeah, so the question I popped into the chat was: I was interested if you could speak to the mapping of these OpenShift developments, the roadmap, relative to OpenStack. I know there's been an effort to integrate OpenShift over into the OpenStack platform recently, and these new Multus developments, specifically the multiple network interfaces, are going to be enabled in the OpenShift roadmap. Can you speak to the roadmap relating to OpenStack?
C
Sure. The OpenShift and OpenStack relationship has a few different styles of deployment. The first one is just plain-Jane, literally OpenShift on top of OpenStack. What you have there is an SDN solution on top of another SDN solution, and that double encapsulation of packets has been an issue for some of our customers. For most of our customers it's really not that big of an issue, but for those for whom it is an issue:
We
actually
used
an
enabling
technology
from
the
OpenStack
side
called
courier
and
what
we
do
is
we
take
courier
and
use
it
as
the
cni
plugin
into
kubernetes,
and
you
can
think
of
courier
as
an
adapter.
What
happens
is
when
a
new
pod
is
instantiated
and
it
asks
kubernetes
for
how
its
networking
should
be
configured
kubernetes
asks
its
CNI
plugin.
How?
What
should
we
do?
Courier
actually,
then
says:
let
me
reach
all
the
way
down
to
Neutron
and
whatever
plugin
it's
using
to
provide
that
information
to
back
to
kubernetes,
so
we
can
configure
the
pod.
C
So in that sense we've sort of collapsed the SDN between the two; we've gotten rid of the double encapsulation. Today that's our best-practices way of doing OpenShift on OpenStack. But specific to your question about how that plays with Multus: there's no reason why Multus couldn't have, as its primary interface, essentially a Kuryr-enabled mechanism for the control plane, and then use another plugin for secondary and tertiary interfaces.
C
It's also technically true that we could use Kuryr the other way around. This is not something we have tested, because this is all really quite new for us; we're just releasing some of the first implementations of it now. But there's technically no reason why Kuryr couldn't be deployed as the secondary data plane interface, while the control plane remains within the OpenShift cluster using the default SDN.
C
Kuryr has a very specialized purpose in life: for the interface being configured, in this case the default being the primary, it reaches down to whatever plugin Neutron is using and uses that to configure the IP and so forth for the pod being instantiated.
B
This is Paul Lancaster, Marc. It's not necessarily an audience question, it's kind of an insider question, but maybe the audience is also interested. For a lot of this work that we're doing, I'm hoping that we're working with partners like Intel to do performance testing that we can share with communities. So I'm wondering if you could maybe speak to what we're doing there, to show how the work we're doing in communities, like Multus and Kubernetes, actually becomes meaningful.
C
It's a great question, Paul. We have a number of different ways that we view that. In fact, many of the numbers that you see published upstream in Kubernetes were actually created by Red Hat. We have a performance and scale team that focuses on exactly that: they stand up massive clusters and basically see at what point things finally break, and we publish those limits upstream for Kubernetes. Then for downstream, for OpenShift, we actually:
C
We move away from maybe more theoretical kinds of numbers to more practical limits of supportability. So there are really two kinds of numbers: the theoretical limit, and the practical limit beyond which a customer may not want to go. And there are reasons beyond performance that might limit our recommendation. For example, even though you might be able to do 500 nodes in a cluster, what is your liability footprint? How many nodes are you willing to put in there?
C
Maybe your risk assessment is that you want to cap that off at, say, 200 nodes. So there are other factors that come into play when we talk about some of the performance and scale figures. The actual performance numbers, of course, are unadulterated; we publish those exactly as measured, working with our partners.
C
They basically feed us hardware, we put it into our labs, and we generate performance numbers that we then publish. We are working very closely today with Intel and Mellanox specifically, on a number of their cards, and those numbers are actually in flight for many of the things we're doing now, including SR-IOV and Multus, so expect to see those published at, or just after, the GA of the technologies.
B
Thanks, Marc. The reason I ask is because a lot of the customer base out there, the potential customer base, really winds up building their businesses on what they ultimately think they can get close to as far as performance is concerned. So I think the work that we do there, and what we publish, really helps us and helps the community.
A
When we get those, we'll make sure we have them presented here at the SIG and have some further conversations about them. Looking at the time now: are there any other questions people might have? Now I'm going to take back the screen for a minute and just sort of walk through things. Marc, I think I found you on the schedule at Red Hat Summit, talking about OpenShift high-performance networking with SR-IOV. Is that you?
C
Yep, it's actually myself and the primary engineer from CTO networking who's working on that; I will be co-presenting with him. He's the brains of the operation, I'm just the product manager. But I'm also going to be delivering a couple of panels on the OpenShift roadmap; it's going to be basically two instances of the same thing, because there wasn't a large enough room.
A
I know, we've had problems with the rooms at Summit; they're not big enough, and Summit keeps growing, the same with the Commons. So we'll look forward to that. I wanted to mention that we are going to have a telco SIG meet-and-greet at the OpenShift Commons booth on May 8th at 1:00 p.m., so prior to your talk. I think the theater you're in is the one on the expo floor, Marc, or is that in an actual room somewhere?
A
So we'll all be out there on the expo floor. If you'd like to come and meet up beforehand, Paul and I will be around the Commons booth in Community Central, also at 1:00 p.m. If you just want to hang out there, that would be great, just to get some face time with some folks. Other than that:
A
We look forward to talking with you again in another month at the next meeting, whose date I haven't looked up; it's the fourth Friday of each month. And if you have a topic, we will be looking for one. I think, Marc, maybe we could get Intel to come in and talk about Multus from their point of view, maybe the Multus roadmap; that might be something that people would be interested in as well.
C
Sure, I'm sure they would be happy to do that. We do share that roadmap. In fact, to a certain extent, Red Hat dictates the roadmap a little bit more, since we're more product-oriented than Intel is; Intel is more research-oriented. So we actually drive that roadmap a little bit more, but we could certainly work together to deliver unified information about it.
A
So I think that's all we have for today, unless, Paul, you have something else up your sleeve there.
A
Then I'm going to let people have all of eight minutes back before their next meetings, or whatever it is they're going on to do. We look forward to seeing most of you at the Commons event in Boston in a few weeks, or maybe at the Commons event at KubeCon EU in May, over in Barcelona, if you're so lucky as to get to come to Barcelona with us.