Description
Come hang out with Olive Power as we look into Running Telco workloads on Kubernetes.
00:00:00 - Welcome to TGIK!
A
Okay, hello everybody, and welcome to episode 139 of TGIK. I'm here with my good friend and colleague, Abhishek Vidra, and we're bringing this week's TGIK live from Asia. So a very early morning here in Singapore, 5 a.m., and so, you know, to anyone from this part of the world who's up: kudos to all of you. I see Yogi, our colleague — Yogi Rampura, good morning to you, Yogi. Not so early for you in India, but great that you can join us. Hello, Josh, great for you to join us. You know, moving from the UK to Singapore, and, you know, with COVID everything just seemed to kind of back up, and now I'm really glad to do it, as I say, with Abhishek. Yeah, Abhishek, it's good to do this session, right?
B
Awesome — this is an awesome feeling. I guess thank you, everyone, for joining us. It's early morning for us, so we are on with our coffee. I guess everybody can get their coffee and be with us.
A
Yeah, exactly. So we'll just say hello: lots of people are joining. Hello, Nadir, nice to see you on here. Hello, Ivan and Joe, nice for you to drop in and catch up. "I made it. I finally made it." Okay, so we'll keep monitoring the chat, but we'll go straight into, you know, what's been happening in the week. And of course, as everybody knows, it's been KubeCon week, so there's lots of good stuff coming out of the KubeCon week.
A
Eric — hey Paul — Eric's from Dallas Fort Worth. Hello, Tuka from Helsinki — wow, great to see everyone. Martin from the Netherlands, Sevi from Istanbul. This is a great, great community we have here. Yeah, so KubeCon happened this week — virtual again this time. The European one was virtual, and this one was supposed to be in Boston, but it's virtual as well. And so, you know, there have been a lot of great talks at KubeCon. If you've got any talks that we haven't mentioned here that you thought were great...
A
You
know
please
drop
them
in
the
chat.
Okay,
so
the
first
one
there.
So
you
know
the
link.
There
is
saying
that
all
the
talks
will
be
on
the
cncf
youtube
channel.
So
that's
going
to
be
a
great
resource
for
us
to
have
a
look
at
when
it's
published
on
the
fourth
of
december,
and
then
we
have
some
cube
com
infographics,
some
summaries
by
jerry
hargrove
click
that
link
there.
A
Okay,
wow
make
that
a
little
bit
bigger.
Let's
have
a
look
and
see
what's
in
there,
so
we
got
some.
Oh
wow.
Look
at
that.
So
we
got
some
little
information
stuff
about
each
of
the
different
keynotes.
That's
really
cool
it's
kind
of
a
bit
blurry
for
me,
so
I'm
not
sure
if
you
all
can
see
that
properly.
But
that
looks
really
awesome.
Go
check
that
out.
A
Awesome. And then "Infrastructure for Entertainment" — this looks like a really great talk. This is from Justin — Justin Garrison, at Amazon — and he's talking about how to create movies and ship them to theaters, as well as what it takes to stream movies directly to viewers. So that sounds like a really great talk, and, you know, a nice kind of mix-up, it seems, in the kind of talks that have been presented at this year's KubeCon. So that's really nice to see as well.

Then there's this one from Tabitha Sable: "PKI the Wrong Way: Simple TLS Mistakes and Surprising Consequences". That looks like an awesome talk. I haven't managed to make any of these talks, actually, that have been selected — I wish I could have. Have you managed to catch any of these? It's a bit awkward, the times, for us, right?
B
Actually, a few of them — but yes, I'll be looking at the recordings later on, of course.
A
Yeah, yeah, yeah — I might catch that summary that we sort of put there, that's going to be out on the fourth of December. That looks great. So yeah, TLS certificates — always fun and games to manage those — and so that looks like a really great talk there. So what else, what else? Oh yeah, Cheryl's keynote, "Are Certifications Worth It?", and that's fairly topical as well, because there's just been a new certification announced by the CNCF, which we'll have a look at in a minute. So yeah, Cheryl talks about, you know, certs. That's kind of a good question, right: are certifications worth it? Do they do what they're supposed to do, which is show a level of knowledge in a particular subject, which you then take and perhaps apply in real-life situations, right? So I think there's a place for certifications — that's kind of my opinion, right. It gives you confidence that you've got a certain grounding in a subject. Otherwise, you know, sometimes I find there aren't many barriers, or there aren't many sort of, you know, steps, almost, that you can kind of tick off and say "I've done that" to show a certain level of knowledge. Without certifications and kind of borders like that, it can feel like you're just trying to learn everything all at the same time, so it's very difficult. So, you know, certifications, in that respect, I think, are super useful.
A
It gives you a little summary PDF, doesn't it? Let's have a look at that — oh okay, that's the actual text of the speech. Okay! Well, that kind of looks like a really good talk, and this one — and then it's got the slides as well.

Okay, so it's all there, actually. So George has done a great job collating exactly what went on in that talk. And then there was a panel session — that looks like a really good, interesting panel session as well, a great lineup on the panel: "Hacking and Hardening in the Cloud Native Garden".
A
Have
you
ever
wondered
how
hackers
think
what
do
attackers
look
for
when
they
approach
a
cluster
and
what
security
hardening
steps
can
stop
them
in
their
tracks?
That
looks
like
a
great
session.
Yes,
indeed,
yeah,
and
then
you
know
the
panels
where
you
can
kind
of
come,
come
and
ask
your
own
questions.
That's
awesome
and
then
this
one
performs
sort
of
sort
of
related
a
little
bit
having
fun
cloud
native
form
with
honk
cto.
So
they've
got
a
honk
cto
sort
of
command
line.
A
A
So
that
looks
that
looks
pretty
cool
and
then
there's
a
video
that
you
can
watch
as
well,
so
awesome
and
then
there's
this
great.
I
was
reading
this
earlier.
There's
this
great
summary
by
jaroslav
pansola.
I
hope
I
pronounced
that
right
where
he
writes
up
about
his
key
takeaways
from
kubecon,
so
yeah.
A
So
this
this
looks
great
and
it's
his
first
q
com,
so
awesome
stuff
and
he
kind
of
obviously
interested
in
service
mesh
as
he's
written
there
and
so
he's
kind
of
done
a
lot
of
a
lot
of
information
about
that.
How
about
you
know
he
talks
about
there's
a
lot
of
offerings
that
have
come
out
in
the
in
the
in
the
service
mesh
space,
and
so
that's
that's
interesting
to
watch
that
space
grow
so
yeah,
that's
a
really
awesome
article,
so
worth
a
read.
A
Okay,
so
there's
some
information
that
are
related
to
core
the
core
kubernetes
project
and
yeah.
So
fair
enough,
a
lot!
A
lot
of
the
folks
are
at
kubecon,
so
you
know
there'll
be
more
and
more
in
this
space.
Next
week
there
has
been
some
articles
in
this
space.
First
and
foremost,
I
guess
kubernetes
1.2
beta
is
released,
and
so
that's
like
super
exciting.
A
So
we'll
click
on
that
and
see
what's
going
on
in
there
so
yeah,
so
you
can
kind
of
you
know,
get
downloads
probably
have
it
have
a
read
of
the
release,
notes
and
see:
what's
changed,
and
so
you
know
lots
of
good
stuff.
In
there
you
know,
bug
fixes
and
changes
in
labels
for
control,
payloads
lots
of
updates.
I
really
need
to
take
time
to
have
a
read
and
see
what's
going
on
in
version,
1.20.
A significant release there. And then there was a CVE posted as well, which is worth catching up on too — it's related to the CSI snapshot controller.

It's an interesting article there. And then, you know, KubeCon happened, so there were lots of announcements last week that were really exciting, and we kind of like this one: cert-manager donated to the CNCF. Yeah, one of our pet projects, so that's great to see — the folks at Jetstack — well, it's a different company name now, yeah, Venafi — so yeah, the cert-manager...
A
...project donated to the CNCF — that's great to see. Jetstack, formerly a UK company, so it was great to see that sort of success story there. And then, yeah, we talked about certification, and there's a new Kubernetes certification coming out, in and around security.

And so if you have a look at the resources around, you know, what's there to prepare — there's a repository here — I had a look at the kind of items and the sort of things that are covered.
A
You
know
that
you
have
to
kind
of
you,
know,
sort
of
study
up
on
or
be
understand
when
trying
to
study
for
this
exam,
and
a
lot
of
it
is,
is
you
know,
as
you
would
sort
of
imagine,
because
the
cka
exam
is
a
prerequisite,
so
you
have
to
obviously
know
the
concepts
that
are
in
that
which
are
fairly
broad
and
varied
in
relation
to
kubernetes.
A
You
know
so
cka
you
know
really
does
cover
most
of
the
topics,
but
the
cks
then
dives,
specifically
into
the
elements
related
to
how
you're
test
setting
up
your
cluster
and
then
how
you
come
along
to
harden
it.
So
you
know
no
surprises
there
that
it's
focusing
on
security,
but
it's
nice
to
see
that
you
know
the
kind
of
individual
topics
that
you
need
to
zoom
in
in
are
highlighted
there
yeah.
A
So
when
you
know
when
I
was
having
a
look
at
the
content
in
a
bit
more
detail
a
couple
of
weeks
ago,
you
know
network
security
policies,
pod
security
policies
and
how
you're
setting
up
your
your
clusters
to
prevent
you,
know
so:
unwanted
traffic
and
how
to
lock
down
your
api,
etc.
So
those
kind
of
things
you
you
know
you
can
imagine
are
kind
of
key
to
sort
of
passing
that
exam
and
then
you've
got
all
the
content
related
to
how
to
study
for
that
exam.
So
that's
really
awesome.
A
Everybody
shouting
out
for
sort
manager
as
well
excellent.
B
So buildpacks are super important, especially from the developer's perspective — getting their code deployed in Kubernetes, irrespective of the language and the frameworks you're using: for example, Golang, Java, Node.js. Buildpacks can be super handy for you in that sense, for developers.
A
Always great to see those kinds of journeys, right? An organization highlighting that kind of journey going to Kubernetes — any issues they had, the environment that they're building, and how they've learned and evolved and grown — they're always really, really interesting articles to read, so definitely catching up on that one. So there's a two-part blog series, and that's awesome. And then Datadog — finally, Datadog has a container survey.
A
So
these
again,
these
are
always
like
really
interesting
to
read
and
I
think
there's
like
11
tips
in
here
11
facts
about
real
world
container
use
and
if
we
look
at
the
top
three
just
so
keep
you
you
know
interested
in
reading
the
rest
of
the
article
first
one
I
thought
is
it
so
kubernetes
runs
in
half
of
container
environments
so
as
an
orchestration
for
containers,
kubernetes
is
in
half
assume
that
means
in
production
right,
so,
okay
or
just
general
containers.
Maybe
the
article
makes
that
more
clear,
okay
and
then
fact
number
two.
A
Ninety percent of containers are orchestrated. So if organizations are running containers at all, at scale, they're orchestrated somehow. There's no manual or sort of scripted automation of containers or anything like that — it's orchestration. And I guess there are key players in the orchestration space that we could probably all guess, yeah: Kubernetes, ECS, Mesos, Nomad. And number three — yeah, number three: the majority of Kubernetes workloads are under-utilizing CPU and memory.
A
Okay,
so
we're
still
kind
of
not
kind
of
getting
the
maximum
benefit
that
compaction
that
we
can
get
and
maximizing
the
utilization
of
our
hardware,
something
that
we'll
kind
of
allude
to
later
on
in
this
talk
actually
have
a
check
right.
A
So
yeah,
it's
timely!
So
that's
interesting
that,
like
there's
still
an
underutilization
there,
so
maybe
still
we're
still
maturing
in
how
to
you
know
size
and
scope
and
and
spec
out
our
environments
when
we're
building
kubernetes
platforms.
So
wow
that's
like
kind
of
a
packed
week.
I
think
there's
a
lot
of
stuff
coming
out
from
qcon,
obviously,
and
and
some
a
lot
a
lot
of
great
articles
out
there.
A
I
guess
coming
up
to
the
end
of
the
year
as
well,
and
so
there's
a
lot
of
kind
of
reflection
on
the
year
gone
by
and
you
know
some
more
stats
onto
how
people
are
using.
You
know
kubernetes
and
containers
and
et
cetera.
So
that's
like
really
awesome
cool.
So
let
me
just
quickly
check
the
comments.
The
chat
awesome
so
there's
some
talk
about
looking
at
rook
and
seth
and
longhorn.
A
A
A
So
it's
a
little
bit
deep,
a
little
bit
more
difficult
to
kind
of
dig
that
out
that
sort
of
gold
stuff.
Okay.
So
what
else?
Okay,
okay
good,
so
the
chat
seems
to
be
ticking
along
quite
nicely.
Okay,
so
I
guess
average
check.
We
need
to
get
into
you
know
why
we
wanted
to
do
tgik
just
because
this
is
an
awesome,
awesome
movements
and
community
to
be
involved
in,
but
then.
A
Secondly,
why
did
we
want
to
do
our
first,
one
on
telco,
telco,
workloads,
running
kubernetes,
so
I'll
just
kind
of
you
know
from
my
part
and
then
I'll,
let
abby
shake
sort
of
take
over.
You
know
we
and
many
navishake.
You
know
work
fairly
similar
grounds
over
here
in
singapore.
Abhishek
is
originally
from
india,
but
he's
based
in
singapore.
Just
like
I
am,
and
you
know
we
we
speak
to
customers
about
and
helping
them
implement.
A
You
know
successful
kubernetes
platforms
and
running
their
applications
successfully
in
that
environment
and
more
and
more,
the
customers
that
we
speak
to
are
telco
organizations
who
want
to
run
their
workloads
on
kubernetes
and
their
workloads.
That
we'll
see
in
this
in
this
talk
are,
you
know,
have
their
own
particular
sets
of
requirements
that
we
have
to
build
into
how
we
build
our
kubernetes
platform
to
be
able
to
support
these
workloads.
These
container
network
function
workloads
or
cnfs,
which
is
what
the
telcos
are
looking
to
run
on.
A
...Kubernetes. And so we decided to start this session on TGIK, and we both believe it's the first of at least one more, if not two more, sessions on running telco workloads on Kubernetes — the kind of options that you have in that space, what different organizations and what different vendors are doing in that space, and trying to maximize the output that the telcos are looking for.
B
What me and Olive basically wanted to do in this session was maybe talk around — maybe show you some cool stuff as to how we can use Kubernetes to build a telco platform, and why Kubernetes actually stands out for the new generation of telco technologies like 5G. We are going to discuss that. I guess, Olive, it would be nice if we discussed a little bit of history, from where we...
A
But I think we'll just briefly say how we got here. You know, all the telcos that we're talking to are talking about rebuilding their infrastructure to support 5G, right? And so, how do we get there, and what sort of architectures have evolved to make the 5G architecture what it is today — and therefore how that impacts how our Kubernetes platforms are built for those requirements. And so we got a...
B
This is my favorite diagram, actually — I kind of like it. So if you look at how the telco — especially the mobile telecom — evolution has happened, you would see that 1G, which was in the early 1980s, was basically analog-based. It was circuit-switched, and the two most important technologies were Advanced Mobile Phone System and Nordic Mobile Telephone.

I guess at this point in time there was nothing digital about the whole mobile technology. Then the early 1990s saw something pretty interesting: we went to 2G, and we went digital. We had voice and limited data, and if you remember the days of the early 1990s — if you had ever used 2G — I guess that was the time when you were using some kind of SMS services.
B
Circuit
switching
was
also
the
part
of
it,
but
then
I
guess
packet
switching
was
introduced,
and
that
was
a
big
leap.
I
must
say-
and
that's
that
was
the
time
where
you
know
we
got
these
kind
of
technologies
like
gsm
and
cdma,
especially
if
you
are,
if
you
have
a
sim
on
your
phone,
you
are
for
a
hundred
percent
using
a
gsm
based
technology
system.
Then,
if
we
move
on
to
early
two,
thousands
or
actually
2010
we
go
to
3g
and
3g
was
pretty
interesting
in
many
ways.
B
It
was
video
plus
data
and
there
was
certain
elements
of
circuit
switching
but
in
3gb
actually
moved
towards
complete
packet
switching
networks,
but
there
were
certain
little
parts
of
circuit,
switching
which
which
we
were
moving
from
2g
to
3g,
and
then
the
the
4g
is
what
we
stand
on
today.
It
has
voice
broadband
data,
video,
it's
it's
completely.
Packet
switched
network
and
it
technologies
like
wi-fi
max
and
lte
lt's,
long-term
evolution
for
for
4g
kind
of
technologies
and
which,
which
the
ones
we
are
using
today.
B
Also
and
again,
we
are
heading
up
towards
5g.
I
guess
I
guess
everybody
is
hearing
the
buzz
about
5g
networks
and
5g
has
lost
promises
as
far
as
bandwidth
is
concerned.
As
far
as
the
iot
implica
implementations
and
implications
are
concerned,
so
we
can.
B
We
can
talk
a
lot
about
what
are
the
use
cases
of
5g,
but
what
me
and
always
think
was
we
will
not
more
discuss
on
the
use
cases,
but
rather
the
architecture
and
what
makes
it
and
what
makes
it
makes
spike
g5g
and
how
kubernetes
can
play
a
very,
very
important
role
in
that.
What
do
you
think
always.
A
Yeah
yeah
sounds
good,
I
mean
you
know.
We
all
know
when
we
talk
about
taco.
First
of
all,
it's
all
those
acronyms
right,
but
but
that
your
page,
their
outlines,
you
know
at
least
explain
some
of
those
actions
right
so
yeah
so
I
mean
so
yeah.
It's
like
the
evolution
of
like
you
know
the.
A
I think we're going to talk about it a little bit — how the architecture has really changed. So we're talking about telco organizations, how they construct their data centers, and how the architecture has changed from the 3G time and space through to 5G — that's where the two main architecture changes were, and we've got some diagrams of them to show that. Because these architecture changes can then dictate how telco organizations are building their data centers, to be able to support the workloads based around this architecture. And so again, sorry for the acronyms and stuff — we're just using telco speak; we're trying to get fluent in it, right? So yeah. So this is the 3GPP architecture, something similar...
B
To
that
so
now
I
guess
we
have
seen
the
evolution,
and
now
I
guess
we
we
can
quickly
discuss
about.
How
does
4g
architecture
looks
like
because
once
we
understand
and
we
get
a
feel
of
what
what
is
og
architecture
and
that
is
basically
the
evolved
packet
core
now
epc
as
we
call
it
on
the
specifications
of
3gpp.
B
We
will
be
very
good
position
to
actually
understand
the
5g
architecture
and
then
we
would
be
in
a
very
great
position
to
really
understand
where
humanities
plays
the
role
in
in
5g.
So
I
guess,
let's
have
a
look
at
this,
so
in
what
I
would
try
to
do
is
I'll
try
to
break
down
break
it
down
in
the
most
easiest
way.
B
I
can
explain
this,
so
if
you,
if
you
can
see
as
you
on
your
on
your
left
hand,
side,
it
is
u-e-u-e-e,
is
user
equipment
and
if
you,
whenever
you
make
a
call
to
somebody,
your
your
the
signals
out
of
your
phone
goes
to
the
nearby
ram
network
and
then
the
the
signals
are
sent
to
your
core
in
data
center
in
the
code
data
center.
It's
super
important
to
know
this.
B
That
you
have
something
called
as
mme
and
if
you
look
at
enemy,
mme
is
mobility
management
entity,
and
you
would
see
there
are
two
important
kind
of
pathways.
One
is
the
dotted
red
one.
The
other
is
the
blue
one.
So
with
the
dotted
red
one,
what
we
are
trying
to
say
is
control
plane
like,
for
example,
if
somebody
wants
to
call
somebody
and-
and
there
is
the
signal
from
ram
going
to
the
core
that
is
like,
for
example,
here
it
is
a
mobile
phone.
B
It
goes
to
your
ram,
that
is
radio
access
network,
and
then
the
signal
goes
to
your
code
to
process
it
and
set
it
further.
The
first
thing
the
first
place
where
the
signal
comes
in
is
actually
mmv.
Mma
is
actually
responsible
for
multiple
things
and
but
one
very
important
thing
which
mme
is
actually
responsible,
is
for
a
session
creation
creation
of
session
on
which
the
data
could
be
sent
from
one
end
to
another.
So
enemy
talks
to
hss
hsas
is
home
subscriber
server.
B
That
means
are
you,
it
would
authenticate
you,
it
would
authorize
you,
it
would
see
that.
Are
you
having
those
plans
which
or
not?
Are
you
having
all
those
kind
of
bandwidth
with
you
to
even
make
a
call
or
send
that
particular
data
or
take
or
maybe
use
a
particular
service
on
the
internet?
And
if,
yes,
it
gets
a
response
from
hss
and
then
mme
actually
creates
a
session?
B
There
are
a
lot
of
other
things
also
which
go
on,
but
I
will
I
would
kind
of
skip
over
it
and
just
talk
about
something
pretty
important
there
and
then
what
mme
does
is
it
creates
a
session.
B
If
you
look
at
the
s11
connection
here,
it
creates
a
session
with
serving
gateway
and
once
it
creates
a
session
with
the
serving
gateway,
then
through
lte
node,
the
data
is
sent
via
the
serving
gateway
to
the
pdm
and
basically
pdns
and,
if
you
say,
p
gateway.
They
are
packet
data
network
gateways
primarily-
and
this
is
the
place
through
which
your
package
will
be
routed
to
the
external
internet
or
maybe
to
ims
service
or
to
any
other
vpn.
A
And I think it's worth pointing out, Abhishek, that the EPC is basically — you know, if we represent that as the data center that the telco organizations build to run this software — this software exists as virtual network functions, right? So a lot of the implementations on 4G were built around virtualization — virtualized workloads on various different types of virtualization, OpenStack, vSphere — and running these...
B
Right
now,
these
these
are
basically
network
functions
which
are
used
to
actually
kind
of
move,
your
data
in
from
the
core
to
the
larger
internet
or
the
ims
or
the
vpn
services.
Now,
if
we
look
at
the
history
again
a
little
bit
of
history
again
so
sorry
for
talking
about
history
again,
so
the
if
you
look
at
3g
and
and
4g,
these
functions
were
actually
realized,
especially
when
people
in
the
3g
world.
B
And
so,
if
you
are,
if
you
were
a
telecom
provider
and
if
you
get
into
your
data
center
or
anybody
gets
into
the
data
center
or
telecom
provider,
they
would
see
different
boxes
of
different
oem
vendors
with
different
technologies,
all
together
and
everybody
coming
together,
making
a
very
high
brain
data
center,
with
very
different
machine
machines,
very
different
softwares
embedded
into
them,
and
and
of
course,
since
there
there
are
different
machines.
There
are
different
architecture.
There
are
different
softwares,
which
embedded
into
them.
B
The
integration
was
super
tough
because
maybe
a
product
from
one
oem
for
hss
may
talk
to
a
product
of
another
oem
for
mme
on
very
different
protocol
sets,
and
then
the
protocols
may
not
be
standard.
So
these
were
some
very
serious
challenges
which
any
telecom
provider
was
facing
in
in
3g
and,
of
course,
even
when
4g
actually
started.
B
So
one
idea
was
that
how
can
we
standardize
at
least
the
hardware
when
we
can
run
these
functions
so
to
standardize
that,
or
rather
to
you
know,
have
a
standard
hardware
in
your
data
center
rather
than
having
different
kind
of
oem
hardwares?
The
idea
of
a
vnf
comes
into
picture,
that
is,
virtual
network
functions,
so
to
go
down
to
the
most
basic
idea.
B
The
idea
is
that,
instead
of
having
m
these
embedded
softwares
or
dedicated
different
kind
of
architecture
of
machines,
how
about
we
have
we
deploy
them
as
virtual
functions,
or
rather
processes
running
in
virtual
machines
on
and
standard
x86
architecture,
machines.
A
So
the
telco
orgs
weren't
immune
to
kind
of
trying
to
standardize
their
hardware,
so
they
could
run
on
any
x86,
as
you
say,
rather
than
having
to
be
beholden
to
any
particular
sort
of,
perhaps
expensive
type
hardware
from
one
specific
vendor
right
like
like
many
other
sort
of
organizations
as
well,
the
telco
organizations,
you
know,
let's
virtualize
our
workloads,
so
we
can
run
them
on
any
backend
hardware
and-
and
so
so,
a
lot
of
organizations
have
gone
down
that
path,
as
you
say,
running
a
lot
of
their
work.
A
Vnfs
on
on
on
openstack
and
things
like
that,
and
so
you
know
that
I
mean
this
is
the
situation
where
we
find
a
lot
of
tackles
in
today
right.
B
Absolutely
is
exactly
the
situation
we
find
a
lot
of
times,
and
this
was
this
was
a
great
move.
You
know
in
in
in
a
way
that
now
you
can
manage
your
functions.
Vnf's,
you
know
on
standard
architecture,
it's
it's
more
about
it's
more
software,
driven
than
heartbeat
little
approach,
it's
it's
virtualized!
So
things
are
great.
It
helps
you
skin
to
a
certain
limit.
Also,
so
things
were
good.
Things
were
nice,
but
now
we
are
talking
about
5g.
Isn't
it
yeah?
B
No
4g
4g
is
great
and
yeah
and
4g
is
here
to
stay.
It
will
stay
for
certain,
maybe
maybe
a
decade
more,
if
I'm
not
wrong
with
5g
coming
in,
but
the
5g
is
already,
I
guess,
in
specifications
and
actually
in
reality
in
certain
locations
in
the
world.
So
if
you
look
at
5g
and
you
look
at
this
particular
architecture
first
again,
I
would
like
to
also
tell
you,
though,
these
were
those.
These
are
virtualized
networks.
B
Those
those
now
like
mme
hss
packet
gateway
can
be
realized
as
virtual
machines
or
or
processors
running
on
virtual
machines.
The
idea
was
that
this
particular
architecture,
they
being
virtual
machines,
was
not
highly
scalable
or
maybe
suitable
for
the
kind
of
bandwidth
we
are
expecting
and
the
kind
of
scalability
which
we
are
expecting
under
5g,
and
if
you
see
in
5g
we
are
talking
about
the
bandwidth
of
like
say
gbps.
We
are
not
talking
anything
lesser
than
that.
We
are
talking,
maybe
trade,
bandwidths,
so
the
same
architecture.
B
With
the
same
idea
I
got,
maybe
I
would
say
reworked
or
re-architected
into
microservices.
B
The
reason
was,
of
course,
if
you
go
into
microservices
architecture,
we
can
of
course
scale
much
faster
than
vnf,
or
rather
virtual
machines,
and
more
than
that,
this
architecture
was
was
very
tightly
coupled.
That
means
there
were
certain
protocols
with
which
enemy
was
talking
to
hss,
which
were
different
from
when
mme
was
talking
to
serving
gateway.
Then
again,
there
was
no
scope,
no
way
when
enemy
can
directly
reach
to
packet
gateway
is
required,
so
the
architecture
of
4g
is
and
was
in
a
way.
B
If
you
look
at
it
was
little
tightly
coupled
this
architecture
was
re-thought
in
terms
of
microservices
and
the
following
architecture
of
5g
was
introduced.
B
If
you
look
at
now
same
same
components,
I
actually
have
got
divided
into
into
microservices,
using
the
fundamentals
of
micro
services
and
12
and,
of
course,
the
12
factor,
also
in
many
in
many
many
many
sense,
and
now
these
micro
services
can
of.
Of
course
you
can
see.
B
This
is
a
standard
bus
architecture
in
which
the
micro
services
are
actually
re,
redone
and
and
each
micro
services
now
talks
to
another,
unlike
the
4g
on
a
standard
protocol
that
is
http
and
every
microservice,
and
this
particular
architecture
actually
has
a
very
mandatory
specification
for
being
api
first.
So
now
you
can
access
your
microservices
through
apis
and
this
particular
architecture
becomes
more
and
more,
I
would
say,
super
flexible.
B
Your
go
to
market
like,
for
example,
if
you
need
to
change
in
this
architecture
in
4g,
if
you
had
to
change
by
any
chance
hss,
you
would
need
a
complete
shutdown
of
this
particular
core
implementation,
but
as
far
as
we
know
how
microservices
and
and
what
are
the
benefits
of
microservices
I'll
not
get
into
all
that
idea
of.
Why
microservices?
And
if
you
change
one
microservice,
the
other
mic,
you
would
not
need
to
change
the
other
micro
services.
B
I
believe
you
all
know
about
it
somewhere,
so
so
this
these
all
benefits
come
into
picture
now,
if
this
is
microservices,
what's
the
first
class
or
what's
what's
the
place
where
the
microservices
should
live
on,
they
can
live
on
vms
too.
I'm
not
saying
no,
but
what's
the
best
place
where
the
microservices
should
live.
B
So
I
guess
that
should
be
containers
and
then
there
should
be
an
orchestration
or
maybe
some
orchestration
tool
to
manage
that
containers
and
what
good
then
cuban?
It
is
yeah.
A
A sort of path by the telcos to, put simply, move from applications running on VMs to a microservice architecture running on containers, on Kubernetes, right?
B
Absolutely
absolutely
and
that's
where
that's
where
the
whole
idea
of
of
kubernetes
comes
into
picture.
That's
where
the
whole
idea
of
c
and
f
comes
into
picture,
this
cloud
native
network
functions,
and
so
we
move
from.
We
make
a
move
from
vnfs
to
cnfs
in
in
especially
in
5g,
but
again
just
to
be
pretty
clear.
B
It's
not
it's
not
that
that
it's
all
micro
services
in
all
kubernetes
many
times
you
will
find
a
lot
of
epc
implementations,
which
would
have
some
bnf's
with
some
combination
of
cnf,
because
we
need
to
understand
is
that
we
are
evolving
still.
So
we
are,
we
are
moving
from
4g
to
5g
and
even
we
are
evolving
the
5g
architecture,
but
in
in
the
literal
terms
the
5vr
5g
architecture
actually
would
look
something
like
this
in
in
the
microservices.
This
is
more
of
the
literal
terms.
B
Well,
so
now
we
have
the
motivation.
Now
we
know
why
cuban
it
is
is
is
important
for
telcos
and
where
we
are
coming
from
now.
What
we
wanted
to
do
and
what
me
and
oliver
was
thinking
was.
We
were
thinking
that,
yes,
there
are
cnfs
like
like
amf
on
your
screen,
a
usf
smf
they
are.
These
are
all
cnfs
which
can
be
deployed
on
kubernetes,
but
for
telcos
for
telcograde
kubernetes
deployment
of
cnf.
B
B
You
can
bring
in
the
cnf
suit
from
ericsson,
maybe
from
any
company
like
samsung,
or
maybe
you
can
use
the
open
open
source
cnn
suits
also,
but
if
there's
a
kubernetes
cluster-
and
you
would
like
to
deploy
these
cnns
and
you
would
like
to
use
the
power
of
of
kubernetes,
then
I
guess
this
is
our
motivation
of
the
talk.
This
is
what
we
want
to
talk
about,
how
we
can
create
a
humanities
platform
which
enables
tech,
co
workloads.
What
do
you
think.
A
Yeah,
so
I
mean
you
can
see
from
that
diagram
straight
away
that
the
some
of
the
requirements
for
these
microservices-
you
know
you
know,
test
test
the
functionality
to
the
limit
of
you
know
current
container
configuration,
and
so
we
talked
about
the
data
plane
and
control
plane,
and
these
microservices
will
need
to
be
able
to
take
input
from
both
of
those
and
so
having
multiple
virtual
mix
on
a
vm
is
fairly
trivial.
A
We
need
to
configure
you
know,
cnet
to
have
multiple
interfaces
as
well
onto
our
containers,
and
so
that's
not
out
of
the
box
functionality
right
and
so
that's
the
first
open
source
project.
We're
going
to
have
a
look
at
that
allows
us
to
configure
the
containers
for
multiple
input
interfaces
so
that
they
can
live
comfortably
in
an
architecture
like
this,
where
they're
processing
traffic
from
both
the
control
plane,
which
is
kind
of
in
and
around
you
know
the
metadata
associated
with
the
user
equipment
and
then
the
actual
processing
of
the
data
as
well
right.
A
So
two
very
different
sort
of
network
interfaces
to
process
two
very
different
types
of
traffic,
yeah.
B
So
yes,
so,
as
oliver
said,
you
know
we,
we
know
that
we
need
two
clear
segregations
in
in
one
in
control
plane.
The
other
is
in
the
user
plane
and
on
vnfs,
when
we
were
on
4g
things
very
easy,
you
can
create
an
interface
but
in
containers
by
default
containers
boot
up
always
with
a
single
interface,
or
rather
the
network
interface.
So
I
guess
only
for
this
particular
implementation.
We
are
using
multus
by
a
yeah.
B
We
are,
we
are
using
the
multis
project
to
implement
a
container
multi
multi
interface
containers.
So
this
is
basically
our
overview
of
this
particular
diagram.
So
you
see
a
cluster,
the
master
plugin
here
you
know
for
the
api,
cubelet
and
so
on,
but
on
the
pod
you
can
create
net
zero
net
one
using
the
multis
we'll
just
hack
around
it,
a
little
bit
in
in
short
while,
but
the
basic
idea
is
to
have
multiple
interfaces
more
than
one
interfaces
on
container
yeah.
B
So the end goal is, of course, to have this kind of architecture — to enable anyone to implement this kind of architecture. But to do that, all these CNFs will, somewhere or other, require multiple network interfaces. What we are going to do is make Kubernetes aware that if a CNF is deployed in such a way that it requires an extra interface, the underlying platform should enable that particular container to have it.

So Multus is the project we are implementing here. This is the Multus page; I would suggest everyone look at the quick start — it's pretty straightforward.

I couldn't help myself, so I've done a few things already, but we'll verify them quickly. This is basically the getting-started flow: what you would generally do is clone Multus and deploy the Multus DaemonSets.

One important requirement is that you need a CNI plug-in already running on your Kubernetes cluster. What we have done is deploy Calico as our CNI plug-in, and on top of that we are deploying Multus. So let's see what happens — this is my screen.

What I've done is git cloned Multus here — you can see I've already cloned it — and then nothing very special: I just wanted to run the Multus DaemonSets, so I applied the Multus DaemonSet YAML. Let's verify whether I do have the Multus DaemonSet pods running in my kube-system namespace.

Yes, right — these are the DaemonSet pods running on every node, or rather on my worker nodes. Just to show you: this is my cluster, these are my worker nodes, and the Multus DaemonSet pods are running across them. One nice way of testing it is to check whether the Multus config has actually been created in /etc/cni/net.d on one of your worker nodes.
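For reference, the generated Multus configuration in `/etc/cni/net.d` on a Calico cluster looks roughly like the sketch below — this is illustrative of the shape of the file, not the exact contents from the demo cluster; Multus delegates the default network to the pre-existing CNI plug-in:

```json
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "name": "k8s-pod-network",
      "type": "calico",
      "ipam": { "type": "calico-ipam" }
    }
  ]
}
```

The `delegates` entry is whatever CNI config Multus found on the node when it installed — Calico here, Flannel or Antrea elsewhere.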
B
Right, I'm in /etc/cni, and under that we look at net.d. Inside net.d, once Multus is installed, you would see the multus.d folder, and inside it the multus.kubeconfig file — which is a sign that Multus is actually deployed and ready. So now the Multus DaemonSets are deployed and present for you. But that doesn't mean that we can go and deploy any pod with multiple interfaces just yet.

I've already done the next step too, but we'll quickly discuss it. After deploying the DaemonSets, what I do is deploy a network attachment definition. So if I do a get on those —

— you'll observe that I have created a few network attachment definitions here. Let's try to examine the first network attachment definition; I'll describe it.

They are implemented by the CNI, and that's one of the ways you can tell your CNI: "hey, I want to extend things — I want to do certain things according to this configuration." Network attachment definitions allow you to use the CNI capabilities and extend them in certain ways. That is exactly what we are trying to do.

We are creating certain network attachment definitions which, when used by any pod — any CNF — in the future, allow it to create an additional interface that ultimately connects to the underlying CNI plug-in. That is Calico, which is already present in this cluster.
B
So if you look at this, what I've done is give a subnet — a range — wherein if any CNF comes in and wants an interface, it will get one in this range. I've set the route and I've set the gateway. So now, whenever a legitimate CNF is deployed and this network attachment definition is available, it will get an IP from this range as a second interface.
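A network attachment definition of the kind being described — a macvlan attachment with a static IPAM range, route and gateway — looks roughly like this. It's a sketch following the Multus quick start; the name, master interface and addresses are illustrative, not the exact values from the demo:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [ { "dst": "0.0.0.0/0" } ],
        "gateway": "192.168.1.1"
      }
    }'
```

The `config` field is a plain CNI configuration embedded as a JSON string, which is why any CNI plug-in can be used as the second network.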
B
Rather, I would say it works like this. This is the network attachment, and if I need a CNF — or any pod — to use this network attachment to create an additional interface, what I would do is something like this: say I create a sample pod, and I mention my network attachment definition here —

— as an annotation. Once I do that, my container will of course come up with the default interface, plus an interface in the range which I just showed you. This is the first step of —
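That pod-side reference is just an annotation. A minimal sketch — assuming an attachment named `macvlan-conf`, which is illustrative — would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    image: alpine
    command: ["/bin/sh", "-c", "sleep 3600"]
```

Running `ip a` inside such a pod would then show both the default eth0 from the cluster CNI and an additional net1 from the attachment.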
B
— of a telco-grade platform. To make a telco-grade platform, you deploy Multus. The deployment of Multus rolls out DaemonSets across all your nodes and configures Multus with your CNI plug-in on each of them. You can go and verify it, as we just showed you. Once that has been done, you create a network attachment definition which says: if there's a pod which refers to this particular network attachment definition in its annotation, then what range should be available to that particular pod — plus all the different configuration you would like to mention there.

I've also played with Antrea, of course — so yes, I guess Multus works with either.
A
So basically, we're laying the network foundation for any CNF workloads by configuring our Kubernetes API and our clusters.
B
So what we have done is something very interesting. We have taken standard Kubernetes, we have deployed a CNI network plug-in — that is Calico — and after that we needed multi-homed containers. How can we achieve that? We deployed Multus, which deployed DaemonSets to all my nodes and configured them at the node level. Then all I did was stitch it together.

I wanted to give a reference to my future pods — the pods which will be deployed as CNFs — for how to find the underlying Multus implementation on my CNI plugin. So I created a network attachment definition for that.
A
A big part of what we're talking about with CNFs is data, right — throughput — and so that's the next step in prepping your cluster, if you like, for being able to run CNFs.
B
Indeed. Now, once we all understand the idea of network attachments — please take note of the idea of network attachments, which I just showed you — if we go back to our diagram again, you would see there is, of course, a control plane and a data plane, and we have created multi-homed containers for that. Now, of course, for us the bandwidth matters in telco.

If you are creating a telco-grade platform, your bandwidth matters a lot, and we know that we need to deploy our telco workloads in containers, thanks to the kind of architecture we just agreed on. So we need containerization — and most probably, in many of your cases, your Kubernetes will actually be deployed on a virtualized environment. It may be OpenStack, it may be KVM or RHV, or — in our case, which is what we tried — vSphere; we used VMware virtualization for that.
A
So Abhishek, I'm going to interrupt you there. Let's just talk about deploying Kubernetes on a virtualization layer — let's keep the conversation to telcos for a second — deploying Kubernetes on a virtualization layer as opposed to Kubernetes directly on bare metal.
B
Absolutely. In most cases you will find that you have your Kubernetes on a virtualized environment. Once you have that, you realize that you are not only virtualizing your compute, but also your network and storage — so you will most probably end up with two levels of virtualization in the network, and that can kill your bandwidth.

Even if you have a great infrastructure in your backbone network, with NICs of very high bandwidth — maybe 50-plus Gbps — you would still find that, since these networks are virtualized, you would not be able to achieve the kind of bandwidth which your underlying network could give.

For example, if your underlying network gives you X bandwidth, with virtualization that becomes less than X by some factor, and when it goes up to the Kubernetes level — when the CNI is used — it gets decreased by a further factor.
A
Yeah, so I guess I was referring to: okay, we've got a virtualization layer, and we're going to come along with another layer in between — our Kubernetes platform to run our containers, and the container layer — so there are all these layers between the application running in the container and the underlying hardware it needs to access.

Given we're putting all these layers in between, let's briefly discuss the option: why do I need to run my Kubernetes platform and telco workloads as containers on a virtualization layer, as opposed to running directly on bare metal and perhaps eliminating some of those layers we now have to traverse — or somehow figuring out a way around them, so our container can directly access the hardware?

You know, we're talking about things like scale, right? The virtualization layer gives you that; a bare metal implementation sort of doesn't. But could that be offset by the scale that the Kubernetes and containerization layer gives you? You still can't really scale the underlying hardware, right — so that's where that extra machine virtualization comes into play. Is that how you see it as well?
B
Yes, exactly. Yes, we can deploy Kubernetes on bare metal — there's no problem with that — but again, the scale factor matters, and the elasticity of the environment also matters, and with virtualization coming into the picture in this scenario, the elasticity of your environment actually increases multi-fold.

So yes, virtualization — and hence, for the example we wanted to show, we had a choice of deploying directly on bare metal, but we chose not to. We chose a virtualized environment to deploy Kubernetes, to show how you can create a telco-grade Kubernetes platform on a virtualized platform of your choice and still achieve that bandwidth.
A
Yeah, so we're getting the best of both worlds: all the flexibility we could want in terms of scale, by having that virtual machine layer, as well as the ability to harness the bandwidth provided by the underlying NIC hardware — the actual physical NIC.

So that's the technology we're going to look at next. We've looked at the networking space and how we've addressed it with Multus, and now we're going to look at: okay, how can our containerized workloads harness the actual physical configuration of the NICs on the physical hardware, and not lose any latency by having to go through all these virtualization layers?
B
If I do that, I'll actually be able to get the bandwidth which my underlying network is providing — in short, that's the best way to talk about a high-throughput data plane, or rather data transfer. So the technologies here are things like SR-IOV. This is the page for SR-IOV, and for technologies like SR-IOV and DPDK we are using Intel's implementations; there are other implementations out there from different OEMs.

So if I can somehow develop this kind of ability — where my containers can go straight down to my physical network and harness it — then I can get my bandwidth. How can we do that, and what's the technology which does it? Different hypervisors have different ways of enabling SR-IOV, and when you enable SR-IOV, what you are actually looking for is a certain number of VFs —

— that is, virtual functions — exposed on your machine, on your VM. To break it down further: in our example we are using ESXi, or rather vSphere. On your ESXi host you need an SR-IOV-enabled NIC card. There are certain versions of SR-IOV-enabled NIC cards which are required — you can find them on this page — so you need to check that.

They need to actually be there on your hosts. Once they are, we can use them. For our example we're using vSphere, so I'll quickly show you: once those are installed on the host, how you can quickly configure your VM with an SR-IOV network interface. It's just about adding an interface. So I go in here — this is one of my VMs, where I have enabled SR-IOV.
B
What we have done is not at all rocket science. We have added an SR-IOV-enabled adapter — the adapter type is SR-IOV passthrough — and this is already installed on my hosts and configured on vSphere. There's dedicated documentation on how you can quickly enable SR-IOV on VMs.

And if I come back to my diagram again: these virtual functions, which are exposed on my VM, can actually be consumed by the CNFs — if a CNF is somehow able to attach itself to the underlying VF, that is, the virtual function.

Thanks to SR-IOV we got the virtual functions enabled. It's a very interesting topic — what virtual functions are, how virtual functions get enabled by SR-IOV, and what happens at the NIC card — but we will reserve that for some other talk; otherwise it will all go in a very different direction. So all you need to know is: we need SR-IOV enabled on the VMs, as I showed you; they will expose VFs; and once the VFs are exposed —
B
— so the SR-IOV project we are just looking at here — and note, this is an SR-IOV plugin.

What it means is that if you deploy the SR-IOV plugin on Kubernetes, it will enable any prospective CNF to find the underlying VF and attach to it, so that it can actually talk directly to the underlying NIC — bypassing the host OS, bypassing the hypervisor directly — as I show here. If you look at these yellow lines — the purple lines, rather — the purple line is my data path.

So basically, what we want to do is enable SR-IOV, expose VFs, and deploy the SR-IOV plugin, so that my prospective container network functions — which I'm going to deploy as microservices — will be able to find these underlying VFs and send the data. That's the whole rocket science. But what's more interesting here — there's one more thing which we need to note:
B
Every CNF which is deployed this way should have the SR-IOV drivers built in, so that it is able to attach to this particular plugin and then talk to the underlying virtual function. So we'll try to do this. I just showed you how we enabled SR-IOV at the vSphere level; let's look at how we can promote it up to Kubernetes.

Please remember: on Kubernetes, SR-IOV is just a plugin to actually find the underlying VF, the virtual function. Another way of looking at it, Olive, I believe, is this: for example, you have a pod, and you see net0 and eth0 —

— that is courtesy of Multus, which we just did — and then you see the SR-IOV plugin, which gets embedded in your CNI implementation. That SR-IOV plug-in enables you to find a virtual function which is already exposed on your virtual machines, because you have configured your virtual machines for SR-IOV, as I showed. And if you see this red data path: your pod can talk, via one of its interfaces, directly to the underlying NIC. That's the basic idea.

Again, there are subtle differences — before I go and show this — there are subtle differences between SR-IOV and DPDK, but from an implementation point of view, SR-IOV is a great technology when you're talking about north-south bound traffic, and DPDK is all about east-west bound traffic. These technologies each have their edge over the other, but the idea is pretty similar: you want to get down to your backbone network and get the real bandwidth which your backbone network gives.

So let's quickly see how this looks on Kubernetes, and enable Kubernetes for SR-IOV and DPDK.
A
So Abhishek, we've got a question in the chat: can you touch on whether this is somewhat similar to the work being done on network mesh — is that similar to a VF running at lower transport protocols?
B
Yes, it is somewhere similar to network mesh in many ways. I would like to talk more on that — in what context are we talking about network mesh? That would actually matter a lot for discussing it further.

But this is somewhere similar. The whole idea is that you are trying to get back to your bare-bones network — the backbone network, I would say — and capitalize on the bandwidth it gives, which, of course, the virtualized networks will never be able to give. And you being a telco, this is absolutely primary for any platform in telco.
B
Of course, yes, it is a CNI implementation — that's true. There will be a CNI plug-in; if you see here, for a change I've put in Flannel instead of Calico. You can use Flannel also, and, of course, that CNI also has SR-IOV plug-ins. So again, these are all plug-ins — if you have to look at it this way, these are all plugins to the CNI itself.

So shall we quickly do this?
B
Again, as we did with Multus, the idea of doing it is pretty similar in implementation — and I'm a person generally more interested in the why than the how — but still, for SR-IOV, what we did was the same playbook. As we talked about, you can go in here and read about SR-IOV in detail.

I do have an SR-IOV CRD, so I will be creating certain CRDs for this — basically, it ends up creating a network attachment definition, like you saw with Multus. So again the idea is simple: we will deploy this, and DaemonSets will be created, because you need something —

— you need some process running to identify the underlying VFs, because the VFs will be exposed on the worker node VMs, and you require DaemonSets running on each worker node so that they are aware of — or can actually discover — the underlying VFs. That is what your DaemonSets do. And then the recipe is again the same.
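The device-plugin DaemonSet discovers VFs based on a resource list it is given. As a rough sketch — the resource name, vendor ID and driver names below are illustrative assumptions, not the demo's actual values — the SR-IOV network device plugin is typically configured through a ConfigMap like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_sriov_netdevice",
          "selectors": {
            "vendors": ["8086"],
            "drivers": ["iavf", "ixgbevf"]
          }
        }
      ]
    }
```

Each matching VF on a node is then advertised to the kubelet as an allocatable extended resource that pods can request.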
B
You need to create a network attachment definition, so that once that is created, you can refer to it from your pods — from your CNFs. So you can say: okay, I'm now deploying a CNF abc, which needs to be SR-IOV enabled — and what I would do is —

— again, if I look at the network attachment definitions: these are the network definitions, and just now we saw one of them — that was the macvlan-conf, right? We did that for Multus. Now we are going to do the same thing for SR-IOV. I have created a network attachment definition for sriov-net1, but before doing that, what I did was use these YAML files to deploy the SR-IOV DaemonSet. So first have a look at the SR-IOV DaemonSets.
A
Just to be clear: is the network attachment definition created by the Multus or SR-IOV install, or is it something that you have to define? We have to define that?
B
We have to define that and we have to deploy that — which is the reason we were describing it. So if you look at a standard Multus network attachment definition, it would look like this.
B
If you're doing it for SR-IOV, you have to deploy the DaemonSets for SR-IOV, then create the respective network attachment definition and deploy it — I can create it in one file.

We have already done it here, and if you see this, we have two macvlans created: one I created many days back, the other a few minutes back.
A
Yep, and then as an attachment on your CNF, you would reference the relevant network attachment definition.

So it's sort of like labels — that marks the pod as the CNF that needs to use those plugins, actually.
B
So that was for Multus, and it's the same kind of thing which we are going to do for SR-IOV, and for DPDK also, as far as the "how" — the implementation part — is concerned. It's the same recipe. We just wanted to check whether our DaemonSets are running, and we know they are.
B
So once they are there, I would go ahead and again create a network attachment definition for SR-IOV. I've already done that, in a very similar way to the one I just explained — the way we did it for macvlan.
B
This is, again, the network attachment definition for SR-IOV, with its configuration, which you have to specify — it's already available on the page I showed. There are certain configuration aspects which would require a whole day if we got into them, so I will not dwell on those, but the page is self-explanatory for SR-IOV.
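As a hedged sketch of what such an SR-IOV attachment and a pod consuming it can look like — the resource name `intel.com/intel_sriov_netdevice` and all addresses are illustrative assumptions, not the demo's actual values:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ],
        "gateway": "10.56.217.1"
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: sriov-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net1
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    resources:
      requests:
        intel.com/intel_sriov_netdevice: "1"
      limits:
        intel.com/intel_sriov_netdevice: "1"
```

Note the extended-resource request: because the VF is requested like a device resource, the scheduler will only place the pod on a node that actually has a free VF to allocate.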
A
So we've kind of primed our Kubernetes platform with Multus and with SR-IOV, to enable the DPDK plugins.
B
Here, I believe — yes, here itself. So these are the pods which are using the underlying VFs.

But even if I refer to a network attachment definition, is the pod actually going to land on a VM which is SR-IOV enabled and has VFs, or not? Because in your data center there's a very high possibility that you may have, say, 100 nodes — or maybe 50 nodes — out of which, say, 27 nodes are SR-IOV enabled and the others are not. So the question is —
B
Absolutely — but the plugin only comes into the picture when the VF is there; the SR-IOV plug-in does not give me any API, or any way for the SR-IOV DaemonSet running on a VM to let me know: "oh no, there is no underlying VF here." So imagine a situation: I need to deploy a CNF on this primed platform. We have done everything and we want to deploy, but —

— we know that out of 50 nodes, only 27 nodes have SR-IOV enabled, and every node may differ: the first node has, say, 20 virtual functions, the second node has five virtual functions.

How am I going to schedule — how am I going to place — my CNF onto a node which is SR-IOV enabled? I believe that to solve this kind of question, the idea of labels and selectors comes into the picture. There should be an entity —
B
— there should be something in the Kubernetes cluster which goes and looks at every node and finds out which features are enabled on that particular node: is SR-IOV enabled? Does this VM have this particular operating system kernel or not?

So we need something which can tell the scheduler — or rather, tell us, if we make a call to it — all the labels for all the features available on a node. And this is pretty important when you are deploying and scaling your CNFs. To do this, we have the concept of node feature discovery.

Node feature discovery, again, gets deployed as DaemonSets when you deploy it. Each daemon pod goes onto a node, scrapes the node for all the information, and then creates labels from it.

We have implemented it, and it gets deployed in its own namespace — if you look here, there's a namespace I have, which is —

— I guess I'm having an issue with my screen here — all right, yeah, so there's a namespace for node feature discovery which runs the DaemonSet. So if I do a get ds —
B
— you would see node-feature-discovery DaemonSet pods running on every worker node I have. These will be scraping the data, and if I run a query against it — and this is pretty interesting if you look at the node feature discovery documentation; this is the project we are using for node feature discovery —

— it's a pretty straightforward way of doing it. If you run this command and query the labels — we are using kubectl; you can use the APIs also — it will give me back all the labels for all the features which it has discovered on all the nodes.
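Those NFD labels can then drive placement. As a hedged example — the exact label key depends on the NFD version and which feature sources are enabled — a CNF pod could be pinned to SR-IOV-capable nodes with a simple nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-pod
spec:
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  containers:
  - name: cnf
    image: alpine
    command: ["sleep", "3600"]
```

Combined with the extended-resource request for VFs, this is how a CNF lands only on the subset of nodes that can actually serve it.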
B
Exactly — so this completes the circle of how you can actually start building your telco-grade Kubernetes platform. Of course, as far as quality of service is concerned, there are other projects, like Topology Manager, which come into the picture — you can schedule your cloud native functions according to NUMA placement also, so that can be achieved in many ways — and I guess in Kubernetes 1.19 the Topology Manager has also shipped.
A
Yeah, I mean, I think we've shown how we can prime a Kubernetes platform for CNFs. There are one or two questions I want to draw your attention to, but before we do that, we could briefly discuss this: the logical step, I would say, for some projects and organizations out there, is to take these technologies that you need to add on top of Kubernetes to make it a telco platform, and simply wrap that up as a telco delivery.

So: here's your telco platform, pre-built, ready to go — just log on to this website to configure it, your IP ranges and some other parameters, your labels, naming conventions — click go, and you have your telco Kubernetes, basically. Is that kind of thing out there? Is anybody doing that, or is each telco organization taking Kubernetes — whatever distribution they prefer — and then building these layers on top?

I mean, when there's a set of steps that you have to implement, or technology that you have to add on top, usually that's the kind of thing that gets brought in as a project, or a sort of automatic set of extensions installed on top — to make a Kubernetes version for telcos, if you like. That would be something I think would be useful to a lot of telco organizations.

So there was a question again — and I think that was a good session — there was a question on: can you touch on where the Kubernetes control plane runs normally in telcos, for example in remote locations with maybe no links back to the central servers? Or touch in general on Kubernetes topology in telco environments — what do Kubernetes topologies look like for telcos?
B
Absolutely, yeah. If you look at a core telco data center: core data centers would have the control plane and master nodes geographically co-located in one particular location. But then again, when we are talking about edge computing — where you're talking about moving this particular core, maybe the 5G or 4G core, the very EPC itself —

— that is actually a core implementation, the EPC we're talking about — then it totally depends. If you're going for edge computing, you may have only your workload clusters running at the edge, whereas your control planes may be running in a geographically different location, maybe somewhere in your data center.

But we also need to understand that even though the edge nodes would be running in a different geographical location from the masters, they also need to be primed with all these technologies, as you mentioned, for them to actually capitalize on the high-bandwidth data transfers at the edge — Multus and everything else. The topologies can come in multiple ways; it totally depends on the use case.

A typical topology — rather, a very fascinating, very typical topology — would be this: if you look at edge computing, and at things like VMware ROBO, or vSphere ROBO, you would have the control plane in one place — and the same in Red Hat, as well as in Rancher; there are very different companies implementing this in very different ways.

So the worker nodes will be at the edge location, whereas the control plane will be located in your data center — that is highly possible. There may also be use cases where you have your control plane, to a certain extent, on your edge sites. It totally depends on those use cases.
A
I mean, you can see there's a scenario where maybe, with some of those edge implementations, some of those particular devices may morph or expand or scale into being a core, right, with the growth of that area. I think Frank in the chat has kind of answered this.
A
A
Yeah, that's awesome, Frank, thank you. I think we've kind of covered all we wanted to cover today on, you know, how do we build, how do we extend your company's platform to be CNF-ready, right? You know, there's more to talk about, and, as I said at the start of the episode, we just wanted to try this session.
A
We weren't sure how it would go down, but we kind of wanted to pitch this as the first telco session that potentially we, or somebody else, might give on this platform, and, you know, this is kind of the first step we see in sort of implementing CNFs, you know.
A
Obviously there are lots of projects out there that we maybe haven't touched on as much; we've kind of talked about the ones that we were demoing here today, but it's a huge space, and so I hope maybe we come along and do another one at sort of the next stage, right? What do you think?
B
Yeah, yeah, there are a lot of projects which we'd love to talk about and discuss, and I guess today what we really wanted to do was explain how you can prime Kubernetes for telco-grade workloads. But there are many other use cases which we can actually talk about; there are different kinds of CNI plugins which we need to look at in very different ways for telco.
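One common way those telco-oriented CNI plugins get wired in is via Multus, which attaches secondary network interfaces to pods alongside the default cluster CNI. A minimal illustrative sketch, not demoed in the episode; the attachment name, master interface, and addresses are placeholder assumptions.

```yaml
# Hypothetical Multus NetworkAttachmentDefinition giving a CNF pod a
# secondary macvlan interface on the data-plane network (placeholder values).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: telco-data-plane
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": {
      "type": "static",
      "addresses": [ { "address": "192.168.10.10/24" } ]
    }
  }'
```

A pod would then request the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: telco-data-plane`, keeping its primary interface on the default CNI for ordinary cluster traffic.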
A
Yeah, we mentioned the likes of Mellanox and similar implementations as well, which is another interesting one that we talked about, so yeah, interesting stuff. I really appreciate everybody's input from the chat; it's made it really enjoyable, and I hope we've kind of responded in kind and made this episode interesting for you guys taking the time out to join us. We really appreciate it.
A
So that's it for me. I'm happy that it seems to have gone okay, and again thanks to everybody, especially the TGIK back team, Paul and George, and everybody who kind of sorted us out today in terms of setting up the audio and the visual and getting everything to work. Really, really appreciate it. Thank you.