Description
We believe in an open ecosystem in the telco world. CSPs should learn from the open-source, community-driven approach, apply carrier-grade requirements, enforce SLAs, and be brave.
In this webinar, we want to show how to achieve significant cost reduction using open-source projects like ONF EPC, CNCF projects, and others to migrate from legacy Network Functions or vendor-specific VNFs to cost-effective, flexible, and future-proof CNFs. We want to show you how to tame legacy protocols like SIP, DIAMETER, and SMPP, and prepare for the 5G RESTful API.
Presenters:
Grzegorz Sikora, VP Business Development @OVOO
Rafał Myśliwiec, Software Engineer @OVOO
Paweł Kulpa, Software Engineer @OVOO
A: Okay, let's get started. Hello, everyone. Thank you very much for joining us for today's CNCF webinar, "How to Migrate NF or VNF to CNF Without Vendor Lock-in." I am Jerry Fallon and I will be moderating today's webinar. We would like to welcome our presenters today: Grzegorz "Greg" Sikora, VP of Business Development at OVOO; Rafał Myśliwiec, Software Engineer at OVOO; and Paweł Kulpa, Software Engineer at OVOO. Just a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that is in violation of the Code of Conduct, and please be respectful of your fellow participants and presenters.
B: Okay, thanks, Jerry. Yeah, so my name is Greg Sikora. I will be going through this presentation today together with Rafał and Paweł, and today we're going to talk about migration of network functions to cloud native functions without vendor lock-in. How does it work? We will see.
B: …in the operator environment. But privately I am an ultra-adrenaline runner, and I am a telecom and cloud expert with 20-plus years of experience and a blockchain-addicted enthusiast. Rafał Myśliwiec is our Swiss Army knife of messaging, experienced in software engineering and an expert in legacy protocol migration, and Paweł Kulpa is responsible here for cloudification; he is our master chef of cloudification.
The company was founded in 2012, and since the beginning we have been focused on telco. So telco is our domain, we do telco best, and we have a few products; here are some examples of our products.
B: It's quite easy to find open-source projects which fulfill your needs, but the most important thing is to know how to connect them together and use them to provide a solution which is carrier grade, or telco grade, and which can provide an SLA, and we have this experience: we did more than 15 big migrations in which we were using open-source projects.
B: Definitely smart everything. 5G is not just the next G; it's completely different, it's new, and it puts new demands on us. We can work and play in the cloud.
B: We have huge demands for data, gigabits per second: augmented reality, virtual reality, new use cases like vehicle-to-vehicle communication, which require extremely low latency. On the other hand, we have a lot of IoT devices which communicate with each other, sending small portions of data, and we have home customers who are using huge pipes for ultra-HD video and so on, or online gaming.
B: As you see, there is a path between network functions and cloud native functions, and this is a kind of evolution to the cloud. Before 2015, most services and applications were deployed as native network functions, which were usually deployed as a solution placed on bare metal in the service provider's data center. Then we had the hype of NFV, and most of those legacy solutions were somehow migrated to VNFs, virtual network functions, but it wasn't disruptive; it was just squeezing and fitting the legacy solution into a virtualized solution.
B: Cloud native functions are a completely different story. The solution has to be redesigned to be able to work in such an environment, and nowadays we have some kind of hybrid deployments, because we have to fulfill the requirements of legacy services and legacy protocols. In the future everything will be purely cloud, using RESTful APIs to communicate with each other. And how do we do this evolution? We have to focus on different areas; I mention a few of them here, like the development process. Definitely, you are not able to do that in a traditional waterfall.
B: You have to be DevOps and do this kind of continuous integration / continuous delivery process. The application has to be migrated from monolithic to microservices: it has to be cut into smaller portions, and those smaller portions, the microservices, have to be containerized to be ready for the cloud. This is more or less how we see this evolution on the following slides.
B: We're going to show how to do that step by step, and based on this slide we worked out our architecture blueprint for such kinds of services, which can be deployed either in a service provider's back-end data center or at the edge.
B: And here we have the architecture blueprint, and I'd like to mention one important point here. Previously, as I mentioned, network equipment providers who wanted to do a migration from network function to virtual network function just migrated the service as such.
B: Migration to a cloud native function requires redesign and shifting part of the functionality, part of the service, to open-source projects, and here we are showing how to do that. For high availability and for automatic deployment we are using products like Ansible, Terraform, Docker, Kubernetes, and Chef, so we can use them to do automatic deployment on infrastructure as a service or even on bare metal; depending on the service provider's needs and demands, we can use different layers.
B: On top of that, we have to manage the access layer and communication with other services and platforms, using either HTTP RESTful APIs or SS7 protocols like CAMEL, MAP and so on, plus DIAMETER, SIP, and SMPP. Then we have the services, and we don't need any middleware here like, previously, JSLEE or SIP servers and so on; we can use frameworks like Akka or Spring Boot to implement a service, and use Kubernetes to orchestrate those services.
B: On top of that, we need a data layer, either for session replication or for persistent data, using different applications and solutions like Hazelcast and CockroachDB, and Ceph for block storage. I wanted to make my part extremely short, because the most exciting part starts right now, so we will move forward to the demo part. Rafał, the mic is yours.
C: Some of them will be redesigned slides. As Greg mentioned earlier, some will be presented verbally, and other slides will be shown in a practical way by Paweł; by that I mean the creation of the Kubernetes cluster and the other parts of our migration work. So at the beginning we have a single-instance, monolithic application, hard to maintain, which is very complex, because it's not an easy, stateless application: it contains a lot of advanced mechanisms, such as caching and queuing.
C: What's most important, it is handling the integration with telco protocols such as SIP, MAP, SMPP, and DIAMETER, which makes them ready to cooperate with the 5G world in a cloud native environment, which is the kind of thing that all network function providers were afraid of. And at the end we want to achieve a distributed, microservice-oriented system which is integrated with the Kubernetes stack. Please go to the next slide.
C: The reason why we selected it is that it has the greatest variety of integrations with telco protocols, so SIP, SMPP, MAP, DIAMETER with OCS, and so on, and it is a kind of replacement for the earlier SMSCs which existed on the market, which were non-distributed and which worked in a legacy NF or VNF way.
C: So when it comes to the initial infrastructure, what do we have right now? At the bottom we have the external infrastructure, so we have the client layer, and at the upper level we have our local infrastructure, which is what we have inside our virtual machines. So everything we have is in OpenStack, as you can see here.
C: As we can see, every service lies on a separate virtual machine. Also, when it comes to the services, they are monolithic: they handle the logic and they handle the API within a single deployable unit; they have a separate cache layer, a separate queuing layer, and separate connection points with the clients, with the SIP, MAP, SMPP, and DIAMETER clients, which is very hard to maintain and causes us problems when we want to scale up or scale down, or when we want to change versions.
C: It requires some interventions, not only from our side but also from the external point of view. We also have the management database, which is MySQL, which is very good for us; but when we want to move into a cloud native environment, we want to have a distributed database.
C: And step number one, what we will do here, is to replace the old mechanism which makes our system single-stated and separate when it comes to queuing, and make it shared and distributed. So we will use Hazelcast as the caching service: for caching of the state and for caching of the configuration management. Here we could also use Aerospike, or Redis, or any other distributed caching system that one can imagine, and likewise for the queuing layer.
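A distributed cache of this kind can be wired up declaratively. As a minimal sketch (the cluster name, namespace, and service name below are illustrative, not from the webinar), a Hazelcast cluster on Kubernetes can discover its members through the Kubernetes API instead of multicast:

```yaml
# hazelcast.yaml — member discovery via the Kubernetes API
hazelcast:
  cluster-name: session-cache   # hypothetical name
  network:
    join:
      multicast:
        enabled: false          # multicast rarely works on a cluster network
      kubernetes:
        enabled: true
        namespace: messaging    # hypothetical namespace
        service-name: hazelcast # headless Service fronting the members
```

With this, scaling the pod count up or down automatically grows or shrinks the shared cache, which is what makes the state layer distributed rather than per-instance.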
C: In step number two, when we have a distributed service, we also have the problem that the logic is coupled with the access layer, so we isolate the access layer: we are building API gateways for each protocol separately, making them connect directly to the client and keeping all the internal communication over HTTP.
C
So
our
migration
was
not
that
hard,
but
also
to
make
this
database
to
have
the
database
to
be
a
transactional
one.
It
is
important
for
us
that
this
distributed
database
will
be
a
transactional
because
of
requirements
from
many
clients.
That's
why
we
chose
to
go
with
cockroachdb,
which
is
one
of
the
databases
also
recommended
by
cncf.
C
It's
not
only
distributed,
but
also
easily
horizontally
scalable,
it's
fault,
tolerant
and
it's.
When
it
comes
to
migration.
We
had
some.
We,
we
didn't
have
any
problems
when
it
comes
to
access
layer,
because
the
only
thing
to
do
was
to
migrate
drivers
and
make
some
minor
changes
in
sql
statements,
but
they
they
are
only
because,
because
the
driver
is
changed
from
mysql
to
postgresql,
and
but
these
are
only
minor
small
changes.
Also,
we
had
to
make
a
migration
of
the
triggers
to
the
application
for
some
deployments
it
can
be.
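The driver swap Rafał describes can be as small as a datasource change, because CockroachDB speaks the PostgreSQL wire protocol. A hedged sketch in a Spring Boot style configuration (the property keys are standard Spring, but the hostnames, database name, and ports are invented for illustration):

```yaml
# application.yaml — before: MySQL
# spring:
#   datasource:
#     url: jdbc:mysql://mysql.internal:3306/mgmt
#     driver-class-name: com.mysql.cj.jdbc.Driver

# after: CockroachDB via the stock PostgreSQL JDBC driver
# (26257 is CockroachDB's default SQL port)
spring:
  datasource:
    url: jdbc:postgresql://cockroachdb-public:26257/mgmt?sslmode=require
    driver-class-name: org.postgresql.Driver
```

The SQL-statement tweaks mentioned above come from PostgreSQL-dialect differences (quoting, auto-increment syntax, and so on), not from the application logic.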
D
First,
we
need
the
kubernetes
cluster,
our
cluster
contain
a
tree
master
and
with
etcetera
and
three
workers
for
storage
process
and
volume.
We
we
are
using
itself.
We
deploy
our
kubernetes
cluster
on
openstack.
D
Before
a
webinar,
I
record
some
video
from
creation
a
cluster
and
deploying
a
application
on
kubernetes
yeah.
D: Okay, this is the video. First, we create a cluster in Rancher and type a name for it.
D: And we copy the command, which we must run on every node with the proper role parameters.
D
To
create
virtual
machine
on
openstack,
we
hit
with
cloud
in
it
and
we
add,
comment
to
run
after
a
machine
will
be
ready,
of
course,
to
cloud
in
it
and
we
set
proper
parameter.
Node.
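The cloud-init step can be sketched as follows; the Rancher agent image, server URL, and token are placeholders (each Rancher cluster generates its own registration command), so treat this purely as the shape of the file:

```yaml
#cloud-config
# Runs once, after the OpenStack VM boots for the first time.
package_update: true
packages:
  - docker.io
runcmd:
  # Placeholder: Rancher shows the exact command (real server URL and
  # token) in its cluster-creation dialog. The role flags
  # (--etcd --controlplane vs. --worker) differ per node.
  - docker run -d --privileged --restart=unless-stopped
    --net=host -v /etc/kubernetes:/etc/kubernetes
    -v /var/run:/var/run rancher/rancher-agent:v2.x
    --server https://rancher.example.com --token <TOKEN>
    --worker
```

Embedding this in the Heat template's user data is what lets every freshly created VM join the cluster with no manual step.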
D: Now we can see on our OpenStack dashboard that we have six virtual machines, three masters and three workers, and we can see in Rancher that we have a properly provisioned Kubernetes cluster.
D: Okay, next slide. Okay: the SS7 protocols use SCTP, so we need to enable it in our Kubernetes cluster, because SCTP isn't enabled by default.
D: For SCTP, the standalone SIP stack and the SS7 SCTP layer required adaptation to properly expose SCTP and SS7 toward the external infrastructure: a minor SCTP update to establish the association on a NodePort, with the SS7 layers above it not impacted. In the SIP stack there are a lot of changes in message processing, due to the IP addresses and ports present in different headers, like Via and Route.
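Exposing an SCTP association on a NodePort, as described above, boils down to a Service whose port uses the SCTP protocol. A minimal sketch (names and the NodePort value are illustrative; on Kubernetes versions before 1.20 the `SCTPSupport` feature gate also has to be enabled on the API server and kubelets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ss7-sctp          # hypothetical name
  namespace: messaging    # hypothetical namespace
spec:
  type: NodePort
  selector:
    app: ss7-gateway      # hypothetical pod label
  ports:
    - name: m3ua
      protocol: SCTP      # the non-default part: SCTP instead of TCP/UDP
      port: 2905          # M3UA's registered SCTP port
      targetPort: 2905
      nodePort: 32905     # must fall inside the cluster's NodePort range
```

External SS7 peers then establish the association against any node's IP on port 32905, and kube-proxy forwards it to the gateway pods.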
D: …balancing. On our cluster we install the necessary components, which are the Elastic stack, Cassandra, and Kafka. To install open-source software we are using Helm, and before that we prepared the proper values files. Now, with Helm, we are installing Kafka.
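The Helm step can be sketched like this; the chart name and values below follow the widely used Bitnami Kafka chart, which may not be the exact chart used in the demo, and the StorageClass name is invented:

```yaml
# values-kafka.yaml — prepared before the install, as in the demo
replicaCount: 3           # one broker per worker node
persistence:
  enabled: true
  storageClass: ceph-rbd  # hypothetical StorageClass backed by Ceph
  size: 20Gi

# Install commands (shown as comments since this file is plain YAML):
#   helm repo add bitnami https://charts.bitnami.com/bitnami
#   helm install kafka bitnami/kafka -n messaging -f values-kafka.yaml
```

Keeping the values file in version control makes the install reproducible, which is what lets the same stack be redeployed on any cluster.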
C: Coming back to the logic services: at first, as they were on pure virtual machines, there was no demand for them to be dockerized, but now, when we want to move to Kubernetes, we had to do it.
C
Also,
when
it
comes
to
our
ci
cd
pipelines,
they
must
have
been
reconfigured
in
order
for
them
to
be
integrated
with
our
new
docker
registries.
Our
new
dock
kibernet's
cluster.
So
now
jankis
was
about
to
build
an
image
to
push
it
to
the
docker
registry
and
then
deploy
it
to
the
kubernetes
cluster
when
only
needed.
D: Okay, we create a ConfigMap. A ConfigMap allows us to separate configuration files from the image content: instead of putting all the config files inside the image, we create a ConfigMap. This example shows the config for the SS7 MAP gateway and the configuration for the messaging gateway.
C: In these files we can configure the parameters needed for our service to work properly: for example the volumes, connections, open ports, and all the other features that are provided to us by Kubernetes and that we want to use, so, for example, the application scenarios, replica numbers, and so on.
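Put concretely, such a ConfigMap and its mount might look like this; the names, keys, and property values are invented for illustration, not taken from the demo:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: messaging-gateway-config   # hypothetical
  namespace: messaging-gateway
data:
  gateway.properties: |
    # configuration kept outside the image, editable per environment
    smpp.port=2775
    queue.size=10000
---
# In the Deployment's pod template the ConfigMap appears as files:
#   volumes:
#     - name: config
#       configMap:
#         name: messaging-gateway-config
#   containers:
#     - volumeMounts:
#         - name: config
#           mountPath: /opt/gateway/conf
```

Changing a value then means editing the ConfigMap and restarting the pods, with no image rebuild.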
D: Okay, we are creating the messaging gateway deployment in the messaging-gateway namespace, and we create a service for that deployment.
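A stripped-down version of such a Deployment and its Service might look as follows; the image name, labels, and port are placeholders, not the actual demo manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messaging-gateway
  namespace: messaging-gateway
spec:
  replicas: 2
  selector:
    matchLabels: { app: messaging-gateway }
  template:
    metadata:
      labels: { app: messaging-gateway }
    spec:
      containers:
        - name: gateway
          image: registry.example.com/messaging-gateway:1.0  # placeholder
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: messaging-gateway
  namespace: messaging-gateway
spec:
  selector:
    app: messaging-gateway
  ports:
    - port: 8080
      targetPort: 8080
```

The Service gives the gateway a stable in-cluster address, so other services keep working while pods are rescheduled or scaled.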
C: …the gateway. So that's it; I think all the steps have been discussed. Now I will summarize the technical part. Before the migration, what did we have? We had an unscalable, unmanageable monolith with no shared state and no shared queues, so the logic layer was not distributed at all. We had manual MySQL replication, master to slave and slave to master, which was non-transactional and with which we had problems, with synchronization and so on.
C
We
had
the
situation
where
each
client
had
to
adjust
option
config
after
our
internal
infrastructure
changes,
also
virtual
machines
added
resources,
utilization
overhead
not
only
for
our
services,
but
also
for
the
operational
maintenance
staff,
so
for
elastic
for
for
prometheus
and
grafana
stuff
and
so
on,
and
we
had
to
do
manual
deployment
manual
scaling
of
the
services.
C
Actually,
the
only
thing
what
we
could
have
done
was
to
optimize
it
with
some
kind
of
uncivil
stuff
and
so
on,
but
you
know
this.
This
also
is
a
piece
of
code.
This
also
had
to
have
to
be
managed.
This
is
not
provided
as
the
ready
to
go
services,
as
it
is
being
done
right
now
with
the
kubernetes
environment
and
human
nets,
community,
and
so
on
and
after
the
migration.
C
After
the
migration
we
have
logically
simple
architecture,
we
have
fault
tolerancy
for
basically
all
our
services.
What's
most
important
for
our
logic
services,
when
some
kinds
of
disruptions
appear
on
our
logic
services,
they
will,
they
will
be
restarted
or
the
pods
will
be
restarted
and
no
changes
in
configuration
will
be
required
from
the
client
side.
C
No
resource
utilization
when
it
comes
to
os.
We
have
also
ready
to
go
configuration
from
the
vendors
that
are
giving
us
the
the
oem
stuff
operation
and
maintenance,
so
for
elastic
for
lockstar,
for
hibana,
grafana
and
prometheus,
and
so
on.
For
even
cockroach.
We
all
have
ready
to
go.
Config
recipes
from
the
vendors
within
we've,
also
the
kubernetes
yaml
files
to
integrate.
So
we
don't
have
to
invent
anything
because
this
con
this
integration
with
kubernetes
is
pretty
straightforward
and
managed
by
the
defender.
C
So
no
no
worries
here.
What's
most
important
for
us
and
for
the
telco
word,
we
proved
that
the
integration
with
the
sometimes
legacy
teleco
protocol,
such
as
zip
map
smpp
diameter,
is
possible
and
can
be
done
with
a
little
bit
extra
effort
that
we've
managed
to
do
with
within
the
kubernetes
environment,
also
with
the
kubernetes
network
and
so
on,
and
coming
back
to
the
effects
that
we've
made.
C
These
are
the
numbers
only
for
our
migration,
so
these
numbers
can
differ
completely
when
it's,
depending
on
the
deployments
that
you're
running,
but
for
our
for
us
in
a
scenario
when,
where
we're
running
a
campaign
of
smss
with
1000
tps's
that
are
affecting
all
of
our
services
inside
we
achieve
which
we
achieve
the
visible
numbers,
the
presented
numbers
of
the
reduction
when
it
comes
to
computer
compute,
computation.
B: Usually, for instance, even if you have an SDP, a service delivery platform, you have many services running on the same platform: monolithic, big deployments, heavy regression testing. So this is something that uses up a huge amount of time and resources, from both the vendor and the customer perspective; a complicated deployment procedure, usually manual, with a lot of coordination and so on; heavy acceptance tests; and limited or even no automatic scalability.
B
First,
from
the
technical
perspective
we
can
in,
we
can
tailor
to
devops
mode
where
we
have
quick
updates
and
smooth
updates
upgrades,
and
we
can
even
upgrade
piece
of
software
and
and
and
also
something
what
wasn't
even
possible.
And
we
couldn't
imagine
before
the
well-known
a
b
test
from
web
word
here
can
be
applied
as
well.
B
We
can
imagine
that
we
can
update
just
one
one
port
and
verify
a
small
portion
of
traffic
that
it
works,
and
so
we
are
able
to
do
that
out
of
the
box
automatic
scalability
and
high
availability,
and
this
is
something
what
gives
us
kubernetes
once
we
are
able
to
ut,
tailor
legacy
protocols
and
squeeze
our
service
to
to
to
be
implemented
the
support,
and
so
we
we
can
take
advantages
of
such
approach.
B
Simplified
automatic
testing
configuration
so
we
can
use
continuous
integration,
continuous
delivery
pipelines
and
apply
automatic
testing
there.
As
well,
and
as
rafael
mentioned,
we
have
reached
operation
maintenance
tool
set
which
can
which
we
can
use.
We
have
ready
to
use
configuration
and
so
on
on
the
business
side,
so
definitely
cost
effect,
effectiveness
and
time
to
market.
B
I
know
this
is
buzzword,
but
we
shown
that-
and
we
proved
that
even
I
remember
before
we
started
playing
with
with
kubernetes
our
engineers
were
a
bit
scared
and
they
said
okay,
it
has
to
last
a
long
time
we
have
to
to
change
our
processes
and
so
on,
but
at
the
end
of
the
day
we
see
that
it's
it's
straightforward
once
you
do
that,
it's
the
next
projects,
next
services,
it
will
be
really
straightforward
and
you
can
do
that
very
quickly.
B: So if you know what to use and how to use it, you can rule the world. And service providers, last but not least, can even contribute to the open-source development community, and we have such examples: an operator in Poland was contributing to ONF, and they contributed to developing the EPC for a use case which was important for them, and they did it, and right now this work can be used by other service providers; other operators can reuse it.
B
So,
as
you
see,
let
me
use
some
annotation
yes,
so,
as
you
see
here,
we
have
monolithic
applications
and-
and
actually
this
is
actually
previously,
that
the
service
was
really
monolithic.
We,
we
added
the
whole
operation
maintenance
stuff.
We
used
open
stock,
we
we
had
automatization
and
so
on,
but
but
but
still
the
application
was
monolithic,
and
we
had
here
huge
logic
and
the
application
was
quite
heavy.
B
We
get,
we
got
rid,
sorry
we
removed.
Where
is
my
pen.
B: For resilience, we used other open-source projects for session replication, so we don't have to implement that again. We used an open-source project for the database, for storage, and we are using as well ready-to-use patterns for deployment of the operations and maintenance stack. Here some work is needed to separate the legacy protocols, and at the end of the day you have migrated.
B
You
have
to
consider
the
concentrated
business
logic,
which
is
just
part
of
the
solution,
and
even
if
you,
if
you
say
I
don't
want
to
to
use
solution,
I
don't
I
want
to
to
use
different
solution.
You
can
replace
this
part
as
well.
So
this
is
something
what
what
is
important
to
mention
that
that
service
provider
operator
has
freedom
and
and
from
our
perspective,
we
have
to
justify
and
show
that
we
we
are
available
partner,
vendor
and-
and
we
want
to
be
as
as
as
good
as
possible
to
to
provide
the
best
solution.
A: All right, well, thank you very much, everyone, for that wonderful presentation. We have about nine minutes for questions, so please feel free to drop them into the Q&A section. "With a reference to a VNF, where a specific file for it has connection points, VDUs, and virtual links: how are these components translated to a CNF, or in your…?"
B
So,
maybe
in
in
general,
because
I
I'm
not
sure
that
I
fully
understand
the
question
but
but
what
we,
what
we
we
shown
and
we
presented
that
virtual
environment,
for
instance.
B
In
our
case,
we
are
using
openstack,
so
we
can
use
automatic
deployment
and-
and
in
our
in
in
our
case,
chef
for
for
automatic
deployment,
and
we
can
use
resources
like
computation
memory,
storage
from
virtualized
environment,
and
if
we
want
to
to
migrate
to
different
virtualized
environment,
we
have
to
repeat
this
work
and
use
different
automatization
and
deployment
patterns,
for
instance,
for,
for
instance,
if
you
want
to
to
make
grades
to
to
public
cloud
or
or
vmware
based
solution
in
case
of
cnf.
B
So
we
are
providing
this
solution
as
a
solution
which
runs
spots
in
kubernetes
cluster
and
nevertheless,
where
this
kubernetes
cluster
runs,
we
can
use
the
same
deployment
patterns.
I
hope
so
that
I
replied
to
these
questions
in
right
away.
A
He
added
a
little
more
specification
for
the
for
the
question
for
a
specification
file
which
have
which
has
tosca
templates
for
the
vnf.
B
Okay,
okay,
because
pavel
maybe
this
is
a
question
for
you,
because
in
case
of
vnf
we
are
using.
Chef,
am
right.
D: …templates, yeah. We create a template file, for example for a keypair, for an instance, for a network, for a tenant, and all that stuff, and in this case, for our OpenStack Kubernetes cluster, we are using this tool to create the virtual machines, and our Kubernetes cluster is placed on those virtual machines. The VNF template files are used to create the instances needed to create the cluster.
B: …domains. So this example is for real-time services and messaging gateways. Another example which we have and didn't mention is OCS: we have our OCS, online charging system, migrated to a CNF as well. We are not OSS providers; we rather concentrate on real-time use cases, like prepaid, like MMTel, OCS, and so on.
B: We can recommend deploying Kubernetes on OpenStack or on bare metal; we can decide and select the solution: OpenStack if you want to have flexibility, bare metal if you want to have efficiency. For instance, an EPC gateway, like the P-gateway, should run on bare metal, because we need efficiency and we have to be as close as possible to the CPU.
D: On our OpenStack we are using only the network service, and our clusters are communicating on this network, and that's it.
B: Again, I'm not sure that I understand the question in the right way, but as Paweł presented here, we have OpenStack, and we are using VMs provided by OpenStack to deploy the workers and masters, and we are using Ceph for storage. And what are we using for networking, Paweł?
D: For networking here we are using Flannel.
A: Okay, well, thank you all for a wonderful presentation and for a great Q&A session. That's all the time we have for today. This webinar will be available later today on the CNCF webinar page at cncf.io/webinars. Thank you to our presenters and to all of you for joining us today. Everyone take care, and we will see you all next time.