From YouTube: CNCF Telecom User Group Meeting - 2020-12-07
Description
No description was provided for this meeting.
A: So, this is the Telecom User Group, and this call is being recorded; the meetings are posted to YouTube. So if you're presenting anything or speaking, it will be presented on a public forum. The meetings are monthly, on the first Monday, and the meeting times alternate.
A: If there's an item that you want to discuss... I do think that a lot of the discussions that have been happening in the CNF Working Group channel in Slack, and there are quite a few of them, could probably be brought into the TAG to focus on. The Telecom User Group is meant to be a place where we can have a more open discussion about any type of concerns or ideas related to the telecom domain, as people come in and look at new technology within, I'd say, the CNCF ecosystem, so the cloud native ecosystem and community across the board.
A: So, does anyone have any of these things? I mean, I see Tal and some other people here that have been presenting and talking quite a bit in many areas. But if y'all have any topics that we haven't gotten to in the CNF Working Group, where it's been kind of pushed to the side, then please speak up, or we can add them to a future TAG agenda.
A: All right. So right now the CNF Working Group is weekly, and that means it's right after this call. We may make some adjustments so they're not back to back like that going forward. Right now we're looking at having a CNF Working Group meeting today, and likely next week; there may be a gap for the holidays, and then it starts up again.
A: So there could be stuff related to the TAG and the CNF Working Group, and potentially the CNF Testbed and the CNF Test Suite will be on that track; we're still trying to work out the details. That's in February. And the KubeCon + CloudNativeCon Europe CFP will be closing on Sunday, December 13th, so we've got a week if you're going to get something in. It's worth putting that out there, and there have been more and more talks on these topics.
A: So please get stuff in there, so the Kubernetes and cloud native community gets more and more engaged directly. All right, Bill, do you want to talk about the white paper? I can hand it to you; I'll stop screen sharing for a minute.
B: Sure. So, for some reason my internet connection is slightly unstable today, but in case anybody wasn't aware: the first white paper that we were working on in the Telecom User Group has now been published, and you can find it in the GitHub repo. I can add a link; the link's already in there.
B: So if you want to use this to go out and talk about cloud native, please feel free to use it as a reference point going forwards. I think it's a great first step for this group, and I look forward to seeing more white papers coming out of it. And with that, I think that's a good transition to Jeffrey Saelens, to talk about maybe the next white paper to come out of this group. So Jeffrey, do you want to take it?
C: Okay, yeah. So first I'm gonna throw out a disclaimer: I started this 18 months ago, and in our space things move pretty fast, so just bear with me here. But the initial audience for the white paper that Bill was just referencing was really that kind of CTO, senior architect level.
C: Right, there were some discussions in the CNF Working Group about motivations, like why cloud native, and I think that first white paper was kind of trying to achieve some of that: just a generic "why would we do cloud native in the first place?". And to me the general motivation, I think, is covered there. If we take off the niche-use-case hat for a second and think about what our enterprise web services look like, how we present our stuff to customers, how we run a lot of our own internal IT clouds in telco and so on, I would say, and I'd be willing to bet, that covers the vast majority of us.
C: I know Charter does a ton of cloud-native-esque software development for things like our online marketplace; it runs in K8s, it runs in the cloud, etc. So this was supposed to capture some of the early discussions I had with other providers, like Mr. Bernier down here, and some of the vendors, like Gergely on the call, talking about some of the drivers and challenges.
C: Why would we do cloud native for the actual telco and cable workloads? Why would we shove a CMTS into containers, or whatever? Why would we do the packet core the way we're starting to move? I mean, the packet core standards themselves are now actually starting to dictate cloud native and cloud-centric approaches, so this is creeping into our standards bodies as well, as we continue to permeate into the cloud native world.
C: So this is your typical, just generic "here's why we're doing it". It'll need to be updated; like I said, it's pretty old. But the long and short of it is that not only do a lot of us think this is a good idea, it's being forced on us, right? Vendors also have to have their economy-of-scale discussions internally, and there's no way they're going to be able to support and staff a future where they offer a physical packet core, a virtual packet core, and a cloud native packet core. They need to focus their efforts, and a lot of our vendors are saying, hey, this is the direction we're going, Charter, you need to get ready for it. I have these discussions with a lot of the big players pretty regularly. So then it's really getting down into what pieces should be cloud native.
C: Why are we doing this? That's kind of my attempt here: looking at some of the challenges we had with NFV and seeing if we can remediate some of those and not carry them forward. I used some of this business speak; I tried to do a little bit of research. But you know the whole crossing-the-chasm, early-majority thing, right? At this point K8s has crossed the chasm: it's used in a lot of places, it's widely adopted, and it's really not even that chic anymore. It's kind of a safe technology for hosting and orchestrating containers, which is what the marketing people say. Those of us that are running real-world K8s workloads want to pull our hair out sometimes and cry in the shower so nobody can see our tears, but for the most part it's baked into a lot of stuff. One thing, though: I don't think this whole diagram, the one that shows the little curve going up and across, tells the whole story.
C: Once your technology is widely adopted, as this ecosystem continues to grow, you're constantly in this weird spot where more and more stuff is being added to this extensible platform. So while K8s itself is established, you'll have early adopters pulling different technology buckets into it: lots and lots of companies are running K8s, but not every company that runs K8s is using a service mesh.
C: A lot of the optimizations that Tal covered, around the topology manager and some of the fancier CNI multiplexers like Multus, DANM, etc.: those are out there, and we have early adopters that are consuming them and doing cool stuff with them, but there are a lot of people who are still scared of them. So one of the things I'm hoping this group and the CNF Working Group provide, with the best practices and such, is for those that are in the more risk-averse spaces.
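As a rough illustration of the CNI-multiplexer setups mentioned above, here is a minimal sketch of registering a secondary network with Multus and referencing it from a pod, using the official Kubernetes Python client. It assumes Multus is already installed in the cluster; the host NIC, subnet, and resource names are placeholders, not recommendations.

```python
# Sketch: register a Multus NetworkAttachmentDefinition (a CRD), then point a
# pod at it via annotation. Multus wires up the extra interface at pod start.
import json
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster in ~/.kube/config

cni_conf = {
    "cniVersion": "0.3.1",
    "type": "macvlan",          # secondary interface via macvlan
    "master": "bond0",          # host NIC to attach to (placeholder)
    "ipam": {"type": "host-local", "subnet": "10.10.0.0/24"},
}

nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "dataplane-net"},
    "spec": {"config": json.dumps(cni_conf)},  # Multus keeps CNI JSON as a string
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io", version="v1", namespace="default",
    plural="network-attachment-definitions", body=nad,
)

# A pod then requests the extra interface with an annotation:
#   metadata:
#     annotations:
#       k8s.v1.cni.cncf.io/networks: dataplane-net
```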
C: How do we know where something is within the ecosystem as a whole, as far as its adoption and its stability? Like, technically IPv6, which is a huge, huge thing for all of us service providers, is still in, what, beta status upstream? It's slowly but surely getting more mature with every K8s release. But sometimes even just the terminology, how CNCF designates whether something's alpha, beta, or GA, is different from how the rest of us use it.
C: If one of my executives hears "beta", it doesn't matter if it's been in beta for seven years and it's super stable; it's "no, we don't put beta code into our production network". So it's about trying to change those perceptions. So yeah, some of the challenges. I mean, you have the generic telco challenges: integration is hard, right? We have these giant brownfields, and we're constantly putting new greenfield stuff into them.
C: When do we just completely slice off a section of our infrastructure, run it in a vacuum, and allow it to eventually grow and consume the old brownfield? When do we need to directly interface with the brownfield? While I think we're doing a lot of cool stuff in the DC space, I don't think any of us are prepared to put one of our core routers in containers on a stack of x86 yet.
C: I know Intel would love for me to do that, but I don't think we're quite there yet when we're talking about something that's got 400-gig line rate and is pushing hundreds of millions of packets a second. Tool sprawl, this is another big one. And so, once again, per Taylor's earlier disclaimer: when I think of the Telco User Group, I think of us telco users, and our vendors who help us, talking about generic challenges in the cloud native space.
C: Tool sprawl is huge. A lot of times, when we have these different engineering groups, like the one I'm in at Charter, we're constantly doing our little R&D phase, doing something cool with some piece of technology, and then we're like, hey operations, you need to deploy this, it's awesome. And it gets to the point where we start having these discussions about an integrated vertical stack of K8s versus the vendor-provided stack. You know, and, I don't know if I pronounce his name right, I apologize if I didn't, he's put a lot of stuff in, like the work that Deutsche Telekom has done around building out their OpenStack and their Kubernetes environments. And I would be willing to bet, if we had him on the phone right now, and maybe he'll be on the next call, that tool sprawl is a huge part of this. When you present different interfaces, you put a different wrapper around the K8s API.
C: You change all the kubectl commands to, you know, "os" something instead, or some Mirantis command or whatever; it's just more and more stuff.
C: And, you know, we've got all this old ALU gear, and this person is an Alcatel master from before the mergers and everything, and they're keeping all this 20-year-old equipment going for us. So you get to this point where operations says there's just no more room for new stuff, right? You need to consolidate. And this is a tough one in this new space.
C: I think this is one of the hardest challenges, because providers can't run 50 different flavors of K8s and expect operations to consume it, but there also has to be some type of concession to vendors on how they maintain an SLA in a third-party stack, a provider stack. Or can they get to the point where... one second, checking chat. And please, by all means, step in and interrupt me at any time; I'm just covering some stuff I had prepared because I was asked to. I don't know where my chat window is.
F: But I think one important aspect of all of these is somehow creating some kind of industry standards or best practices. And by standards here I mean really something that is defined in the TAG or in the CNF Working Group. Because there are lots of things: Kubernetes by default is a pluggable thing, so you can have lots of different flavors of it, and CNTT, you know, tries to build up this opinionated distribution, which somehow tries to fix all these moving parts in the infrastructure.
F: But on top of that, we still have lots of different things that are not agreed. I know the best example, and the most simplistic one, is resource naming in Kubernetes. Now we have the situation that every operator requires a different naming scheme from us for how to name the resources in the manifests that we deliver to them, and that's not really optimal from the vendor's perspective. This is the kind of thing I think we need across the industry.
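To make the naming complaint concrete: Kubernetes itself only enforces the RFC 1123 naming rules, and each operator layers its own scheme on top of that. A minimal sketch; the "opco-region-function-instance" convention below is invented purely for illustration, not any real operator's standard.

```python
# Sketch: a vendor shipping one manifest set must satisfy both the API
# server's generic naming rules and each operator's house scheme.
import re

# What the API server enforces for most resource names (DNS-1123 label).
K8S_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

# Hypothetical operator scheme: <opco>-<region><n>-<nf>-<instance>
OPERATOR_SCHEME = re.compile(r"^[a-z]{2,5}-[a-z]{2}[0-9]-[a-z0-9]+-[0-9]{2}$")

def check(name: str) -> None:
    print(f"{name!r}: k8s_valid={bool(K8S_LABEL.match(name))} "
          f"operator_valid={bool(OPERATOR_SCHEME.match(name))}")

check("upf-smf-gateway")   # fine for Kubernetes, fails this operator's scheme
check("chtr-us1-upf-01")   # satisfies both (under the invented scheme)
```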
C: Yeah, I agree. I will say, and I'm going to cover something like this down here in the NFV-specific stuff, that one of the places where I think CNTT has helped, but where, Gergely, I think we still need some help, is around defined interfaces. We have the reference architecture and the reference implementation, but there's still enough room for interpretation that I could set up K8s wrong and still follow the guide, or I could do something that breaks how you interact with it.
C: I'm hoping the best practices would be something that supplements the reference architecture. And I've said this in this channel and in the other chat: no, I don't think we need another reference architecture. We have them, right? I know how I can plumb multiple interfaces into something; I know where K8s goes in the stack; there's plenty of documentation to pull from. I think it's that next level. And so, jumping around really quickly here: the SOL 00x interfaces in ETSI.
C: This was something where, I mean, there was a reference architecture that the ETSI MANO group put out early on, right? You have your NFVO, your VNFM, and your VIM; here are all the main pieces, and the vast majority of us built those. But the interaction between those pieces was not well defined.
C: I feel like it's only now finally starting to get caught up, but there's a lot of ambiguity. And we've already discussed that this isn't a standards body, and we're not going to be able to dictate a bunch of stuff to a whole lot of people; for one, you'll never get Charter, AT&T, and Comcast to build the same stack. It'll just never happen. So it's figuring out which ambiguity needs to be squashed out because it can't be covered in an API, versus...
C: ...what is something reasonable that a JSON payload or a TOSCA model or something like that could reasonably account for, with some flags to determine "I'm going down path X or Y". Because I can just tell you, we went down this path where, you know, the MANO stack, the NFVO and the VNFM, are technically two separate functions in the reference architecture, and at every single service provider we wanted to pull those apart: we wanted a best-in-class NFVO and a best-in-class VNFM.
C: Especially in the pre-ONAP days. And the APIs for SOL003 and SOL005 were very, very poorly defined at the time. The concept of where network creation lives gets really, really political: is the network tied to the lifecycle of the VNF, so it's part of the VNFM?
C: Or is it a shared resource in OpenStack, VMware, the cloud, wherever? If not, then it goes into the creation done by the NFVO, and you just get into these things. So then, when we come and say, vendor A, take your NFVO and put it on vendor B's VNFM, I can just tell you, through personal sleepless nights, it's rough. And it's not really the vendors' fault, because technically they were building and designing to the specifications.
F: Yeah, but that's what we should prevent from happening again, I think. And yes, I agree that a big part of that problem was that the SOL specs were started very late. So, for the ones who don't know how these ETSI specs are built: the SOL specs are the real specs, the ones describing the actual on-wire protocol. All other aspects are just, you know, higher-level stuff that you can interpret in a thousand different ways.
F: So what we would consider an API specification, those are the SOL specs, and it's telling that only the SOL specs have an OpenAPI representation.
C: Now, I mean, the Kubernetes API, if you've ever just downloaded it and read through it, is like the most sprawling, massive thing on the planet. But at the same time we all consume it, and we're semi-successful with it, right? So I know it's possible for us to get this stuff in, and I think this is one of the things for the CNF Working Group.
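A small illustration of that consumability, assuming a reachable cluster in the local kubeconfig: however sprawling the API surface is, every resource type is consumed through the same verbs and the same client machinery, which is a big part of why it stays manageable.

```python
# Sketch: one access pattern (list/get/watch) covers wildly different
# resource types, so the size of the API surface stays manageable.
from kubernetes import client, config

config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

for pod in core.list_pod_for_all_namespaces(limit=5).items:
    print("pod:", pod.metadata.namespace, pod.metadata.name)

for dep in apps.list_deployment_for_all_namespaces(limit=5).items:
    print("deployment:", dep.metadata.namespace, dep.metadata.name)
```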
C: I think it's more that the focus is: what are the best practices for pushing something into Kubernetes and for consuming it from Kubernetes? This goes back to the tool sprawl comment, too. And this is something there's been a lot of lively conversation about in the CNF Working Group: the concept of developing a CNF using cloud native principles versus putting CNF practices in place as an operator in order to consume it. If the CNF is developed with the interfaces in a good specification, as you're pointing out, Gergely, and I've done my homework as an operator to have the infrastructure and the orchestration in place to consume it appropriately...
C: ...then some of the vendor secret sauce can be put in place but still be consumed. It gives the vendors a chance to still maintain their competitive edge, right? At the same time, you guys are developing; you're spending a ton of money on intellectual property, on research, etc. And I know we're in an open source group right now, but not everybody just wants to give away all their money-making secrets, right? So how do you keep your competitive edge?
C: This is why I stay engaged in the TUG, right: to talk to other providers and to talk to vendors. It's like trying to emulate something that happened with ONAP. ONAP was this 1.5-million-line code dump at the beginning; it was all over the place, some things worked and some didn't.
C: One thing, though, where ONAP can really be seen as a success is how the conversation amongst those developers eventually evolved into them collaboratively coming up with the kind of best practices that Taylor's described for CNFs. I feel like it slowly happened organically over there, where the collaboration, the "hey, this is a good idea, this is a bad idea" stuff, really started to take hold.
E: A couple of comments. Yep, MANO, or at least the reference implementation, is very tied into the Canonical ecosystem, and it's very difficult to bring it out of it. My bigger point is this.
E: We have all these NFV-specific things that we require, and I think we need to involve the cloud providers in this effort. Because today we use AVX-512 or SR-IOV, but we cannot natively use AKS or EKS or Google's engine, because the specific requirements that we need are not available directly in these clouds.
E: So we have to install our own Kubernetes platform, or something like that, on top of the cloud; even though they provide one natively, we manage it ourselves. So I think, to be truly cloud native, it would be good if we involved the cloud providers and had these capabilities built into the cloud infrastructure itself.
E: These are some of the things we are facing right now; I mean, we are the CNF vendors.
C: So yeah, I mean, that was one of my requests. I've been trying to bug guys like Hawking and a few others at Google; I saw Robbie floating around, and I'm hoping Robbie will bring some of the core AWS guys in. But there are certain vendors that are pitching the idea of running my packet core's control plane in a public cloud and then running my user plane on-prem, right? So, to your point, I really feel like...
C: ...if we're going to talk about large-scale architectures, best practices, and infrastructure decisions, having both sides of the coin is important, because they look at things differently than we do. One of the things, though, is that I feel they also need a common understanding with us on some of the insane regulatory restrictions we have in place. There are times when someone in one of the CNCF groups will be like, "you know, Jeffrey..."
C: "...why are you so hung up on network segmentation?" And I'm like, well, because I have a legal obligation to do that. It's not within my design matrix to decide whether I want it or not. But I mean, we might get a lot of people from Amazon and Google who tell us we should never do SR-IOV; I don't know, right? And I'm like, that's exactly why we want those people there.
C: We want different paradigms, especially in the TUG versus the CNF Working Group, where we're trying to deliver some real, tangible benefits right now and help people deal with the right now. I really see the TUG as a place for us to ask: where should we be in five years? That's one of the drivers, right? If my architecture in 2025 looks like what it does today, just with different components, then we've probably missed the boat somewhere.
C: I mean, the big thing too, and I know someone put this in the CNF Working Group, is putting some of those benefits into some of our best practices, right? I'm always harping on the requirements thing, but I'm obviously here, and I spend a lot of time in these groups, because I do see the value.
C: I want uptime; Kubernetes self-heals containers, right? And we got into this discussion: is the uptime tied to the container itself, or is it tied to something else? In my mind it's "is my service reachable?". I mean, the whole point of doing ReplicaSets and Deployments, using self-healing, autoscaling, etc., is that you're more concerned with service uptime than with infrastructure uptime in this space. And I'll be honest, once again, about the operational folks at my company.
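A minimal sketch of that service-uptime orientation, assuming a reachable cluster; the name, image, and probe path are placeholders. The Deployment declares a replica count and a liveness probe, and the controllers keep the service up by replacing failed containers, wherever they land.

```python
# Sketch: declare desired state (3 replicas + a liveness probe); individual
# containers are disposable, the *service* is what stays up.
from kubernetes import client, config

config.load_kube_config()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-cnf"},                    # placeholder name
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "demo-cnf"}},
        "template": {
            "metadata": {"labels": {"app": "demo-cnf"}},
            "spec": {
                "containers": [{
                    "name": "demo-cnf",
                    "image": "example.com/demo-cnf:1.0", # placeholder image
                    "livenessProbe": {                   # failed pods get replaced
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "periodSeconds": 5,
                    },
                }],
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```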
C: That's an interesting conversation for them to have, because in their world they are infrastructure providers: they provide network infrastructure, server infrastructure, virtualization infrastructure, and their SLA is to make sure that the app level always has availability. So we'd like to show, in the best practices, how that translates into a world where I'm the one actually providing the container and Kubernetes infrastructure and I'm less concerned with the application layer; where it's like, hey, these three containers died and then got rescheduled over on these servers.
C: And that's okay, because you never had any outages in the end service that you're providing. I know that's probably obvious to most of the people on this call, but for a lot of the legacy network and server world it's not something that just pops right into their head, because they may not be looking at the service layer.
C: They're looking at a bunch of Kibana dashboards that just showed a bunch of containers getting torn down and redeployed over here, and their stomach drops for a moment. So anyway, I'm going to shut up for a bit, because you guys all know how to read, and let the rest of the group guide the discussion on what we would want to do as far as identifying some of the challenges we have right now.
C: I even think it would be a good idea to mark things, like: this is a challenge the Telco User Group sees in general, and we think this might be something the CNF Working Group or the CNF Testbed could help us solve. We could identify some of those things here and then leverage the other groups to help us with them.
A: Yeah, I want to circle back a little bit. The reason we reached out to Jeffrey on this was that we keep having discussions around: what are the motivations, what are some of the whys? Whether it's an operator actually doing operations, and whether that's a service provider's internal ops team or a group that's working with the service provider and providing those services, or whether you're looking at it from the CNF developer's side: how to consume the resources, or how to actually build the internals.
A: So it always comes back to: what are the drivers? And we either get something very, very high level that doesn't apply closely enough to what we're trying to move towards, as far as the cloud native type of thinking, like, what's our reason here, or it's very specific. So the idea was to try to get something, hopefully a white paper, that addresses those whys and maybe lists some requirements.
A: Some of the things discussed just now were items that maybe don't work on a cloud provider, where the cloud provider actually doesn't want to support them. And the reason may be how they think about designing the application, I'm going to say the thing that's actually running, not the specific network functions or anything else; maybe they're trying to communicate that the entire design should be different.
A: Or maybe not; maybe they just don't have the support. But if we can come up with a list of those underlying needs, like what the drivers section of this document has, and then some of the specific current requirements, we could start mapping them out, as Jeffrey was saying there at the end. Is this something that we're even ready for within one of the current groups? Is this something from an applications perspective?
A: We can talk about the CNF Working Group and say: here's a best practice, ready today, that'll help with third-party integration. How can you do that? Which best practices help with that? It's probably not one thing, it's probably many things, but if we have this, then it can be applied wherever. It's probably a document that would be useful in other groups, too; I could see it for Anuket within the RA2 efforts, where it would be a supplement.
A: There's already been work there, but this would be more content for that. Or maybe the CNF Testbed, as Jeffrey pointed out; that's a whole toolset for experimenting with various cloud native and Kubernetes technologies running on a base, vanilla Kubernetes that you can currently deploy to Equinix Metal, previously Packet. The idea there is to take whatever comes out of this paper.
A: But that's the idea: make this a useful collection of the challenges, the drivers, all the why behind it, that we can use in all the other groups.
G: Taylor and Jeff, my name is Ike Alison. I would just like to share one thing with both of you. First, I have a presentation that may provide you with a little bit of insight about 5G, a little bit about the core, and then how you actually add alternative virtualized technologies and applications, and how the environment of these alternative virtualized technologies is handled without actually going to MANO.
G: The Cloud Native Computing Foundation technologies are maybe actually very much related to 5G slicing, because when you start looking at the slice subnet instance, it's very much connected to a set of network functions and the necessary resources, and then the compute, storage, and networking is added. Plus, with a 5G terminal you have support for eight PDU sessions, which is actually simultaneous support for eight slices.
G: Maybe I will first share with both of you a link to a presentation, if you find that it might be useful; it provides some kind of insight about the telco side of 5G and then elaborates on how, on top of that, you can add the cloud native computing technologies.
G: It also elaborates on the network data layer for the network function applications, where the context of the application data is separated from the business logic and stored as structured and unstructured data, and where the network functions actually provide services that can be bought, consumed, and produced at the same time. And maybe then you can get a bit of the whole picture.
G: If you find it to be useful about the telco side and, a little bit, the connection to cloud native computing technologies, maybe I can present it; or, if not, we continue with the work.
A: I would be happy for you to drop the link right in. You can drop it right into the meeting notes; you mean in the chat? You can drop it in there, or in the Google Doc meeting notes; I'll drop that into the Zoom chat as well. If you put in the link to the presentation, then I'll put it in the public meeting notes.
A: Sorry for this, guys. And I do think it would be great if you want to talk more about that, let's say within the Telecom User Group, talking about the relationship between 5G and cloud native. What I'd say, and you can go read about this, it's mentioned many times in different articles and such, is that 5G has adopted many of the methodologies that you see in cloud native; and cloud native, of course, is an aggregation of many different principles and methodologies.
G: There are, in 5G, references to the edge, whether it's the cloud edge or the cell center and the cell edge. But then in 2017, as 3GPP started developing Release 15, the ETSI MEC group renamed mobile edge computing to multi-access edge computing; I think it was in early March. And if you look at February, just the month before, 3GPP actually made three revisions of Release 15, and they made some changes.
G: Suddenly the customer user experience is no longer defined only by throughput, as it was with 3G, 3.5G, and 4G. It is now a combination of mobility, latency, and throughput; those are actually the three variables that define the customer user experience. And mobility is no longer only your terminal, your cell phone.
G: You have units that are within a constrained area; think about self-driving cars that actually have a predefined route and only travel along it. And if you look at Germany: Mercedes, Bosch, BMW, Volkswagen, they're getting private 5G licenses. And then you have the fourth group of mobility, which is your cell phone.
A: I think this would be a good follow-up discussion. We have 10 minutes, and I want to make sure that we have enough time for anybody's feedback.
A: I do think that, if we're going to take this from a "what is the Telecom User Group", so, the Cloud Native Computing Foundation Telecom User Group, what are we trying to do, then I would take that perspective on how to pull in 5G. And what I would look at is that this is related to transitioning.
A: So, how are you going to transition any brownfield to start taking on any new best practices, any new technology? There are some people that would want to embed it within what's there; that's fine. Some people would say: here's what we have, but we want to move to something else. Either way it's a transition, so you're trying to map the terminology and understanding between these. And this seems related to what Jeffrey was doing with regard to telco challenges and drivers, but maybe as a very specific one. So we could probably even have a new white paper.
A: One that's just focused on how you relate 5G, what's currently happening, to any type of, whether you say Kubernetes native, going towards something that's more Kubernetes native, or cloud native in general: how do these fit together? That could probably be a white paper in and of itself, but it's at least a bullet point within the challenges Jeffrey was talking about. So you say: a current service provider is already deploying 5G technologies in a 5G network and starting to utilize them in various places.
A: So how do they do that while looking at potentially new, maybe even conflicting, processes and methodologies, as well as technology? How do they merge those together? I think that would be a challenge listed within the white paper Jeffrey was putting forward, and then maybe something more extensive. But I'd be happy to hear more; I'm sure that other people would like to discuss this more in a future meeting.
D: Oh, maybe just a quick comment. It was very interesting; thank you, Jeff. I guess my quick comment was that you made a reference to the APIs being very, very large in Kubernetes, very sprawling.
C: Just because I don't want you to go down the wrong path there: I'm saying that despite the fact that it's big, it's manageable, and all of us have figured out how to consume it. What I was saying is that in the NFV space there are lots of weird, siloed, vertical APIs that are built for very niche things and require a lot of detailed knowledge of what the end deployment is going to look like. One of the drivers is getting to declarative deployments, so yeah.
C: It's actually the opposite that I'm saying: despite the fact that the API is very large and all-encompassing, in my opinion it's pretty consumable on the K8s side. Whereas in previous NFV stacks, some people were using Swagger, other people were using homegrown APIs; it was just kind of all over the place, and you, as the individual consumer, had better know in granular detail what's going into that NSD. The layer of abstraction that K8s brings through its APIs...
C: ...I feel is one of the reasons why it's been so successful. So I was actually saying we should figure out how we continue to emulate that and avoid some of the traps we got into in the previous iteration with NFV.
D: I think what I'm trying to fine-tune here is the language to use, because the way I see it, the paradigm has shifted from talking about APIs to a scheduling paradigm, which is declarative; and maybe that's what you mean, right? If you look at it, I think even in the Kubernetes documentation they are called APIs, but they're not really APIs.
D: They're data structures, right, that eventually, mostly, are expressed as YAML manifests; of course, behind the scenes it's, you know, Go creating these resources on the API server. But I think that's the shift, right? If you compare, for example, Kubernetes to OpenStack: in OpenStack you do have APIs for the various services, whether it's Nova or Neutron, and all these services have their own APIs that are documented. But in Kubernetes that's not the important part; Kubernetes is itself extensible, and the API server is actually fairly simple in the end.
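A small sketch of that distinction, reusing the hypothetical demo-cnf Deployment from earlier: the object you interact with is a record, so "configuring" means reading the record back, editing it, and letting the controllers reconcile declared versus observed state.

```python
# Sketch: declarative resources are records, not remote procedure calls.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="demo-cnf", namespace="default")
declared = dep.spec.replicas           # what we asked for
observed = dep.status.ready_replicas   # what the controllers have achieved
print(f"declared={declared} ready={observed}")

# "Configuring" is just editing the data structure; controllers do the rest.
dep.spec.replicas = declared + 1
apps.patch_namespaced_deployment(name="demo-cnf", namespace="default",
                                 body=dep)
```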
D: It's those resources and those data structures. So it's a shift in language, but it could be important, because in some of the groups that I'm working in, if you look at the work being done in O-RAN and other groups, there are a lot of people bringing up this issue of "yes, we need to specify the APIs", because we deal with OpenAPIs.
G: Tao, you're actually very, very right, because you have a shift from process-centric to data-centric. I mean, if you start looking at how you will utilize machine learning for closed-loop automation, you actually connect different architectures, because you have GANA, you know, the Generic Autonomic Networking Architecture, and also, with ETSI ENI, Experiential Networked Intelligence.
D: I agree entirely, of course. And I'll also point out that this change, this move to data, has been going on for a while. For example, the YANG models are the interesting part, not the NETCONF APIs, right? Whether it's NETCONF or RESTCONF, that kind of decision is not the important one; the important one is the YANG models. And TOSCA too, to an extent: the inroads it makes are in the way of modeling our resources in the various clouds and the various layers.
D: So Kubernetes, I think, fits in very well with this move into this data paradigm. So anyway, this may be expanding a little bit on a comment that you made, Jeff, but hopefully it's supplemental.
C: So here's the thing, right, on what you're talking about: me and a couple of other developers in our company have been pushing really hard on the concept of OpenConfig, standardized YANG models, with our own little translation layer in between, to get rid of that tool sprawl. So we write our own common data structures at the top, mostly in YANG, with some other modeling languages as well, and then we push down the corresponding payloads.
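A rough sketch of the shape of that translation layer, with entirely invented field names and vendor formats, just to show "one common model at the top, per-vendor payloads below"; a real system would derive the top model from OpenConfig YANG rather than a hand-written dict.

```python
# Hypothetical common-model-to-vendor translation; both formats are invented.

def to_vendor_a(intf: dict) -> dict:
    # Vendor A wants a flat JSON payload (invented format).
    return {"ifName": intf["name"], "ipv4Addr": intf["address"],
            "adminUp": intf["enabled"]}

def to_vendor_b(intf: dict) -> str:
    # Vendor B wants CLI-style lines (invented format).
    state = "no shutdown" if intf["enabled"] else "shutdown"
    return f"interface {intf['name']}\n ip address {intf['address']}\n {state}"

# One common, OpenConfig-like record drives both back ends.
common = {"name": "eth0", "address": "192.0.2.1/30", "enabled": True}
print(to_vendor_a(common))
print(to_vendor_b(common))
```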
C: On kind of hand-waving the APIs away: like you said, YANG, and how you structure and build services, is way more interesting than NETCONF itself. But at the same time, without the transactional nature and the interface that's provided for you, you still have to have something that can consume those data structures; not every tool is capable of consuming a data structure.
C: But I mean, I've pushed YANG into things that handle it very poorly, that don't have the concept of the transaction. And if you don't have that, then you get into these issues where I model how I want BGP to look, but there are a lot of CLIs on a lot of different network operating platforms that do not accept transactional configuration in that manner; you have to go in and turn BGP on as a process before you can configure BGP.
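For context on the transactional model being contrasted here: on a NETCONF device with a candidate datastore, a whole change set is staged, validated, and committed atomically. A minimal sketch with the ncclient library; the host, credentials, and the empty config payload are placeholders.

```python
# Sketch: NETCONF candidate/commit applies a whole change set atomically,
# unlike CLIs that require enabling a process before configuring it.
from ncclient import manager

EDIT = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <!-- device-specific, YANG-modeled config goes here (placeholder) -->
</config>
"""

with manager.connect(host="192.0.2.10", port=830,   # placeholder device
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=EDIT)  # stage the change set
    m.validate(source="candidate")                  # device-side validation
    m.commit()                                      # apply atomically
```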
E: Yeah, but isn't that our goal? This is our to-be state, where you don't have the transactional model anymore.
A: Okay, we're at the top of the hour. Thanks, everybody, for the discussion. We're going to switch over to the CNF Working Group for anyone that wants to join us there, and I'm going to drop the link for the meeting notes in the chat if you don't have it. See y'all there. Thank you.