From YouTube: Kubernetes SIG Network meeting 20210603
A: All right, welcome to the Network Plumbing Working Group meeting for June 3rd, 2021. This is Doug Smith; I am standing in for Dan Williams. We've got a pretty packed agenda today, so we'll try to rip through what we can quickly, and then we've got a presentation on the External Network Operator. Proceeding to regular business: we have one maintainer candidate and one project candidate today.
A: All right, so the first candidate is Ivan from NVIDIA, for the Helm charts.
A: Right, excellent. And for project candidates, we've got the Accelerated Bridge CNI being proposed. Yuri, would you care to give a brief overview of the Accelerated Bridge CNI?
B: Sure. Hi, my name is Yuri; I'm from NVIDIA. We would like to propose Accelerated Bridge CNI to the Network Plumbing Working Group. Accelerated Bridge CNI is a fork of SR-IOV CNI; the main difference is that it relies on the Linux bridge API, and it also reuses some parts of SR-IOV CNI. To use this CNI you need supported hardware right now.
B: There are a couple of requirements: the hardware should support SR-IOV in switchdev mode, and the hardware drivers should support Linux bridge offloading. As far as I know, no vendor supports this at the moment. For NVIDIA, we plan to support it in our driver in kernel 5.14. Our implementation doesn't use any proprietary or custom APIs; all of them are common. So we would like to move this project to the Network Plumbing Working Group and continue developing it together with the community. There's some API description and other material as well — so, very briefly, that's all.
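For context, a CNI plugin like this is consumed through a NetworkAttachmentDefinition. A minimal sketch of what such a net-attach-def might look like follows; the plugin type name and the bridge/VLAN/IPAM values are illustrative assumptions, not confirmed details of the project:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: accel-bridge-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "accelerated-bridge",
    "bridge": "br0",
    "vlan": 100,
    "ipam": {
      "type": "host-local",
      "subnet": "10.10.0.0/24"
    }
  }'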
A: Is there anyone that would vote against this project candidate, or anyone that needs more time?
A: All right, it sounds like we will vote to both accept Accelerated Bridge CNI and also accept Ivan as a maintainer for the Helm chart, so thank you very much on all fronts. Hey Yuri, is this a situation where you'll transfer the repository, or should we create a new repo for it?
B: In general, we have maintainers at NVIDIA in my group, and we will discuss it and try to choose the best solution. I think we can just transfer the repository to the Network Plumbing Working Group; I'm not seeing any issue with that.
A: Okay, I just threw my email in chat. If you decide that you can't transfer it and you need a repository created, you can get in touch with me and I'll handle that.
A: Once it starts — yeah, actually, that probably is a good idea. And of course, in the upcoming weeks, if people decide that they'd like to become maintainers, that's also possible.
B: No, no — okay, let's.
A: Yeah, no, I appreciate that, Billy, thanks — that was an oversight for sure. So, to bring it to a vote: is there anyone who would vote against Yuri being a maintainer for the Accelerated Bridge CNI?
A: All right, that concludes the regular business. Next up — this one, I think, is pretty minor, but I noticed that across the entire Network Plumbing Working Group GitHub namespace there's lgtm.com integration, and I was hoping that we could disable that across all of the repositories.
A: It's totally fine if people want to use it per repository, but it runs too broad a test. So, for example, Multus will fail on it — it doesn't pass the JavaScript check — which, well, I'm not disappointed by that. But I'd like to turn it off and make it per-repo, and I basically want to let people know about this before I go and disable something that people were relying on for other projects.
A: As far as I know, lgtm.com runs some kind of linting on your code and does some type of conformance test, and that's about it. So it'll give you some output on a pull request, saying you passed or didn't pass this lint/conformance test of some variety.
A: I'm not sure if it was the default when we created the org or what, but I didn't notice it until Multus had gotten moved, and then it was fine until it started failing the JavaScript test. But that's about all the context I have. Is anyone using it specifically on any repos right now — relying on it?
A: And I mean, it's not a bad thing. However, it does cause — for example, for Multus — it's just annoying: you go to see whether the CI has passed, and it says the CI has failed one job, and you look, and then you go, "oh, it's a test that I can ignore." So you can ignore it, but we lose our at-a-glance view of whether a pull request passed CI or not, because every pull request is now failing one job in CI.
A: So that's it, right — if no one is using it; otherwise I'm going to set it up per repo, and we can re-enable it on the repos that may rely on it if we're missing somebody today.
A: I've got the next topic too, and hopefully we won't go too long on this, but I wanted to bring up VRFs on secondary networks. I searched the notes, but I only found one mention of it, at the end of last year, and I wanted to see if anyone remembered other discussions, or whether we came to any other conclusions on this.
E: So at that time — I mean, last year at that time — we had just started to have the VRF plugin in CNI as a reference CNI plugin, and so, of course...
E: As I believe we agreed at that time, our working group is focusing on the CNI plugins and the related stuff, and last year we only had the VRF plugin — so that's why we did not have so much discussion.
A: Thanks so much — all right, cool. Just for some context here: this just happened to come up in a conversation between Michael Cambria and myself, and I went to go see if we had any notes, and I didn't, so I was looking for a reminder. I'm trying to remember what the rest of the context is, but that's what I remember at the moment.
A: Well, cool! We don't have to dwell on this one unless anyone has any other thoughts. Otherwise I can pass it off to Adrian C. Do we have Adrian on the call?
A: At any rate, his question is: which group name prefix should we use — the DNS-like namespace? He says that for the multi-network policy, the net-attach-def CRD uses k8s.cni.cncf.io, and he proposes some alternatives.
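For reference, the prefix under discussion is the API group of the net-attach-def CRD, which appears in the apiVersion of the objects; a minimal example of where it shows up:

# API group as used today by the net-attach-def CRD:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: example-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "ipam": { "type": "dhcp" }
  }'
# Pods reference such networks via the annotation key
# "k8s.v1.cni.cncf.io/networks", which carries the same prefix.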
D: I have a similar topic — the next topic is quite similar — but over there we already have some confusion around that one, which has created some additional churn. We have so many people already using the network attachment definition CRD that changing it is going to be a big deal.
A: Yeah, I think that's going to be non-trivial for sure. If I recall correctly — and I can try to look this up in the notes too — I think the gist was that cni.cncf.io was already out in the wild, and we were figuring that this is the Kubernetes-specific implementation of CNI as it relates to the CNCF — as CNI is part of that, I guess.
A: I moved your topic for the SR-IOV operator into this topic. Is there more commentary?
E: Oh yeah, yeah — sorry, let me add some comments on that. I understand that once we bring in some new project, I suppose we could use the new org name. But some stuff is already out there: imagine the Multus network annotation, or the multi-network policy — that org name has already been released for some time.
E: We need to think about how to migrate the old-version stuff to the new version. So maybe we could decide which name to use, but at the same time it should not be a strict change request to the currently running projects — that is what I hope.
A: We also could officially become a CNCF working group. I don't think it's a heavy lift — it's just paperwork that I have not followed up on — but if people feel strongly about it, we certainly can bubble that back up; I have a contact at Red Hat who is part of the CNCF governance org and understands how that process works. So I can bring that back up if that was a concern. I mean, I think at the time when we came up with it...
A: ...I don't think we were considering putting "network plumbing working group" in the name, and the scope was fairly narrow at that point, when we were focused purely on the net-attach-def itself; it has kind of expanded in scope since.
D: No, that's fine, but just a bit of context: there was a comment — there's a link — doubting whether that project could use that group, and basically I would like to have consensus from this meeting, like an approval for that. That's one part; and the other part is, it looks like we would have to decide, given what we just discussed, whether we should consider...
D: ...as you just mentioned, if we want to push forward — if we were to move to the new group name, then it should be that, an npwg-something group — or we just stick to what we have right now. Initially, someone created a PR that used exactly the group that was in use in Multus and the network attachment definition CRD. So that's the proposal; I'm not stuck on any name, by the way.
D: It's just a matter of — the SR-IOV operator is to move away from the kind of vendor-specific one, and there is a discussion of how that can be done in that PR, which is not the topic here. I just want to get confirmation that it's okay for everyone inside the network plumbing group in GitHub to use that group — whatever we decide on, of course. And right now, I think, the default is this cncf.io one, right?
A: Yeah, correct — so yes, cni.cncf.io.
A: Yeah, I mean, this is a fair change, of course, because part of the charter of this group is to remain vendor-neutral, and also the namespace that you're using is the one that we have historically used. So there's that. We can put it to a vote.
A: Certainly, one of the items in the governance is that if, for a given project, you can't come to a consensus between the maintainers, you can bring it to the group for a vote. So I guess, to me, there are kind of two parts to that: the namespace that's used, and the name itself that's used.
D: No, and that's perfectly fine — sorry, that's something that the PR discusses. I think that's the secondary part: how, technically, can it be done. I just want to confirm here that we have the mandate to use that group. This is my kind of concern, and that's what I'm bringing to this meeting, so that we can say that everyone inside this network plumbing group in GitHub can use this group — and basically that's it, right?
D: As for the voting that you're referring to — if it gets to that, I think it should be done in the Monday meeting, where we have the discussion about the SR-IOV operator.
A: Okay, this is awesome — thanks for the clarification on the scope here, and thank you for bringing this to the group; and also, please definitely bring it back to the group if you can't come to a consensus on the other considerations. So I think let's open the floor, if anyone has any comments about using a different namespace. My introductory comment would be...
A: ...I think that it's going to be difficult to change the one for net-attach-def. I think we're going to see that around for a while, so my general recommendation would be to stick with it. But I'd like to hear if people have another opinion, because naming things is difficult.
C: I guess my opinion would be — unless anyone feels super strongly about it — leave it as it is. And the second point is, any repo underneath the Network Plumbing Working Group should be allowed to use whatever we decide on, whether it's what's existing or whether we change the name.
E: To me — yeah, I suppose for now we should keep the namespace we are already using; it's pretty hard to change it suddenly. But on the other side, once we launch a new project, or something newly published in our working group, at that time we should use the newer name; and then, maybe, in the future...
E: ...if we publish the net-attach-def spec in a new version, with a major version change, at that time we could think about changing it for the existing stuff. I mean, yeah — if a big major change happens, then the current projects can change the name to the new default, but otherwise...
A: Yeah, I think that's correct. It sounds like, of the three voices, it was plus-ones for using the current name — and also, per Billy's mention, you can use an arbitrary subdomain.
A: All right, excellent. Any other commentary before we move on to this presentation from Alok?
A: Yes, let's — let's carry it over to the next meeting.
A: All right, I'll stub that in. Hey, Alok, you've got the floor. Let me make sure that the permissions allow you to share your screen.
H: We started up last year with a study internally within Ericsson about how we could do the orchestration and automation of, primarily, the secondary external networks — the Multus secondary networks, basically, to begin with — and we came up with a proposal and a concept called the External Network Operator. It's a Kubernetes operator which can manage and automate such secondary networks dynamically, whenever a CNF instantiation occurs and requires some additional networks to be configured before it becomes fully functional. That was a bit of the history; and then, from Red Hat...
H: ...I think Tal Liron started a discussion in the CNCF — there we have a channel for the Telecom User Group — and that kind of matched up with the concepts of what we are trying to achieve and trying to fill in, so we started up our discussions there as well; and later on we decided to have that discussion in the CNF Working Group.
H: That's another working group under the CNCF umbrella. And then we also had some discussions with the Red Hat folks — Doug was also there last week — about the modeling and the API: how it should look, and what the different constructs are that are involved, from both northbound as well as southbound. We can park that, maybe, for some other time as a follow-up. So that's the background and a bit of the context.
H: As we all know, by default you get a single network interface, and the standard LoadBalancer service when it comes to external networking, which doesn't fulfill the need; and there are also certain performance requirements that the telco VNFs require. The interfaces are based on the kernel IP stack, which doesn't fulfill those requirements, and the proper network separation doesn't exist within the default service model.
H: So Multus is the way to fill that limitation for the telco applications: it allows multiple external networks that can be attached and consumed through the network attachment definitions given in the pod manifest — we're all aware of that. But the problem, or the gap, which we have identified and are trying to fill, is that we still have a pre-configured fabric underneath: the underlay networks have all been configured, primarily in a static manner, on a day-zero kind of basis, when you deploy your Kubernetes clusters.
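The consumption pattern referred to here is the standard Multus one: the pod manifest lists the secondary networks by net-attach-def name through an annotation — for example (network and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: cnf-example
  annotations:
    # Attach two pre-created secondary networks to this pod
    k8s.v1.cni.cncf.io/networks: signaling-net, media-net
spec:
  containers:
  - name: app
    image: registry.example.com/cnf:1.0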
H: You then have to maintain it manually during the life cycle, so it's sort of — you could say — an appliance kind of model: we have to be aware of all the applications that will be deployed and what their network requirements will be, and we have to plan with the future in mind — the expansion and scalability aspects, what VPNs they will be requiring in the future — and plan accordingly when it comes to configuring those underlay networks at the fabric level.
H: The admin can then instantiate the network function, which can consume the created tenant networks. So that has all, like I said, been done manually — or at least the underlay networking part is still done mostly manually, or through some bash scripts, or with terminal commands directly.
H: What we propose as a solution is the automation of those tasks through the External Network Operator, which will automate the external network creation — basically, the whole life cycle of those external networks can be managed through that operator. Like I said, it's a Kubernetes operator that runs inside the Kubernetes cluster to automate those external networks.
H: It has a pluggable architecture, which supports multi-vendor fabric agnosticism: it allows different vendor fabrics to be plugged in through the corresponding plugins for configuring their fabrics for the underlay networks. It also follows a two-facade architecture: one internal, managing the cluster custom resources — like the net-attach-def creations, based on the constructs that the northbound API expects — and the external one, which manages the underlay networks and the fabric through the corresponding plugin.
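As a rough illustration of the northbound side described here, an operator like this exposes custom resources describing the desired external network. The sketch below is hypothetical — the API group, kind, and fields are invented for illustration and are not taken from the ENO project:

apiVersion: eno.example.com/v1alpha1  # hypothetical API group
kind: L2Service                       # hypothetical kind, echoing the "L2 service" object mentioned later
metadata:
  name: tenant-blue-net
spec:
  vlan: 210                 # underlay VLAN the fabric plugin should configure
  subnet: 192.168.10.0/24   # address space handed to IPAM
  fabricPlugin: ovs         # which southbound plugin manages the fabric
# On reconciliation, the operator would program the VLAN on the fabric through
# the southbound plugin and create a matching NetworkAttachmentDefinition.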
H: So let's revisit the same workflow with the introduction of ENO, with the sense of automation — the end-to-end flow. We start with some orchestration layer — it could be an NFVO/MANO layer — which onboards the CSAR packages; those are the TOSCA-based templates that will be onboarded when we have to instantiate a network service, and that could then translate...
H: ...and then, in step two, those feed into the ENO northbound API, and based on those objects ENO then triggers the fabric orchestration — configuring the VLANs in the data fabric using the corresponding fabric plug-in through the southbound API. Once that has been done, ENO can then create the net attachment definitions, followed by the NFVO or some other orchestration...
H: We have, as I said, modeled how and what constructs it expects for the orchestration of those networks — I put that as a link in the meeting notes, if anyone would like to see it. And there is a component called the fabric-agnostic operator, which basically has an interface towards the southbound API of ENO and calls the corresponding fabric plug-in, based on the fabric that is used in the customer deployment...
H: ...or a controller, like an SDN controller, to configure that fabric. We are currently running a PoC for ENO, where we are testing it out with the OVS bridge, and we have developed a dummy OVS plugin, if I can call it that — I put a link to the GitHub repo for that as well. The plugin itself is still a work in progress, but yeah.
H: I mean, if Multus supports that CNI, then I think that can be doable and would be supported as well, yeah.
I: That's what you call the fabric — any vendor's networking solution where you basically have some way to coordinate the server-to-switch port on the fabric; and then, when a pod gets scheduled, CNI gets called, and when it's time to attach networks to the pod, you can go to the fabric and pull up those networks and then attach them to the pod in the expected way.
H: Yes — we're hoping for some support, and that this proposal could be the way to orchestrate the Multus secondary networks, with the intention to make it a sort of de facto API to configure such external networks — basically for managing and automating those external networks that are required by the network functions during their lifetime — and to integrate it with the cluster life cycle, as I mentioned in the beginning.

I: Will you share these slides?
F: So I have a question: is the intent of this to help with bare-metal clusters?
H: Basically, it has an interface with your real fabric. In the case of bare metal — if I take the example of fabric A — you have the southbound interface from ENO, using the fabric plug-in, which then has a direct interface with the fabric to configure those networks as the underlay.
F: Yeah, so basically it's a Kubernetes primitive where I could define a VLAN, I can define a subnet, and then that's going to go and configure it on fabric A, B, whatever it is — right — and then it's also going to create the network attached definitions and everything like that. But what about IPAM? Like, do I need to know, as an operator — okay, this is the VLAN, this is the IP space that I want to use — ahead of time?
H: Yeah, so all those attributes we are expecting to be known, or filled in, by the network architect. We are expecting them from the northbound API objects — it might not be readable here, but, let's say, the L2 service has objects like the routes and the subnet, and those objects expect some attributes to be filled in by the network architects. So that should be well known beforehand to configure the IPAM.
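Concretely, the subnet and route attributes being discussed end up in the IPAM section of the CNI config (JSON) carried by the net-attach-def; a standard host-local example with illustrative values:

{
  "cniVersion": "0.3.1",
  "type": "macvlan",
  "master": "eth1",
  "ipam": {
    "type": "host-local",
    "subnet": "10.20.0.0/24",
    "routes": [
      { "dst": "172.16.0.0/16", "gw": "10.20.0.1" }
    ]
  }
}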
I: But you do L2 stitching, right, and then you assume that IPAM is there — what's there, sort of, from the spec from — I can never remember it — the plumbing group specification.
I: That's what it looked like to me; and then you can inject routes into the VRF interface then, like the one that you had on your GitLab.
H: No — the VRFs are at the gateway level, so the VRFs will then be stitched with the created VLANs in the fabric through, let's say, the orchestration layer or something.
H: Thank you for listening in, and yeah — I can get in touch with Doug about the follow-ups. Maybe we can plan something around understanding the data model, like I said, about what all the constructs are, and maybe that could even make it more understandable from the IPAM and routing side of things. I can go into details around that concept, and we can give a quick demo, maybe.
A: Great, all right! Well, thank you very much — I appreciate it. And thank you, everybody, for the time today. Excellent — so let's see everybody in two weeks; that will be on the 17th. Oh, and thanks — awesome, I will put that in the agenda too. All right, everybody have a nice day.