From YouTube: Multi-Network community sync for 20230830
A: All right, welcome to the next multi-networking community sync; it's August 30 and, to be honest, I don't have many topics for today. I just started refactoring the draft of our KEP. The one issue I ran into is how to name the other object that we want to define in the Pod. So I want to talk about that, and then I want to identify any other phase-one gaps that we might have missed.
A: If anyone else has any other topics to talk about, please add them; I don't expect this will take longer than usual. One announcement: we managed to get a talk about multi-network into the next KubeCon.
A: So Doug and I are going to give a talk; the topic is multi-tenancy using multi-networking. So yeah, we finally managed to get into KubeCon and bring whatever we do here to the whole community, so that's a win for us. Hopefully we will get some of the changes from this KEP merged in, so that we can talk about that. I'm not sure about the implementation, but we'll see how that goes.
A: In terms of the PodNetwork naming: we have our main object, called PodNetwork, which is basically the main handle for network representation in a Kubernetes cluster.
A: But then we want additional, more granular configuration capabilities, and that's what the PodNetworkInterface is supposed to be. I'm calling it that because I think that's the name I initially proposed, but the meaning at that point was slightly different: PodNetworkInterface was supposed to provide per-attachment information for each network interface inside the Pod, and it has since lost that functionality.
A: That functionality is probably going to be subsumed into the params that you can provide in PodNetwork. Maybe I will share my screen, so that you know what I'm talking about this time.
A: It basically has just two fields. First, the network it belongs to: the object has to belong to some network, and only one; it cannot belong to multiple, so you would just create multiple of these objects if a pod has to belong to another network. Second, a reference, a similar pointer to a CRD, like in PodNetwork.
A: So that's basically what the object has. Any proposals for what we should call this object?
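The object just described, a per-pod attachment with exactly one network reference plus an optional pointer to a parameters CRD, can be sketched roughly as below. This is a minimal Python model for illustration only, not the actual KEP schema; the field names (`network_name`, `param_ref`) and the reference shape are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectReference:
    """Loose pointer to a provider-specific parameters CRD (illustrative)."""
    api_group: str
    kind: str
    name: str

@dataclass
class PodNetworkAttachment:
    """Sketch of the object under discussion (field names are assumptions).

    An attachment belongs to exactly one PodNetwork; a pod that needs two
    networks carries two attachment objects, one per network.
    """
    name: str
    network_name: str
    param_ref: Optional[ObjectReference] = None

    def validate(self) -> None:
        if not self.network_name:
            raise ValueError("attachment must reference exactly one PodNetwork")

# A pod on two networks: two attachments, never one attachment on two networks.
attachments = [
    PodNetworkAttachment("pod-a-blue", network_name="blue"),
    PodNetworkAttachment("pod-a-red", network_name="red",
                         param_ref=ObjectReference("example.dev", "RedParams",
                                                   "defaults")),
]
for att in attachments:
    att.validate()
```

The one-network-per-attachment constraint is what keeps the object simple: multiplicity lives in the list of attachments, not inside any single attachment.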
B: I mean, I'm not opposed to PodNetworkInterface as it stands; however, I guess my one concern is that "interface" is maybe a little overloaded.
B: A PodNetworkAttachment, I think, is maybe good, because it is like the pointer to the additional configuration; yeah, it's right there.
C: "Attachment" is a more common, idiomatic noun across the entire community. It is used in, what's it called, Multus, and also in the CNI specification.
A: That was quick; you see, five minutes and we're almost done. I'm not sure anyone had the chance to look at the gaps or read through the doc as it stands.
A: On identifying any gaps: I was thinking about this a bit, and I think I did find one, one thing that we didn't mention. Going back to the PodNetwork spec: we have the parameters reference, the provider, and we have those options, right, IPAM. One of the IPAM options here is "external", where we basically don't do anything internally, but we do expect an IP; it will be provided by some external means.
A: Then "kubernetes", where we expect Kubernetes to do node IPAM for us; there we would expect a specific place, which we would have to define, similar to node.spec.podCIDR, where you have a per-node CIDR defined and you pull from there for your network. That's the Kubernetes style. And then the last one is "none", where we basically don't expect an IP at all.
A: So, basically, let's say it might be some sort of basic L2 connection, or maybe you want to run DPDK and manage the IPAM internally, or something like that; in other words, a case where I don't expect any IP from that attachment. And the gap I'm seeing is the "kubernetes" one, because the other two, external and none, basically do nothing from the core point of view. But this one is on us, and I think we didn't discuss it.
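The three IPAM modes as described (external: an IP is expected but assigned outside the cluster; kubernetes: core allocates from a per-node CIDR, analogous to node.spec.podCIDR; none: no IP expected at all) can be summed up in a small sketch. This is an illustrative model, not the KEP's actual API; the enum and function names are assumptions.

```python
from enum import Enum

class IPAMMode(Enum):
    EXTERNAL = "external"      # core does nothing, but an IP still shows up
    KUBERNETES = "kubernetes"  # core allocates from a per-node CIDR
    NONE = "none"              # no IP expected at all (plain L2, DPDK, ...)

def core_expects_ip(mode: IPAMMode) -> bool:
    """Whether the attachment is expected to end up with an IP address."""
    return mode is not IPAMMode.NONE

def core_allocates_ip(mode: IPAMMode) -> bool:
    """Whether core Kubernetes itself must do the allocation work.

    Only the 'kubernetes' mode puts work on core, which is exactly why it
    is the mode being punted to a follow-up KEP in the discussion.
    """
    return mode is IPAMMode.KUBERNETES
```

The asymmetry this makes visible is the gap being discussed: external and none are no-ops for core, while kubernetes is the only mode that requires new core machinery.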
A: How would we expand the current per-node CIDR story to support the "kubernetes" mode? I'm trying to convince myself that we can punt it to another KEP, since some of the related pieces, like the cluster CIDR, are still in motion. I'm not sure how much you folks are following SIG Network: Antonio is currently working on sorting out the cluster CIDR, and there is another object, called ServiceCIDR, that is trying to get into the core as an object.
A: So, basically, since those are still in flight in terms of how they're going to behave, I want to give myself the excuse that, okay, until that work is ready, this is something we will go after later, and that's what I want to put in the doc as well: until then, the "kubernetes" option will not be supported. It's going to be a future thing in a separate KEP.
A: I will keep it in just to indicate that we wanted to have it, and I will call out that we would do it after the cluster CIDR work lands together with ServiceCIDR, which is currently in flight. When that's done, we will create a separate KEP that will introduce this and connect it with that work.
A: That's one gap that I'm seeing, and an excuse not to pursue it currently. Any comments? Any other comments?
A: All right, I'll take silence as acceptance. Are there any other gaps that you can identify? I think someone last time mentioned gaps in how we are going to behave for the attachment, and some corner cases for those. I do remember; unfortunately, I didn't have time to look at how volumes handle it, but this time I added myself an action item to look at it, so that I don't forget.
A: I just lost track and couldn't look into this. But besides that, any other gaps that you might see for the first phase?
A: And to summarize: we're going to end up with two new objects, the PodNetwork and the PodNetworkAttachment. Those two objects will allow us to unify the APIs.
A: Those will be able to be specified in the pod spec, and then we will have basic handling of that object inside Kubernetes. Basically, that's what phase one of this change will allow us to do, I think, for starters. Is anyone seeing a path forward from that point? I'm not sure; Nathan, I think from your side, and Doug?
B: I think that we can move forward with what's here, yeah. Mostly my concerns have been addressed, chiefly that we don't have too many required parameters that go unused in my particular case, which I don't think we do; between the paramRef and the PodNetworkAttachment parameters reference, I think that we can fit in the pieces that we need. And, you know, I'm hopeful to see something evolve at a wider community level for CNI types of implementations; I'd like to see some evolution there.
B: I see a possibility that Multus can be modified to use this particular methodology, because I'm hopeful that people want to use this improvement that we've made. I think it's going to be a better experience, in particular for a user, or I should say a developer, who's writing a controller that has network intelligence. So yeah, I think we can work with this from a Multus perspective, and I can see possibilities for when we have a more holistic solution that takes into account, say, next generations of CNI and what we want to see as a community there. Something I'm certainly concerned about is keeping the rich community that we have created around CNI, and having this sort of framework and standard API to work with. I think that is really good for the community, really good for administrators, and really good for people creating a managed service: you have this common ground to handle these types of questions. So yeah, I'm okay moving forward from here. And at least my experience with the network definition work is that it's probably a reality that we get to a certain point where we start building some implementations, and people will come back to the table and be like, hey, this could use a tweak, that could use a tweak. So I think that we're on fairly solid ground that we have stomped on quite a bit here. So yeah, I'm okay to move forward from where we're at, and I'd be curious to hear what other people think too.
C: I was just going to say plus one for Doug; also, continuing with the current community approach will definitely be useful here.
E: Yeah, plus one on my end as well; I agree with Doug's comment. And regarding our implementation in Calico, I think we can definitely move forward, and we will probably try to move forward as soon as possible, because we were already doing something similar, but with CRDs, you know. So yeah, that's it.
A: One thing where I will play devil's advocate here: are we introducing just an API without any implementation? I think the example here being network policies, which exist today: an API that doesn't have a real implementation in the backend.
B: Well, I kind of think yeah, and maybe it makes sense to think about our sort of phased approach here, but I...
B: Yeah, no worries. I think that more people will become interested in giving their input when they can see the implementation in action.
B: I think it gives sort of a hands-on bent to it. So, you know, I don't think that we'll see revolution when we have implementations, but I think that we should be open to having evolution.
B: When we have implementations, I mean, I think it would have been foolhardy to try to create the implementations any earlier than now; maybe it's even still a little premature. And that's outside of some of the stuff that we're going to have in here, like services.
B: So we're going to see a somewhat non-linear timeline here with how implementations come in. So I would say I'm especially happy moving forward if we say, you know, this is our stable platform.
B: We don't want to change the whole shebang, but if people come back and are making implementations, and we can make a good case for tweaks, I think that's okay. And also, the lifecycle for Kubernetes itself works in our favor, right? Because we're not saying this is going direct to GA; this will be in, like, beta phase for a bit too, right?
A: Totally, fair enough. So the other aspect to this, and maybe I will add that element, is that we will have to have something in, like, the core Kubernetes implementation. We don't publish one, but if I'm not mistaken, Antonio has some CNI in the CI today for Kubernetes; there is some kube CNI or something. I'm not familiar with it; I'm going to look into that, and maybe this...
A: This change should implement some sort of multi-networking in that thing, so that we could at least test it. Because it is something that is open source and owned by Kubernetes itself, by the core, and that CNI is very basic, from what I know.
A: I need to look at it; I'm not sure what's there. So maybe that's at least one place where we would have to have some sort of reference implementation for all the others. So maybe I will look into that; I will add myself an action item, yeah.
A: I'm referring to some sort of a kubelet CNI, I'm not sure about the name, that is being used inside the Kubernetes end-to-end tests today, in the CI, continuous integration.
F: I just wanted to check, when we talk about not producing an implementation: I think that's fine up to a point. The bit that I'm less clear on is what happens in the kubelet. We can specify all this stuff in the Kubernetes API, all these resources come into existence, and somewhere beneath that something has to set them up. To get that information, something is going to have to read these resources and pass whatever information it needs through, and I was assuming that was the kubelet, and therefore that would be core Kubernetes.
A: Pete, that's something that we discussed as part of the CRI API discussion, because what you're calling out, when we looked at it, is purely on that side. Is there a SIG for CRI, Michael? I'm not sure; is there something like that?
A: Okay. What I'm thinking is that this is a change in the CRI APIs themselves, and it's fully over there. Down where the kubelet passes things to the CRI, the kubelet has access to the pod spec; so, from the kubelet and core Kubernetes, we already provide all the parameters there, because we changed the pod spec itself.
A: So what has to change next is the CRI APIs, and this is what I called out in the doc. What you're calling out, and what we decided, is that this part should be done in a separate KEP, in discussion with the CRIs themselves, because you need to change the CRI API; then you have to have a discussion with CRI-O, containerd, and the others. So, basically, that's not here.
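The division of responsibility being described (the kubelet already sees the extended pod spec and forwards what the runtime needs, while the CRI message changes themselves belong to a separate KEP) can be sketched as below. This is a hedged illustration: `PodNetworkRef` and the `networks` field on the sandbox request are hypothetical names, not real CRI fields.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PodNetworkRef:
    """A network entry as it might appear in the extended pod spec."""
    pod_network: str

@dataclass
class RunPodSandboxRequest:
    """Stand-in for the CRI sandbox request; the 'networks' field is the
    hypothetical extension that a separate CRI KEP would have to add."""
    pod_name: str
    networks: List[str] = field(default_factory=list)

def kubelet_to_cri(pod_name: str,
                   spec_networks: List[PodNetworkRef]) -> RunPodSandboxRequest:
    # In scope for this KEP: the kubelet reads the extended pod spec it
    # already has access to.
    names = [ref.pod_network for ref in spec_networks]
    # Out of scope: carrying these names over the wire requires CRI API
    # changes agreed with CRI-O, containerd, and the other runtimes.
    return RunPodSandboxRequest(pod_name, names)
```

The point of the sketch is the boundary: everything up to building the request is already covered by the pod spec change; the request's new field is the part deferred to the CRI discussion.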
A
We
I
I
hear
you,
but
basically
from
our
point
of
view
from
core
this
is
this
is
now.
If
you
want
to,
you
know
change
the
yeah
I,
don't
think!
That's
that's
part
of
this
Camp!
That's
what
I.
C: I was going to comment on that: I did add an agenda item to discuss it, so once we're done with our current agenda item, let's circle back on that.
A: Exactly; I had it, let me just add it, okay. And before we jump to that CRI stuff: any other comments on what we have today, on what we are trying to propose to the community? So yes, our proposal doesn't have, like, a full implementation, and the excuse is that it's hard; it's not like we can impose a CNI implementation on anyone, everyone wants to have their own.
A: That's one aspect, and the other aspect is that this is still phase one, so we are going to expand on this. It's not like we are one and done; it's basically just the first phase out of the few that we have, so that's another argument too. That's where we are. I will look into the reference implementation that we have in the test infra, and maybe that's the only place where we will have some sort of a reference implementation.
A: That way we can enable testing for this, because we should have some sort of testing here, along with some sort of reference implementation. So maybe that's the one place where we will present something implementation-wise. Is that a good summary? I think that's it.
A: All right. Yeah, Michael, go ahead, about the CRI KEPs.
C: Yeah, that was just the action item from last week, because I didn't know what the plan was. And I guess, hey, I don't mind doing it; these KEPs are horrible and I'll totally throw myself on that grenade. But the plan is to use what we have in this KEP to facilitate the CRI changes in support of multi-network.
C: What I think would be useful here, just to throw it out there, is that we probably should join the next SIG Node meeting with a little bit of a presentation, just to discuss what multi-network is and to discuss the roadmap, especially what we intend to solve with the MVP version one, V2, V3. Then we can propose that; you know, maybe they'll also push back and say keep this all in one KEP, or have two KEPs.
C: I would assume they're going to break this up into multiple KEPs. So I don't know how you want to organize that; we can start a Google Slides deck, or whatever it's called, with a quick 10-minute presentation of the who, what, and where for SIG Node, to get their guidance and also make them more aware for when they do see this, because regardless... yeah.
A: No, I agree, and thank you for bringing this up, Michael. Yeah, let's definitely do that. Do you know when the next one is? Is it this week or next week?
C: Not certain. We'll also have to... I think, to cover all of our bases, it should be SIG Node and SIG Architecture, because those will all be involved with, you know, the long haul of this effort. So the sooner we get them on board, the fewer hurdles we have to jump through.
A: Let's start working on the slides themselves, exactly. And maybe let's do a road trip with this around all the SIGs. Let's start with SIG Network; if anything, let's start over there, get any feedback from the wider group in SIG Network on what we are thinking to do, and then move on to the other teams. Cool.
A: All right, any other topics? Any other comments on that CRI stuff? Pete, did we address your concern here, I hope, with the CRI?
A: So, no, I think we have time; we don't have much else to discuss, but any other comments on the CRI stuff from anyone else? All right, we have some more time, so next steps: the next step will be to finalize the draft of the doc and do the slides.
A: That's a good point as well; let's work on that so that we can show what we did to the rest of the community. And then, for this group, I'm not sure what the best course of action will be. Should we move on and keep discussing the second phase, which I'm not sure about, or should we just focus on phase one? So, basically, for phase one, I will try to work on the doc, but after that we need to kick off...
A: ...some changes in k/k. Let's at least prepare changes for what we decided here, and then maybe have the implementation ready and in place just when we push the draft. We're going to get a bunch of comments, of course, and address those, but then, when we merge the KEP, have the implementation in place. I think that will be the focus.
A: Our meetings can probably be, right now, a sync on where we're at with some of those pieces.
A: So, basically, in terms of design work, I would put the pause button on that one and focus more on the implementation side of things, rather than keep going with the discussion on design.
C: Something that came to mind there, Maciej, for the KEP: do we have sufficient coverage of the resource lifecycle? That is, is it explicitly defined in the KEP what happens when you schedule a pod and there's no PodNetwork resource, or what happens if you try to delete the resource while it's attached? I believe those kinds of questions get asked quite a bit.
A: So, for the first one, what we said is that if the PodNetwork is not present, we call this out, right? We would fail, and we will say that if the PodNetwork is not present or not ready, then we will error out in a similar way to how a volume would fail. So if there is an error in your spec... I need to check exactly how the volume case fails and do the same thing. That covers those aspects. And then, you cannot delete a PodNetwork that is in use, right? You see the condition...
A: The in-use condition is for that. So, if there is a pod referencing a PodNetwork, we will set an in-use condition, and that will mean that you cannot delete the object. Basically, there will be a finalizer, at least, on that object until the Pod is gone.
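The lifecycle rules just described (a pod referencing a missing or not-ready PodNetwork fails, similarly to volumes, and a referenced PodNetwork gets an in-use condition plus a finalizer that blocks deletion) might look roughly like this sketch. The condition and finalizer names are made up for illustration; the real KEP would define its own.

```python
class PodNetwork:
    """Minimal stand-in for the PodNetwork resource."""
    def __init__(self, name: str, ready: bool = True):
        self.name = name
        self.ready = ready
        self.conditions = set()
        self.finalizers = set()

def attach_pod(network_names, networks):
    """Fail, as a missing volume would, if a referenced network is absent
    or not ready; otherwise mark it in use and pin it with a finalizer."""
    for name in network_names:
        net = networks.get(name)
        if net is None or not net.ready:
            raise RuntimeError(f"PodNetwork {name!r} not present or not ready")
        net.conditions.add("InUse")
        net.finalizers.add("multinetwork.example/in-use")  # illustrative name

def can_delete(net: PodNetwork) -> bool:
    # Deletion stays blocked while the finalizer (a referencing pod) remains.
    return not net.finalizers
```

Removing the finalizer when the last referencing pod goes away is what eventually allows deletion, mirroring how in-use protection works elsewhere in Kubernetes.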
A: Very important, thank you for that. Any other comments on this? I will try to come up with a list of tasks that we can split up, if anyone wants to join in and help out. Right now I do have a branch on k/k, which I'm probably going to have to update, or we can just start from scratch somewhere we can collaborate on an implementation of this; or we can even start with a PR, because it doesn't have to be... what I want to say is that, right now...
D: On the new resources, it's just a text comment, yeah, that one. Bullet point two says a PodNetwork may have overlapping subnets.
A: Go through it, and then, yeah, go crazy and comment on everything, because the more feedback I get, the more readable and understandable it will be; anything that's unclear is something someone else reading this will run into as well.
A: If you see any gaps, or something is unclear, point that out, comment in the doc, and then...
A: Yeah, yeah. Please wait, though; let me do the first pass on this. Let me refactor it properly and add the lifecycle description as well, and then we can go. I will try to bring it to you all; hopefully this week I will be able to finalize it and tag you. But yeah, okay.
A: Yeah; and basically, on top of that, maybe I'll share this for a second. So there is the requirements KEP here, right at the top... no, that's not the PR itself; you go to the PR, and at the bottom of that PR you have the phases split apart. So we, I think, covered...
A: Last week I went through the bullets for phase one; we colored all the required requirement items there, and basically what's next is going to be phase two. Whatever is in phase two... we will push some elements out to a later phase, and I haven't updated the PR with that push yet, so I will have to do that. But basically, phase two will indicate what's on the plate for the next phase. Okay.