From YouTube: CNCF Network Service Mesh Meeting 2020-02-18
B: We also have an Asia-friendly meeting, which occurs, I believe, at 3 a.m. Pacific every other week, and we should have had one this week, so the next one should be in two weeks. We also participate in the CNCF Telecom User Group, which occurs every first Monday at 8 a.m. Pacific and every third Monday at 3 a.m. Pacific. The next call will be on the first Monday of next month.
B: There are still some sponsorship opportunities available, so please consider sponsoring if you have the ability to, or if you know somebody who can. We also have the Open Networking and Edge Summit North America in Los Angeles; the CFP for that is already closed, the schedule will be announced early next month, and it will occur April 20th through 21st.
B: We have KubeCon + CloudNativeCon North America; the CFPs for that will open on April 22nd and close on June 12th, so we should also be making a submission at that particular time as well. But nothing is announced yet, so make sure you keep that in mind. And we had a couple of announcements.
C: Everybody, it's been a busy week as far as social media goes, now that the KubeCon schedule has been announced, and the CFP deadline was last week Friday; it was really busy. With that being said, we gained 11 followers on Twitter, followed four accounts, and had a total of 34 tweets and retweets, and as mentioned, a lot of that was CFP deadline reminders.
C: There was a tweet that went out trying to gather some people to sponsor at the summit, as well as a tweet thanking everyone that did submit CFPs, and one just announcing that the schedule will be out on Friday. There were also individual tweets for each Network Service Mesh session that will be presented at KubeCon.
F: Okay, no problem, no problem. Yeah, so hello, everyone. I'm actually getting close to hitting the PR on both what we call the community operators repo and the upstream community operators repo. With that, the NSM operator will be kind of shipped by default with Red Hat OpenShift, and you can also install it automatically using the UI. So I'm almost there; I'm changing a few things.
F: I should have documentation on that by the end of the week, and it will be really, really easy to install everything. So yeah, basically to say that the PR is to come any day this week, and to say also that that animated GIF that you guys provided me gets giant when I try to convert it to base64, so it makes quite a file.
A: That's fine, whatever, at this point; just ping me on Slack, you know, and we should be able to sort that out. And also, let me know what size you would like it to be. Okay.
F: Cool, cool. So when you log into OpenShift, you get this home page with dashboards and everything. If you go into Operators and go into OperatorHub, you can see that we have this kind of app-store-like experience, where you have a lot of operators here, installed by default, and many applications using the Operator Lifecycle Manager. So now, if I type NSM here, I find the Network Service Mesh operator. And with that, see, the icon is here...
A: So, okay, if you can give me a size, I can jump it into whatever size you'd like. Okay, I think that you were dealing with sort of a thing which will let you, you know, yeah, sure, get to whatever size; but give me an icon size, and I'm sure someone has specified the icon size, and yeah, it's literally two minutes to go and export to that icon size. Yeah.
F: I know; I don't think this is the problem, though, because we have a YAML file, the ClusterServiceVersion, that we use to implement this whole infrastructure, and in that YAML file we have what we call spec descriptors. With the spec descriptors, we are describing the fields that we have under the operator. So, for example, here I would install the Network Service Mesh operator just using this screen; we have some instructions, we have the repository, and a lot of other information.
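The spec descriptors being described are part of OLM's ClusterServiceVersion format, which maps custom resource fields to form widgets in the OpenShift console. A hedged illustration of the shape; the CRD name and the `insecure` field here are assumptions for the example, not the operator's actual manifest:

```yaml
# Illustrative ClusterServiceVersion fragment: specDescriptors map CR spec
# fields to console form widgets (the "x-descriptors" URN picks the widget).
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: nsm-operator.v0.0.1              # hypothetical operator name
spec:
  customresourcedefinitions:
    owned:
      - name: nsms.networkservicemesh.io # hypothetical CRD name
        version: v1alpha1
        kind: NSM
        specDescriptors:
          - path: insecure               # hypothetical spec field
            displayName: Insecure
            description: Run NSM without mTLS between components
            x-descriptors:
              - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
```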
F: It's already installed, because I'm testing now, and when entering Installed Operators I can see the operator running and I can see the provided APIs here. So here, when I click on Network Service Mesh, which is the one I want to implement and run, I can click Create NSM, and then I see the YAML. But the x-descriptors, what they do is something like this: I can transform this YAML into, like, a form, a web form. This is broken, by the way.
F: This is why I am messing with that right now, and this is why I'm not able to fully demo the installation here: because some of those types are not exactly accurate with the code underneath. But yeah, those x-descriptors, and the images: they have a field inside with, I think, PNG and GIF.
F: So it's just my VS Code; it's not like it's processing the image, but it becomes a text file of 7.5 megabytes. That was my main concern when I saw that: okay, it would be a little bit weird working with this file, because I have content above the image and below the image, and there are tons and tons and tons of base64 code in the middle of the file. Yeah, it's kind of hard to manage, but I guess, right, I can try; even people on my team were curious about it.
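As a rough illustration of why the embedded icon balloons: base64 emits 4 ASCII characters for every 3 input bytes, so any file grows by about a third when inlined into YAML. A minimal sketch with a dummy payload (the real GIF's size is not given above; a file of roughly 5.6 MB would encode to about the 7.5 MB of text mentioned):

```python
import base64

# ~6 MB of dummy GIF-like bytes standing in for the real animated GIF
gif_bytes = b"\x47\x49\x46\x38\x39\x61" * 1_000_000
encoded = base64.b64encode(gif_bytes)

# base64 produces 4 output chars per 3 input bytes, so size grows by ~33%
ratio = len(encoded) / len(gif_bytes)
print(f"{len(gif_bytes)/1e6:.1f} MB -> {len(encoded)/1e6:.1f} MB (x{ratio:.2f})")
# prints "6.0 MB -> 8.0 MB (x1.33)"
```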
F: Let's try it; that's right, that's right. But first I need to fix a few types on those fields, and finally run the automated testing, the scorecard. That will probably put me into a place where I need to put a status field on the NSM object as a whole. So, well, I'll put a very simple one, because in order to be in OpenShift I need a status field; it's the one that will say, like, when I have something installed.
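The "very simple" status field being described could be as small as a single phase entry on the custom resource. This fragment is purely illustrative; the API group, version, and field names are assumptions, not the operator's actual schema:

```yaml
# Illustrative: the smallest useful status block on an NSM custom resource,
# enough for the console to report whether the install succeeded.
apiVersion: networkservicemesh.io/v1alpha1   # hypothetical group/version
kind: NSM
metadata:
  name: nsm
spec: {}
status:
  phase: Running        # set by the operator once all components are up
```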
F: Yeah, sure, sure, I would be glad to have just a fork. If you put one under the NSM org, it's not a problem; then we can transfer everything to there. That will be a little bit of a pain too, because there is the Go path, and there are some things that will need to change, but it will work. We can change everything and have it put there, and then I just need to fork, and that's enough.
B: Yeah, that'll be fantastic. And, that being said, we have resources from the CNCF and Packet and a few others, who have been very generous in giving us resources to CI these types of things. And so I think planning it in the repo and then getting those things wired in is, I think, high value, so we'll start working towards that, sure.
E: How are we moving, and what are the actual plans with the refactoring that is going on? I think it's worth it that, you know, people hear about our plans and where we are. I mean, we went through the initial presentation about the process, but maybe as a reminder, and as a kind of intermediate report of what's going on, I guess it would be good to discuss this a little bit too.
A: And I'll go through kind of fast; do ask questions if you have them. We've talked about this in the community before, but I'm very much against the notion that, you know, you should have to attend the community call, or that if you miss a community call... I hate it when people are like, "Oh, we already discussed this." No, no, no, no: we can keep discussing this, right? So, on to the repo pipelining. This is stuff that we have sort of already started undertaking.
A: So we've got our monolithic networkservicemesh repo; it's gotten to be very large and complex, and the CI for the repo is very long, which encourages larger changes: since it takes an hour to run the CI, people don't do smaller things. It also, I think, probably discourages contribution, and it slows development velocity; if you go look at our DevStats reports, and I can bring them up,
you can sort of see that. And then, when we originally put the stack together, I'd done sort of an initial proof of concept with some of this stuff. So you had, you know, the monolithic repo, which took about an hour and 20 minutes to run CI, and then you had a pipeline of repos: you had an api repo, which effectively we've managed, with GitHub Actions, to be able to not only run CI on, but to auto-push PRs to update the downstreams.
A: So api takes about a minute and 20 seconds to run its CI; about 30 seconds later, a PR turns up in sdk, basically to update it to where api currently is, and that takes about a minute and 20 seconds to run its CI. Once you merge that, then about 30 seconds later stuff pops up in sdk-vppagent and sdk-kernel, and they can run their CI. And so the total end-to-end time, you know, not counting the human review time, ends up being very, very quick for all of this, and so it becomes very, very doable to go through and do rapid development.

And so the proposal was that we go to a pipelining scheme sort of like this, where, you know, api has the top-level APIs; those get auto-propagated to sdk as PRs; sdk then can get auto-propagated out to the various platform SDKs.
A: And then, when things merge into the cmds, they could then auto-propagate to update a repo with Helm charts, or to update an operator repo, which could then run their own CIs on those things, at the level of CI of the helm and operator repos. Because when you're talking about integration tests, you're talking about having per-platform integration repos, like integration-k8s-packet or integration-k8s-aws, that could all have integration tests in them, you know, and so things propagate through the system. So say I wanted to go and fix a bug in sdk.
A
That
ends
up
being
a
very
quick
CI
cycle
that
propagates
the
system
that
you
may
discover
that
you
have
a
problem
downstream,
and
so
we
do
want
to
talk
about
failure,
detection
and
remediation.
So
one
of
the
things
that's
actually
true
of
this
model
is
that
it
actually
encourages
stronger
unit
testing.
A
You
chase
back
the
failure.
You
discover
that
actually
it's
this
change
in
this
decay
that
caused
it.
So
you
fix
it
in
SDK.
You
add
unit
tests
to
make
sure
we
don't
have
quite
that
failure.
We
get
an
SDK,
the
PR
merge
is
fixed,
it
propagates
to
the
system,
and
integration
gets
a
PR
that
actually
can
be
merged
and
pump
bring
it
up.
Now,
please
note
at
each
step.
A
We
can
choose
to
only
merge
these
if
and
only
if
they
actually
are
passing
the
local
CI,
and
this
has
advantages
in
that
it
gives
us
a
clean
roadmap
to
introduce
new
platforms.
You
just
try
to
be
repo.
It's
got
much
faster
CI
experience
for
users,
biases
towards
catchy
things
early
rather
than
late,
and
it
allows
for
the
formation
of
sub
communities.
So
you
know
we've
already
got
sort
of
some
of
this
going
on
at
an
early
former
on
the
SR
e
OB
stuff,
Alex
I.
Think.
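The auto-propagation described above, where a merge upstream automatically opens a PR downstream, can be sketched with GitHub Actions. This is an illustrative workflow, not the project's actual wiring; the org and repo names, the secret, and the dependency-bump step are all assumptions:

```yaml
# .github/workflows/propagate.yaml (illustrative names throughout)
name: update-downstream
on:
  push:
    branches: [master]
jobs:
  pr-downstream:
    runs-on: ubuntu-latest
    steps:
      # Check out the downstream repo, not the one that triggered the push
      - uses: actions/checkout@v2
        with:
          repository: example-org/sdk            # hypothetical downstream repo
          token: ${{ secrets.DOWNSTREAM_TOKEN }} # PAT with push access
      # Bump the upstream dependency to the commit that was just pushed
      - name: Bump api dependency
        run: go get github.com/example-org/api@${{ github.sha }}
      # Open (or update) a PR in the downstream repo with that bump
      - uses: peter-evans/create-pull-request@v2
        with:
          token: ${{ secrets.DOWNSTREAM_TOKEN }}
          commit-message: "Update api to ${{ github.sha }}"
          title: "Auto-update api"
          branch: auto-update-api
```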
A: So that's the repo pipelining. Do folks have any questions on that? Is this sort of what you were looking for, Nicolai? "Oh, yes, yes." And where we are right now is that we're still getting the pieces put in place for api, sdk, sdk-vppagent, and sdk-kernel, so we haven't quite gotten to the cmds yet, but I'm hopeful we'll get to our first cmds this week.
A: So, the other two things are around the refactor, as we're talking about moving the NSM forwarder to being just another cross-connect NSE. This is sort of the current state, where we have the Network Service API, where you go through and you say, okay, I'd like to make a request for the network service, or close the network service; we've got the Registry API; but then we also have this Cross Connect API and this forwarder registration API. And the Cross Connect API is just bringing two connections together, and as we gained experience, we've realized that this makes things very complicated. The current sequence diagram is essentially: a client comes in to the manager and makes a request; the manager makes a request to the network service endpoint and gets back its connection; it then sends a cross connect request to the forwarder and gets that back; and that sends the connection back to the NSC.
A: So the proposal going forward was to keep the Network Service and Registry APIs and have the sequence diagram basically run as a chain. So you go to the manager; the manager makes a request to the forwarder (by the way, these are color coded, so when colors match, that's a request and a return); the forwarder basically puts the mechanisms that it's willing to do for that particular connection into the network service request; that gets to the network service endpoint, and the network service endpoint responds with its selection; that gets sent back to the forwarder. The forwarder then, you know, basically having gotten the piece that goes towards the NSC, will make its selection of where it wants to send things, where it wants to drop it at the NSC, based on its preferences, and then it comes back. And so this has the advantages that it is a simplification: there are fewer APIs, and the forwarder just becomes another passthrough that offers cross-connect as a service. It allows forwarders to do resource reservations.
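The chained flow described above can be sketched as a chain-of-responsibility pass, where each element (manager, forwarder, endpoint) sees the same request on the way in and the connection on the way out. All names and types here are illustrative, not the project's actual SDK:

```python
# Illustrative sketch of the chained request flow: nsmgr -> forwarder -> endpoint.
# Each element may decorate the request going down and the connection coming back.
class Element:
    def __init__(self, name, next_element=None):
        self.name = name
        self.next = next_element

    def request(self, req):
        req.setdefault("path", []).append(self.name)  # record who the request passed through
        if self.name == "forwarder":
            req["mechanisms"] = ["kernel", "vxlan"]   # forwarder offers what it can do
        if self.next is None:                         # the endpoint terminates the chain
            return {"path": req["path"],
                    "mechanism": req.get("mechanisms", ["kernel"])[0]}
        return self.next.request(req)                 # connection flows back up the chain

chain = Element("nsmgr", Element("forwarder", Element("endpoint")))
conn = chain.request({})
print(conn)  # {'path': ['nsmgr', 'forwarder', 'endpoint'], 'mechanism': 'kernel'}
```

Because the forwarder is just one more element in the chain rather than a special-cased API, the same element machinery serves both forwarders and ordinary endpoints, which is the simplification the proposal is after.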
A: So if I get an incoming request, I can reserve the resource when that incoming request comes in; then on the outgoing request I can hold that resource while I send the outgoing request to the network service manager; and then, when the network service manager comes back and tells me what the far-end NSE is, I can assign or release that resource.
A: You know, it basically becomes "what to do on the next hop", and the chain goes down. And so you also get no special cases for a forwarder versus any other NSE, which means you can use common SDK elements for both. So if I'm writing a, you know, virtual router, or I'm writing a cross-connect NSE as a forwarder, they're both going to use, for example, the mechanism SDK pieces that are in common. Multiple forwarders simply become iterating through the locally available forwarders, so you can have local forwarders specific to particular nodes.
A: They don't have to be a uniform set. This is particularly important for SR-IOV, where one node may need a forwarder that can program the SR-IOV NIC and another node may not, or one node may need one that can program the particular smart NIC that's particular to that node and another one may not. And again, as I mentioned, you get to use the same SDK for NSEs and forwarders. I won't walk through the activity diagram.
A: So this is just setting the stage; and then the path stuff. Our healing is complex, and as we refactor away from the forwarder cross-connect NSE, we need to rethink the healing, because the current healing, with lots of timers, is rooted in the cross-connect API. And path emerges from this rethink. So essentially it just says: well, we keep Network Service Mesh, with its network service, endpoint, and connection in the API, basically as it is, and then introduce a path into the connection, where the path is a list of path segments, and those path segments have tokens, as well as the name of who we're passing through and the ID, and we're also looking at adding metrics to them. And so the net result of this is that you can authenticate at every step; you can authenticate the entire chain.
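A minimal sketch of such a path; the field names are illustrative, not the actual API. Each segment carries the name and token of one hop, and the chain stays authenticated only while every segment's token is unexpired:

```python
import time

# Illustrative path-segment model: each hop appends a segment with its name,
# an opaque token, and that token's expiration time.
class PathSegment:
    def __init__(self, name, token, expires_at):
        self.name = name
        self.token = token
        self.expires_at = expires_at

def path_valid(path, now=None):
    """The whole chain is authenticated only if no segment has expired."""
    now = time.time() if now is None else now
    return all(seg.expires_at > now for seg in path)

now = 1_000_000.0
path = [
    PathSegment("nsc", "tok-a", now + 60),
    PathSegment("nsmgr", "tok-b", now + 60),
    PathSegment("forwarder", "tok-c", now + 60),
    PathSegment("endpoint", "tok-d", now + 30),
]
print(path_valid(path, now))       # True: every token is still fresh
print(path_valid(path, now + 45))  # False: the endpoint's token expired at +30
```

This is also why, as described below, refresh and healing look identical: re-requesting the connection renews each segment's token before it expires.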
A: So I've got an example here on restart. The client is talking to the endpoint (and, by the way, there's an activity diagram with the whole thing back here, but I won't spend all the time to walk through it). The client talks to the endpoint; let's say this is the connection, and the endpoint restarts. So the endpoint restarts; the client gets its connection back, gets this initial state transfer, and discovers that the connection it believes it has isn't at the endpoint, so it simply re-requests it.
A: It's fairly straightforward on the client restarting: the client restarts; the endpoint still believes it has a connection; each path segment has an expiration timer on it, so when that passes, the endpoint basically says, okay, we're done. This also means, by the way, that clients are constantly refreshing themselves, refreshing their credentials and policy. So if, for example, you decide to change your policy, the worst-case exposure for someone being in violation of that policy is the expiry timer; after that, everything fixes itself up.
A: I'm sorry, this is network service manager restart. So for network service manager restart, it ends up being much the same way: if your network service manager restarts, your client discovers that it wants the connection that isn't there, and it actually asks for the connection back; and since it's got the path, the network service manager knows which forwarder it went to. The forwarder may not even know that there's a problem yet, and it goes ahead and, you know, sends its request back to the network service manager, who sends it back to the NSC, because, again, that's in the path, and you end up being healed. You could also get the case where a forwarder initiates the healing, but I suspect that would be less likely. And so the advantages here are: it ends up being a simplification, because you've got a single behavioral flow everywhere.
A: Robust auto-healing is a property of the system, so you can heal if all components but the leaf client restart, which is kind of cool; we sometimes call this "guerrilla healing". Healing only flows forwards, not backwards. This is actually really important, because if you try and make it flow backwards, you get all kinds of crazy proliferation of timers, and timers are really hard to manage, so we try and keep timers very localized and simple. And also, healing is indistinguishable from refreshing your authentication token, right?
A: So what you do routinely works. And so, if the question about healing becomes "well, does healing work?": well, we're doing this behavior all the time. It's also more secure: connections expire unless they are refreshed, so if policy changes or authentication expires, then the connection goes away. And then robustness: connections do not get torn down unless they expire. So if the client goes away, well, the client could always come back, and whenever they get around to coming back, we're happy to replumb them to where they need to be in the connection. And those are sort of the pieces we've been talking about here. So, you were asking sort of where we are with this, Nicolai.
A: Basically: if you move to forwarders, we have an extraordinarily complicated healing mechanism right now that uses the Cross Connect API, so you need a way to heal that doesn't require the Cross Connect API, and it turns out that the path approach appears to be both robust and simple, which is good. So, for moving the cross-connect to the forwarder stuff, you need the path piece. And it turns out that all the mono-repo problems that we discussed become even worse as you try and do this refactor, because if it takes an hour and a half to run CI for every little thing, which is what it currently does, then you have sort of a serious problem. We've been wanting to break up the mono-repo anyway, and that's kind of how these are interrelated. Does that make sense?
A: So, I mean, I guess we have this stuff that continues to go on in the monolithic repo, and while we're making the transition it's still there and it's functional, which I think is actually very, very good, while this new stuff is coming up. But my guess is that we will eventually get to a point where the pieces are coming out of, you know, the commands are all coming out of their cmd repos, the integration testing is coming out of the integration testing repos, etc.
B: So one question, perhaps I missed this, based only on how things are currently going: is it that you're either using the old stuff or the new stuff, or is there some form of transition compatibility in there, where, like, maybe I write a new forwarder in the new SDK? Is it easy at this point to integrate with the current monolithic NSM repo?
A: So, I mean, it's not like we're saying go stop everything, because, quite frankly, if you want to do a new forwarder, and we've had a couple of cases of this go by already, what you discover very quickly is that you can learn a lot about the process by poking at the monolithic repo. And then we do have folks, and I see some of them have actually turned up for the call, which is awesome, who are looking, for example, at building the SR-IOV forwarder stuff, and, you know, that's goodness; and, you know, also migrating over and building a kernel forwarder. So that's basically kind of where we're at.

I mean, part of the reason this came about was the realization that we have this thing, this working repo; we have ongoing work where people are doing things to learn how to do, for example, new forwarders, and there's no point in halting that learning process, because it becomes fairly straightforward to then bring that back over here. So, for example, the way the SDK is written, if I wanted to write, say, a new mechanism, I don't know, for WireGuard: I'm going to have to have figured out a lot about how WireGuard works already, and I can do that either in the monolithic repo or in the new repo.
B: And the reason I asked that particular question is to set up the next thing. One of the concerns that people may have is that there was quite a bit of work put into maybe creating your forwarder in NSM, and then there's the work to get it migrated into the SDK; you know, people maybe think of that as a similar quantity of effort. But in reality, and as one of the people reviewing most of the PRs coming in, I can tell you it seems to be the exact opposite. People are having a very easy time actually implementing stuff in the new SDK, and I suspect that, once we get, say, SR-IOV really working with the monolithic repo, getting that to work in the SDK is easier, because we'll already have done the hard work of getting it working. So first, we already have that advantage; and the second thing is that the new API is incredibly simple and very easy to test and keep modular.
B: So I want to make sure that people's fears around this type of thing are reduced, and I know that's not going to go away until you see it all work, but I definitely feel confident with the current path. And so I think, as you get more people ramped up and doing SDK work, we'll see a lot more momentum, just because of the simplicity of the API and of getting things wired in.
A: Well, what do you pass to a new endpoint? You pass its name, and you pass the piece that's implementing the thing; that's actually the work your endpoint does. And so all the machinery around timing out, all the machinery around authentication and authorization, all of those things: those are not things you have to think about. You just have to think about the piece that is "what is it that my particular network service does?"
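A hedged sketch of the shape being described; every name and signature here is illustrative, not the actual SDK. The endpoint constructor wires the boilerplate (expiry, authn/authz) around the one piece the author supplies:

```python
# Illustrative only: what "pass a name and the piece implementing the thing"
# could look like, with the standard machinery composed around it as decorators.
def with_timeout(handler):
    def wrapped(request):
        request["deadline"] = "now+token-lifetime"  # expiry machinery supplied for you
        return handler(request)
    return wrapped

def with_authz(handler):
    def wrapped(request):
        request["authorized"] = True                # auth machinery supplied for you
        return handler(request)
    return wrapped

def new_endpoint(name, implementation):
    """Compose the standard machinery around the user-supplied piece."""
    return {"name": name, "handler": with_timeout(with_authz(implementation))}

# The only thing the author writes: what this network service actually does.
def my_service(request):
    request["connected"] = True
    return request

ep = new_endpoint("icmp-responder", my_service)
result = ep["handler"]({})
print(result)  # {'deadline': 'now+token-lifetime', 'authorized': True, 'connected': True}
```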