From YouTube: Network Service Mesh WG - 2018-11-06
Description
Join us for KubeCon + CloudNativeCon in Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
Okay, so, events: we have KubeCon on December 10th, so just a month away, running from the 10th to the 13th. I believe the 10th is the day before the conference, and there are many mini summits going on. We have an FD.io mini summit where we have a couple of things that are going to be presented, and there's also an Open Source Commons, where we may end up giving a talk as well.
A
We have two talks at the main session, an intro and a deep dive, so feel free to join in and listen, or get involved and help if you'd like. We are currently working on an NSM demo for Google; actually, there are a few demos that we're working on, and one of them is trying to get things set up for the VNF/CNF comparison. We have more to discuss on that later in the agenda.
A
Okay, so a couple of announcements. Volk has put out a second Network Service Mesh video; this one's a five-minute one, and the link has been added to the list, so sometime after the meeting go ahead and take a look at it. I have not seen it yet, but I suspect it's probably going to talk a little bit about the problem that we're trying to solve with Network Service Mesh.
A
Okay, so onto the agenda board. Actually, we haven't added the stuff from the past week that we were adding, so my proposal is that instead of doing the agenda board today, we have a section where we talk about the changes that we're making, and then next week we'll be back to the agenda board. Does that sound reasonable to you?
B
So the slides on that link actually haven't particularly changed. There was some thinking about a fairly basic format for the demo: first telling the story, probably a trimmed-down version of Sarah's story; then literally just a very simple one or two kubectl applies that make it real; and then the third part is being able to visualize it, hopefully with the Skydive integration, so that people can see the result. That was the sort of thinking I was having about it. For that, I think we've got basically three things.
B
One is that we will need someone to help with getting a shrunk-down version of Sarah's story that can be used for the demo, to present what we're talking about. The second one is (and this is stuff that we're all working on) delivering the working Network Service Mesh itself, and we're getting quite close to the first drop of that. And then the third is the Skydive integration; I know we've got folks, including David, who are looking at that as well.
A
I think that sums it up quite well. For the narrative, the one thing that we need to be a bit careful with is to make sure that the narrative we're giving matches the code for the network service endpoint that we're providing, so a little bit of collaboration is needed on that side. I suspect that the components in Skydive, because of the way it does its visualizing, don't need to have quite as tight of an integration from that perspective.
A
Let's see, we have a section on the code as well, so let's go ahead and talk a little bit about that, and then we'll continue on with the agenda for Aundre. So, in terms of the code, we've done quite a lot of work to simplify the Network Service Mesh. Do you want to start off and talk about some of the changes that we've made?
B
So effectively, Network Service Mesh has become much more microservice-y, in the sense that you've got a bunch of small components that are talking to each other with well-defined gRPC APIs, and the biggest change that's going on right now is trying to get... well, it turns out (and we found this out last week; it's embarrassing that we didn't find it out sooner) that VPP actually has something that exposes a gRPC API for it already.
B
It's called the VPP agent, and so we can simply just run the VPP agent and point to it as a gRPC client, and this makes the data plane part of the story so much easier for us, because literally all we're doing is translating from the Network Service Mesh data plane API to the VPP agent API, and they're both gRPC. So it becomes super easy to do that, and we're in the process of making that transition currently. The good news is that the VPP agent has been pretty well hardened.
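As a rough sketch of the translation idea described above (the message shapes here are hypothetical stand-ins, not the actual NSM or VPP agent protos, and plain Python stands in for the gRPC plumbing):

```python
from dataclasses import dataclass

# Hypothetical NSM data plane request (the real proto differs).
@dataclass
class CrossConnectRequest:
    src_interface: str
    dst_interface: str

# Hypothetical VPP agent configuration item (illustrative only).
@dataclass
class VppXConnect:
    receive_interface: str
    transmit_interface: str

def to_vpp_agent(req: CrossConnectRequest) -> list[VppXConnect]:
    """Translate one NSM cross-connect into the pair of unidirectional
    x-connects a VPP-style data plane would expect (sketch only)."""
    return [
        VppXConnect(req.src_interface, req.dst_interface),
        VppXConnect(req.dst_interface, req.src_interface),
    ]
```

Because both sides speak gRPC, the real component is essentially this kind of mapping plus the plumbing that forwards the translated messages to the VPP agent.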
A
Kubernetes says these CRDs have three sections. There's metadata, where you put in things like names and labels. There's the spec, which holds properties of the system that you don't expect to change, the configuration; so the spec could include things like what the payload of a network service is. And you have the status, which covers things like: is this thing online? What's the IP address? And so on. So it's much more closely aligned with what Kubernetes now expects from its CRDs, and effectively you can now do kubectl get network services and you'll get a list of network services, kubectl get network service endpoints, and kubectl get network service managers, and you'll get a list and status of these various things, and it all just works, and you can also access them programmatically.
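As an illustration of the three-section shape described above (the field names here are made up for the sketch, not the actual NSM CRD schema):

```python
# Hypothetical NetworkService custom resource, shown as plain data.
# Field names are illustrative; the real CRD schema may differ.
network_service = {
    "metadata": {           # identity: names and labels
        "name": "secure-intranet-connectivity",
        "labels": {"app": "nsm-demo"},
    },
    "spec": {               # desired configuration, not expected to change
        "payload": "IP",
    },
    "status": {             # observed state, updated by the system
        "online": True,
        "ip_address": "10.20.1.5",
    },
}

def is_ready(obj: dict) -> bool:
    """A consumer reads status, never spec, to decide readiness."""
    return bool(obj.get("status", {}).get("online"))
```

The spec/status split is what lets controllers reconcile: users write spec, the system writes status.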
I've also written something so that when the first network service manager comes online, it checks to see whether the CRD has been created and auto-creates it if not, so spinning up and adding the CRDs is just as simple as running the application. We have quite a few things that have been added in that respect, but the end result is that we end up with something a bit simpler, because we don't have to worry about whether this thing has already been set up.
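The auto-create behavior can be sketched like this (the registry here is a toy stand-in for the Kubernetes API, and all names are hypothetical):

```python
class CrdRegistry:
    """Toy stand-in for the cluster's CRD store."""
    def __init__(self):
        self._crds: set[str] = set()

    def exists(self, name: str) -> bool:
        return name in self._crds

    def create(self, name: str) -> None:
        self._crds.add(name)

def ensure_crd(registry: CrdRegistry, name: str) -> bool:
    """Create the CRD if it is missing; return True if we created it.
    Mirrors the 'first manager online auto-creates it' behavior."""
    if registry.exists(name):
        return False
    registry.create(name)
    return True
```

Making the check idempotent is what lets any number of managers start in any order: only the first one actually creates the CRD.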
A
And beyond that, we've also isolated the portions of the registration; right now I'm calling it a registry, and effectively that's what publishes the network services and endpoints and so on. This information doesn't have to live on Kubernetes, but we implemented a Kubernetes component, a microservice, that knows how to publish it on Kubernetes and then uses Kubernetes for the bookkeeping.
B
That's probably the piece right there that comes to mind on what's been happening. There's been a lot of stuff moving in the codebase, if you've been watching; a ton of really cool stuff has happened in the last couple of weeks, and things are really starting to fire on all cylinders. One of the other nice things about this refactor, by the way, is that it makes it possible to run component-to-component integration tests without having to stand up an entire Kubernetes cluster.
A
And we got go build working again. Before, there was a C project, a cgo project, that got included that would break the go build, and there's a pull request pending that fixes all that as well, and also gets us off of cgo and onto the native Go runtime, which is absolutely huge, because 90% of the work on the runtime is focused on the Go runtime, not the cgo runtime. So we're in good company there.
B
But we've broken these down into very small steps that actually end up being pretty fast to build, and that also gives you really granular visibility into what's going on; because if you're asking, okay, what failed? Okay, building this container failed; okay, that's not good: you can just scroll down to precisely what failed in that step.
B
So, as CI gets more complicated, one of the things that frankly drives me completely crazy personally is when you get the giant monster log of doom and you have to be very skilled to figure out why the hell the CI broke, and one of the nice things with CircleCI is that you don't have that. Frederic, are you back? (I am back.) Cool. So, do folks have other feelings on disabling Travis?
B
I'm here, this is Taylor. Can you hear me? (I hear you, Taylor.) Great.
D
Okay, so we created this aggregate project view, since there are so many things going on; I just posted it in the chat, and we have subprojects for each of the, I guess, larger components. There's a project for the OpenStack work that's in progress, but right now the testing for the VPP Neutron plugin is one of the main items being worked on; most of the rest of the OpenStack cluster that we're going to be using for the tests is done.
D
Deployments are automated and being documented and everything, so what's left is the stuff with the VPP Neutron plugin, and then once we have access to an environment that we feel is stable enough, we're going to start doing some of the updates for the actual test case, where we'll be connecting all the VNFs through the VPP vSwitch on the Kubernetes side. We've also added support for Ubuntu as a host OS to Cross-Cloud, so that'll be something that's in there for any of y'all's testing.
D
It had been CoreOS before, so you can use CoreOS or Ubuntu at this point for the host OS; the CNFs don't matter, just the host OS. And let's see: a lot of the host configuration that we'd like to use for performance and such has been done as part of the packet generator, so the system that's actually sending the traffic, running TRex with NFVbench driving it, has been done there, and that's going to be rolled into all of the worker nodes to make it available.
D
So right now we can deploy and provision a Packet system with dual Mellanox NICs, and it gets updated with all of the host settings and kernel configuration, reboots, and the box is available. We can also provision a quad-port Intel setup; this is something that right now only we have access to, but it's quad-port Intel NICs, and we have provisioning working for both of those configurations. We have some reserved systems, so this is kind of early access before that configuration is made publicly available for everyone else.
D
I think all of that provisioning software is going to be useful as NSM gets past a lot of the functional testing and wants to target real specific things. So all of that's publicly available, and we've been working on the VPP vSwitch setup for the test case and the provisioning of that, being able to support both.
D
For that, we also have a lot of results from Maciek, Peter, Michael, and a bunch of people who have been working on testing the software on the CSIT lab side, validating that they're getting the expected results from the daily runs that happen in the CSIT lab, comparing that with what we have, and then we're rolling and merging anything that's ready back into the code so that we can optimize.
B
You know, we can call it "monitor connections", for want of a better term: something available from the network service manager, the NSMD, that would basically just provide information about those connections and changes in them, which could go northbound and be aggregated by a variety of things, one of them being Skydive.
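A toy model of such a "monitor connections" stream (plain Python in place of what would really be a gRPC server-side stream; all names are illustrative):

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class ConnectionEvent:
    kind: str        # "created" or "deleted"
    client: str      # network service client id
    endpoint: str    # network service endpoint id

def monitor_connections(events: list[ConnectionEvent]) -> Iterator[ConnectionEvent]:
    """Yield connection changes in order, as a northbound consumer
    such as a topology visualizer would receive them."""
    yield from events

def live_connections(stream: Iterator[ConnectionEvent]) -> set[tuple[str, str]]:
    """A consumer can fold the stream into the current set of live edges."""
    live: set[tuple[str, str]] = set()
    for ev in stream:
        edge = (ev.client, ev.endpoint)
        if ev.kind == "created":
            live.add(edge)
        elif ev.kind == "deleted":
            live.discard(edge)
    return live
```

The point of streaming changes, rather than polling state, is that any number of consumers can aggregate the same feed without coordinating with each other.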
B
My guess is that it ends up looking like the following: you've got two sets of problems with Skydive. The first problem is, how do you find the network service managers that you can ask for cross-connect information? And then the second is, okay, having gotten that, how do you actually go and ask them for topological information?
B
It strikes me that how to find a network service manager is something that probably rides best on the CRDs, because those are clearly visible out there; although the discovery piece of that you could do via gRPC, if you were to bring in something on K8s that will give you a gRPC interface.
A
Yeah, I'm not sure what that would look like. It could look like a node, because there is something there; so we could show a node that is isolated, that nothing has any connections to yet, and then the actual connections, when someone says create a connection or close a connection, can become edges from clients to those endpoints.
B
Which leads to the following: you wind up with network services as the nodes and the edges; the edges we can discover from the monitor-connections stuff you're suggesting, possibly getting the nodes from K8s. That would give you the network service endpoint nodes, but of course network service clients don't actually advertise themselves for discovery, so right, that would have to be sorted out.
A
So I think we could create a graph node on demand when we see a new connection come in, because technically, until a client makes a connection request, it's not part of the Network Service Mesh world, and so it would be reasonable to say that the creation of a connection adds it.
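That on-demand idea can be sketched as follows (a toy data model, not Skydive's actual API):

```python
class TopologyGraph:
    """Nodes appear only when a connection first references them."""
    def __init__(self):
        self.nodes: set[str] = set()
        self.edges: set[tuple[str, str]] = set()

    def on_connection_created(self, client: str, endpoint: str) -> None:
        # A client only enters the topology once it actually connects.
        self.nodes.add(client)
        self.nodes.add(endpoint)
        self.edges.add((client, endpoint))

    def on_connection_closed(self, client: str, endpoint: str) -> None:
        self.edges.discard((client, endpoint))
        # Drop nodes that no longer participate in any edge.
        for n in (client, endpoint):
            if not any(n in e for e in self.edges):
                self.nodes.discard(n)
```

This matches the view that unconnected clients are inventory, not topology: they simply never appear until an edge exists.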
B
So I'll throw this out there: we've actually got a lot of folks on this call with pretty deep networking experience. I'd love to hear some other opinions about at what point you would find it useful to know about the various nodes and links in the topology that's being visualized for you for Network Service Mesh. Could some of the folks who speak up a little less frequently, but who have quite a bit of depth of experience, speak up?
B
The question again: we've been debating whether or not it's helpful to represent the network service endpoints as dangling nodes, in other words nodes that have no edges yet, when representing a topology graph; graph nodes, in this case, with no edges. We've got a lot of people with a lot of depth of experience in networks, in a variety of ways, on this call, so I was asking: okay, on your network, does it help you to see that there is such a graph node in your topology?
C
Right, so what you're really talking about, if we compare it to the physical world, is: this device is in my rack, it's not wired to anything; should I represent it in my topology? And the usual answer would be no, that would be silly, don't do that. It's inventory, but it's not topology.
C
Yeah, and I mean, there are obviously two questions in this: how do you expose this through APIs so that Skydive can consume it, and how do you display it? Now, as you say, it's exposed through APIs, and Skydive could consume it and display it; but the point is that it's not a topology view that you would want to display it in. It would be something slightly different.
B
A couple of things: one is sort of priorities, and the second is what Skydive apparently offers. So, for example, I'm currently sharing the Skydive UI: it has a tab for topology, and it also has a path for discovery. One of the questions I think we may want to address with the Skydive people is: is this discovery table really an inventory? Because if it's really an inventory, then obviously, getting to Ian's very succinctly made point, we might want to feed that inventory into Skydive. And then the second point is sort of what the priorities are.
B
I would maintain that visualizing topology in the immediate term is going to be a higher priority than capturing and visualizing inventory, if that is a feature at all; because what we're going to hope to show people at KubeCon very shortly is, in fact, a visualization of the topology, not a visualization of the inventory.
C
When it comes to actually showing this off, you're going to want to show them the why, you know. I would emphasize, from whenever I've been doing this, that those topologies actually turn out to be pretty boring in practice; the one you're showing there is actually a lot more complex than what usually turns up in reality. But yeah, I mean, what exists as a beautiful node graph in your head is not necessarily very easy to communicate using words; that's where I was going with this.
C
A slightly meta question on this: is somebody documenting, for future consumption, how Skydive is learning and unlearning these things? Because it seems to me that one of the things that's missing here, that kind of takes a backseat, is how we intended it to be used. The example here is: how Skydive gets the whole topology is a fine thing, but we should say, this is how we did this, because this is what we intended.
B
I think this really comes down to, and I think this is getting to your question: I feel like you're saying that visualizing in Skydive is all well and good, but how exactly are we going to expose the things that Skydive is consuming? Because there are going to be other, more sophisticated consumers that are going to consume them.
A
Our current approach, which we're looking at building, is to have a monitor connection in each network service manager. So each agent on each node would have a monitor endpoint that you could connect to, and this monitor will stream you a list of connections as they're created and destroyed.
B
So I think what we'd be representing here is the network service manager's viewpoint of the links that it's dealing with, and the network service manager knows quite a few things: it knows who the network service client is, it knows who the network service endpoint it's connecting to on the other end is, and it knows various details about that particular cross-connect event that it can share. It's sort of like the data center analogy:
B
Yes, you can run LLDP, and sometimes that's helpful; but if literally the guy who connected the cables between the two servers has a perfect eidetic memory and one hundred percent of the time knows precisely where those two pieces of gear are connected, you have a very powerful tool that doesn't require those kinds of things. For things like LLDP, we may or may not actually have both ends of the connection from a data packet carriage point of view, but in any case the network service manager is the one who set it up.
A
So I think we have a wealth of information off of even just a single network service manager on a node, and so one of the things that we'll be able to do is just monitor the connections on it. The one challenge, as far as I can see, though, is that when you're listening, suppose you have a cross-connect that crosses node boundaries; then we may want to have something that can deduplicate and unify.
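The deduplication concern can be sketched like this (hypothetical report shape; a cross-node connection is reported once by each manager involved):

```python
def unify_reports(reports: list[tuple[str, str, str]]) -> set[tuple[str, str]]:
    """Each manager reports (manager, client, endpoint); a cross-node
    cross-connect therefore shows up once per manager. Collapse the
    reports to the unique set of client-to-endpoint edges."""
    return {(client, endpoint) for _manager, client, endpoint in reports}
```

This is the same shape of problem LLDP collectors solve: two one-sided views of a link must be merged into a single edge.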
B
Yeah, I mean, effectively what you're getting in that case is a report from both ends of the link, and I'm almost certain that the Skydive people have a way to deal with that; because if you're trying to do this, Skydive is a series of probes and then something that collects from those probes, and I know that Skydive does have situations in which it uses LLDP, and if you're using LLDP, what you get is the report of one side's view of a link, and then the second side's.
A
Okay, so in that scenario, the action item that comes out is that we have to build a monitor for publishing the local state that the network service manager knows of, when the connections are created or destroyed. I think that's the one thing that we have to add into this, yep.
A
Let's see, are there any other questions on the Skydive topic? Should we move on to our last item on the agenda?
G
Yeah, it's already on your screen; sorry, could you share it? I'm not in a position to share; my computer is overloaded with stuff, and sharing would basically kill all four cores, which are currently processing at max. So if you could share, one second, I'll quickly walk through the points.
G
So, I won't be sharing a screen today. There is a team formed, and we had all sorts of churn on our team due to various things, mainly personal health matters on the family side; that's Michael and AD, but Michael is coming back online and will be back on Wednesday. Peter and Michael and me from the FD.io CSIT team are driving it, and coordination and supervision is with Taylor and Celestina.
G
Alec Hothan has joined the team; he's the author of NFVbench from the OPNFV project, so he's now plugged in and he's fixing one specific issue, a bug, which is the one commented in the link that you will be able to click in a moment. The service topologies are VSC, CSC, CHP; I think we discussed this last week.
G
If you click on the link in the second bullet, you should get to the HTML version of the current status map as of the 6th of November. These are going to be issued every day now, and the last number is basically the date in November, so it's 06 today. You don't need to open all of them. Okay, so you can see that we do have a multi-chain TRex latency not measured, and a crash in NFVbench; that is the issue that is hard to reproduce.
G
It is only reproducible in larger-scale environments, so Alec is diagnosing that now; he's in the Pacific time zone, and in the European time zone Peter is doing the tests. All the results are being pumped in, as per the following links. All code is now done: we can basically do any combination of the CNFs in a service chain, or in a service pipeline with VNFs.
G
So we are pretty much done with the CSIT version, and it is now really doing dry runs, exactly as Taylor said, and then we are looking at fine-tuning the setup. Michael is partially back, and I believe Michael is coordinating with you, Tyler. And apologies for missing the call earlier, but my calendar is not picking up Slack yet, so it needs to pick up the slack; I'm really using my calendar as the single source of truth for my time during the day, not multiple apps, but hopefully we can fix that.
G
But I understand from the Slack conversation, and from Peter, who is also using Slack with Michael, that Michael is now basically taking the work done on the CSIT side, and he is doing the glue, the Ansible glue, so that things can be automated down to one button, or one key press (let's call it the green-button press), to run those environments in Packet, and that will allow you to then further abstract it in your orchestration stack.
G
So it can be, you know, used in the future for driving larger systems. So let me just finish my outline and then I'll let you talk. That's where we are. For Packet, I think I should now have, in my inbox or on Slack, the location of the NICs and the way to access them in the Packet environment.
G
I think the biggest risk I see is the packet.net part, because it is a third-party-operated system and I don't know whether it's all working; but according to Taylor, per your communication on Slack, the Intel NICs are there and they are working. So maybe I'm just paranoid, but when we do a baseline calibration test we should know in a day or two if we are good, and then it's just a question of dry runs. That's it; thank you.
D
Then we prioritize and merge in the optimization and tuning for the performance side, and then make sure that we can reach the rates. As I said earlier, we have testing with the Mellanox dual port, and we just added the quad port; the quad-port NICs were not fully available until yesterday, and some of them were available last week, so we've gotten started, but that's in there, and we'll keep doing testing and validation and make sure everyone else can recreate any of those tests.