From YouTube: Network Service Mesh Meeting - 2019-05-14
A: So, referring to events: we have a few recurring calls. We have this one; we have the NSM docs call, which occurs every Wednesday at 8:00 a.m. Pacific time; and we have the NSM use case call, which occurs every second, fourth, and fifth Monday at 8:00 a.m. Pacific time as well. We are also participating in the CNCF Telecom User Group, which occurs every first and third Monday at 8:00 a.m. Pacific time.
D
Yes,
sorry
I
was
muted,
yes,
this
is
it
and
that
title
probably
needs
to
just
be
updated
from
bofto
tag,
so
the
intro
for
the
antelco
user
group
will
be
happening
in
there
as
well
as
you
seen.
A
testbed
is
going
to
be
related,
but
it's
not
it's
one.
Initiative
related
to
the
tag
I'm,
not
sure
what
all's
going
to
be
going
on
on
that
and
still
getting
updated
and
with
Cheryl
and
Dan
I'll
be
mainly
focused
on
the
testbed
myself.
A: Coming up next week we have KubeCon; KubeCon itself is from May 21st through 23rd. We also have a few co-located events: the FD.io Mini Summit and Cloud Native Network Services Day, and we have talks in both of them. In the main KubeCon sessions we have an intro and a deep dive in the maintainer track.
A
If
you
do
so,
if
you'd
like
to
learn
more,
if
you're
want
to,
if
you
want
to
join
in
and
help
describe
what
networks
or
as
much
as
to
others,
feel
free
to
feel
free
to
join
in,
if
you
will
be
in
Barcelona,
we
also
have
to
con
China
coming
up
at
the
end
of
June.
In
Shanghai,
we
have
a
intro
talk
that
will
be
given
by
me
and
Nikolai.
A: Later in 2019 we have another event and KubeCon in November, both of them unfortunately scheduled at the same time in different cities: one of them is in Los Angeles, and KubeCon is in San Diego. The call for papers for KubeCon is currently open, so again, same as with ONS: if you'd like to give a talk there, definitely speak up. We will definitely be submitting a few Network Service Mesh talks there as well. If you have an event that is not listed, definitely speak up, and also open a pull request.
E: In the last week we gained 13 followers, so we're up to 185 people following Network Service Mesh on Twitter. We've followed 200 more, so we're over 1,000 accounts followed, and we have 21 tweets, which is four more than last week, and we retweeted about 10 things. I also scheduled five tweets in HootSuite to go out at roughly one a day to promote the intro to Network Service Mesh and the deep dive, as well as the talks during the FD.io Mini Summit and the Cloud Native Network Services Day session.
F: The other thing I do suggest: if you have some other Network Service Mesh related thing, for example, I know we have people giving talks and demos and staffing booths, et cetera, there are two things that I would strongly suggest you do. One is update the events page on the website, and the other is let Lucena know so that she can schedule things. Because, correct me if I'm wrong, Lucena, adding something at your leisure to the HootSuite schedule for tweeting is not hard; asking you to actually pay attention and live-tweet things is.
A: Which also brings us to the frequently asked questions page. That's something that I'm going to get started on today, and I'll shop it around on the NSM and dev Slack channels so that people can add or change their views on things. That way we have something ready for KubeCon.
G: First one, also just calling out in case anyone else is interested: please let us know, please reach out to me, and then we can definitely look at whether you want to host it or you want to partner in doing the Meetup; that would be good. So that's the update, and as a next step, I'm working with our company to get funding for the meetup.
A: I think that sounds good, and then once we run the first one and gain a sense of the size of the community, we can start to look at what cadence we want to run at, because one of the tricks to attracting a lot of people is to have a very predictable set of meetups with high-quality content. If you're predictable, people know to block their calendars at a specific time, and it just becomes part of the ritual. All right.
A: So we ran through this on the docs call as well. This is the initial set of release notes that we're looking at. Right up front, the first thing that we want to do is make it easy for people to work out how to get started, so this is where we describe how you install it and link to the material that describes how to install it.
A
We
want
to
make
it
very
easy
to
find
out
where
it
is
and
link
them
to
two
working
demos,
and
so
once
we've
done
that
and
then
we
describe
we
jump
straight
into
what
is
what
is
network
service
right?
So
I
took
a
departure
from
from
the
norm,
not
that
we
have
norms
yet
on
our
own
releases,
but
three,
it's
very
common.
If
you
look
at
like
kubernetes
and
and
various
other
communities
that
what
they'll
do
is
they'll
just
list
all
the
pull
request
saying
here's
what's
changed.
A
The
problem
is
that
if
we
do
that
we'll
probably
list
around
nine
hundred
commits
from
day
one
to
today,
and
so
rather
than
do
that,
I
decide
opted
for
I
opted
so
eight
hundred
cubits
little
video,
privy
nine
hundred
by
the
time
we
get
to
it,
but
III
opted
for
describing
what
network
service
mesh
is
and
what
the
major
set
of
components
are,
and
so
there's
a
couple
to
do
of
tasks
that
are
that
are
in
there
that
we
need
to
that.
We
need
to
fix
up,
but
basically
describing
this
is
network
service
mesh.
A
This
is
the
this
is
the
reference
architecture
that
we're
that
we're
releasing
and
also
make
it
very
clear
that
the
reference
architecture
is
not
it's
not
all
of
NSM
that
it's
just
a
small
part
of
an
SM
and
that
the
big
part
is
going
to
come
as
people
continue
to
build
on
top
of
it
and
integrate
their
own
other
things
into
it.
Call
out
that
it
they
said
is
now
a
CNCs
and
proud
bucks
project.
We
now
have
a
logo
when
I.
A
So
we
will
also
have
known
issues
and
describing
describing
that
this
is
an
alpha
release
and
don't
run
it
in
production.
Yet
if
you,
if
you
do
run
in
production
and
tell
us
everything
that
breaks,
we
go
back
to
how
to
get
involved.
So
we
need
to
fill
in
like
where
to
find
a
meeting
sort
of
finds
for
where
to
find
us
on
slack
mailing
list,
etc,
etc,
and
one
less
part
is.
F: In the known issues there are a bunch of known issues about limitations in the kernel, some of which we've bumped into. For example, the Linux kernel appears to have a global limit of 128 MAC addresses in its neighbor table, and so if you're doing something where you're programming neighbors as part of your connection context, I mean, you could go tweak your system to increase that limit, but I can't make a default Linux system actually scale there. It's just not a thing it does.
A: Yeah, thanks, that's a good call-out. We will have the same problem as well, not only with ARP requests, but also with how many interfaces you can have on a single system, which is part of the reason we really need to bring in, well, it's not really bringing them in, we've already brought them in, but to encourage the use of things like shared memory.
F
You
know
the
aluminous
how
much
memory
you
have
on
the
system.
The
kernel
interfaces
I
think
the
limit
is
a
thousand
24
in
total.
You
know,
and
that
shouldn't
be
bad
for
most
nodes
at
this
stage,
but
that
that
limitation
is
going
to
become
all
kinds
of
intractable
as
servers
go
up
and
not
just
for
network
service
mesh.
For
frankly,
everybody.
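The neighbor-table limit discussed above comes from the kernel's garbage-collection thresholds. A minimal sketch, assuming the common Linux defaults (gc_thresh1=128 as the soft floor, gc_thresh3=1024 as the hard cap); the helper name is ours, for illustration, and not NSM code:

```python
# Sketch: compare a planned number of programmed neighbors against the
# Linux neighbor-table GC thresholds (net.ipv4.neigh.default.gc_thresh*).
# Defaults below mirror common kernel defaults: gc_thresh1=128 (entries
# below this are never GC'd), gc_thresh3=1024 (hard cap).

def neighbor_headroom(planned, gc_thresh1=128, gc_thresh3=1024):
    """Classify a planned neighbor-table population.

    Returns "ok" below the soft threshold, "pressure" when the garbage
    collector will start trimming aggressively, and "over" when the
    hard cap would be exceeded.
    """
    if planned > gc_thresh3:
        return "over"
    if planned > gc_thresh1:
        return "pressure"
    return "ok"

if __name__ == "__main__":
    for count in (64, 500, 2000):
        print(count, neighbor_headroom(count))
```

On a real host the current thresholds live under `/proc/sys/net/ipv4/neigh/default/`, and raising them is the "tweak your system" option mentioned above.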
F: I think we'll just put them in known issues, because they are just known issues; in some sense they're kernel bugs, and I don't think most people are going to hit them at this point. Most people are going to be using us for L3, so they won't use the IP neighbors stuff, so they won't hit that limit.
F: One of them was that, as we've added new things like AWS, we haven't always figured out the right set of limits and quotas for things; I think those are mostly fixed now. We would hit a place where we couldn't create the clusters in order to do testing, because we were hitting limits on quotas on various cloud providers. The second thing that happened is that we weren't cleaning up.
A: So if you're looking to actually gain that experience, CI is by definition that: you're building a system, you're spinning it up, you're seeing how it works and verifying that it works. So you have all of the code there that describes how these mechanisms work, and it's actually a really good way to get involved with the project.
F
It's
just
one
of
those
things
where
I
noticed
it
wasn't
on
the
agenda
and
I
know
that
if
I,
if
I
wasn't
as
involved
in
sort
of
getting
to
the
small
number
of
her
issues,
I
would
be
super
concerned
with
what
I'd
seen
MSCI
in
the
last
week,
and
so
I
did
want
to
make
sure
that
we
brought
it
out
front
and
center.
We
talked
about
it.
We
talked
about
what
was
going
on
and
the
fact
that
we
think
we've
actually
shaken
this
out
at
this
point.
C: I think that we should say it clearly that we have three public clouds, plus Packet, which is again a kind of public cloud deployment, in our CI, and synchronizing between these and making sure that they all work out the way that we expect them to, at the same time, I mean, it just takes some time.
A
Yeah
and
in
terms
of
in
terms
of
best
practices
in
the
application
world,
so
they
there's
this,
there's
this
push
towards
what
they
call
continuous,
continuous
deployment
and
continuous
delivery.
So
this
one
is
not
continuous
deployment,
but
it
is
continuous.
It
is
pushing
towards
continuous
delivery
towards
that
direction,
and
it's
just
exactly
as
you
described
any
any
committee
you
go
through
runs
through
a
large
battery
of
tests.
Things
are
fully
automated.
C
Yeah,
so
I
did
and
I
believe
a
couple
of
other
people
also
did
some
grooming
around
the
back
lock
here
and
what's
in
progress,
we
also
merged
some
things
and
thing
that
we
are
I
mean
okay.
Today
we
were
supposed
to
do
the
the
release,
but
yeah
still
we
have
some
issues
but
still
I
think
that
we
are
in
in
a
good
in
a
good
shape
relative
to
where
we
are
I
mean
what.
C: I'm fine with that. I think that we're at the stage where we can do that. At least regarding the demos, I don't think that we have any issues there, and they should be completely fine if we branch today; that's one thing. On the other hand, there is also this push around consolidating the images that we use in our CI, and I am also...
C
Don't
know
how
many
of
you
remember,
but
there
is
on
another
repo
code,
examples
where
I'm
also
pushing
some
things
there
and
I'm,
also
working
on
moving
or
at
least
replicating
the
same
example
that
we
have
in
the
main
area
put
there.
So
my
question
here
would
be:
do
we
think
that
the
examples
could
be
this
source
of
truth,
where
we
actually
do
the
demos
from
or
we
should
just
keep
the
idea
with
the
branch
and
continue
with
it?
Yeah.
F
Throughout
there
is
I
kind
of
like
to
see
CI
stable
for
a
couple
of
days
before
we
pull
the
branch.
Okay,
I
really
want
to
see
the
branch
pole
not
only
for
the
demos
at
cube
con,
but
also
because
that
frees
up
master
again.
You
know
for
development
and
there's
a
lot
of
good
reasons.
I
would
see
the
branch
bold
but
I
kind
of
like
to
see
the
CI
relatively
stable,
at
least
on
the
map
yo.
C: I'm not sure if this should be a showstopper for us. I mean, should we consider IPv6 for the base release or not? I think that for the time being we can leave it, and maybe pick it up down the road. Other than that, I don't see anything that's really particularly outstanding, probably mostly things around the CI, and I would agree with you here that maybe this should be our main point: just make the CI stable and then branch.
F
What
I
would
really
love
to
see
is
I
still
would
like
to
see
us
push
to
get
ipv6
working.
If
we
can't,
we
have
a
little
bit
of
time
before
we
pull
the
branch,
because
it's
gonna
need
it's
ice
table.
We
keep
pushing
on
v6
and
see
if
that
comes
together.
If
it
comes
together,
it
comes
together,
but
you
know:
how
do
you
b6
working
I
think
would
be
really
really
a
good
idea.
So.
C
Ipv6
payloads,
it
depends
on
that
just
fixing,
Chris
figuring
out.
What's
what's
going
on
why
test
arts
is
failing
when
two
point
one
I
get
tin,
but
for
the
Custer
I
think
that
that
it
would
be
I
mean
our
our
CI
and
everything
is
so
complicated,
already
dyadic
ipv6
on
top
of
it
before
we
consider
a
substantial
pre
factor,
or
at
least
some
kind
of
more
consolidating
this
whole
big
homophile
that
we
do
I
know
that
under
a
hit
some
ideas
around
changing
the
CI
but
I
mean.
F: Let's go ahead and see if we can get that patch updated to what's currently on top of master. It just feels like some of what I'm seeing there looks like issues that we solved with some of the fixes that went in. So let's see if we can get him to update this to the latest on master, then rerun the CI again and see where it stands.
F
We
have
two
things
on
a
TV,
6and
and
I
would
like
to
get
both
of
them,
but
priorities
are
always
a
thing
and
so
that
I
find
it
most
productive
to
ask
people
to
priority
order
to
break
things
so
for
the
v6
proponents,
which
do
you
care
more
about
having
v6
payloads
running
across
the
network
service
mesh
or
having
the
network
service
mesh
running
on
a
kubernetes
ipv6.
Only
cluster.
I: That one's tough. So once again, the service provider is the redheaded stepchild: it's basically dictated that anything I do must have IPv6 support, so I need both. If I had to prioritize, I would probably pick payloads, because for the cluster itself I can cheat and put an IPv6 NAT in front of the services and then just do IPv4 locally. Long term I want both, but I would probably start with payloads, personally.
F: Okay, that's good to know; I misunderstood your initial statement, so that's good. It's just like anything else for those of us who've been around the block with IPv6. The way that Network Service Mesh has been architected, it should work perfectly fine with IPv6, with no problems whatsoever. We also know, as we've already found a problem with the payloads, that that will be false until we actually test it and find the little misses.
F: Yeah, so basically, assuming that the stickers I ordered are delivered properly to me, literally the day before I get on the plane to KubeCon, we will have 500 of each of these stickers available to spread around. We've got two stickers that we did. One is the very sensible circular logo with Network Service Mesh on it; the other one, well, everyone knows I've got a problem with QR codes, and the second one is my problem with QR codes manifesting itself.
F
The
second
one,
actually,
if
you
scan
the
QR
code,
it
will
actually
take
you
to
our
website
and
it
takes
you
to
our
website
in
a
way
that
can
be
tracked
by
Google
Analytics.
We
could
see
what
the
response
is
on
the
sticker
so
but
like
I,
said
well
how
about
500
each.
So
if
you
find
me
a
cute
con
and
having
to
give
them
to
people
to
hand
out
themselves,
I'm
happy
to
get
people
to
hand
out
other
talks,
I'm
happy
to
give
them
to
people
to
hand
out
of
their
booths.
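Tracking like the sticker link described above is commonly done with UTM query parameters that Google Analytics attributes to a campaign. A minimal sketch; the URL and parameter values here are illustrative assumptions, not the project's actual campaign tags:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_url(base, source, medium, campaign):
    """Append Google Analytics UTM parameters to a URL so visits can be
    attributed to a specific campaign (for example, a QR-code sticker)."""
    parts = urlsplit(base)
    query = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

if __name__ == "__main__":
    # Hypothetical sticker campaign; Analytics would report these visits
    # under source=sticker, medium=qr.
    print(tag_url("https://networkservicemesh.io/", "sticker", "qr", "kubecon-eu-2019"))
```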
B: Can you actually just project, open the use case document again? Awesome, thank you, perfect. So we've been having very good discussions as part of the use case call, narrowing down to a specific near-term use case which we can work on. The first responder use case is essentially top-notch, of high interest, and an area where NSM can show concrete value. And what we also did was, rather than just drawing out a top-level use case...
B: Many, I mean all, of the operators were in the discussions, and so basically, as part of it, we said the first sub-use case, or function, would be the mobile client site. The next sub-use case would be sort of what happens in the mobile network packet core, and the third one was the mobile network RAN. So basically these are the sub-use cases we broke it down to, and then we did an analysis.
B
How
can
you
make
concrete
progress
on
these
sub
use
cases
or
functions?
What
do
you
realize
was
there
is
a
fantastic,
open,
EPC
implementation
available,
so
this
is
basically
from
Sprint
and
Intel
they're,
the
primary
contributors.
It's
called
ohmic,
it's
a
full-blown
EPC
implementation,
every
component
exists,
and
notably,
what
is
of
interest
would
be
a
cloud
native
control,
plane
and
data
plane.
Even
that
is
fully
disaggregated
right
now.
This
implementation
is
4G
and
then
they're
moving
to
5g
very
quickly
and
specifically
in
the
4G.
B
What
is
very
nice,
as
they
have
a
poor
G
is
Gateway
ap
gateway
control.
They
ended
up
in
a
fully
fully
cloud
native
high
performance
with
PPD
KS
r
EO
v.
All
the
options
are
available
right
and
you
can
even
separate
out
the
sk,
p
and
p
gateway
peters
here.
It's
our
you
can
package
them
together
fight,
and
so
basically
the
idea
was
so
far
we've
been
looking
at
hey.
B
Besides
the
k8s
interface,
there
is
one
interface
towards
ran,
which
will
process
gtp
you
packet,
and
there
is
another
interface
towards
the
internet
or
STI
interface
little
you
know,
emit
out
IPSec
tunnels
or
just
standard
VLANs,
whatever
you
choose
right
so
now,
with
this
view,
what
we
are
seeing
is,
it's
probably
worth
you
know,
for
the
team
and
for
the
community
to
work
together
on
this.
You
know
on
driving
this
specific
sub
use
case,
and
here
the
goal
is
hey.
This
project
is
coming
from
Intel,
so
the
questions
were
asked
right.
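The RAN-facing interface mentioned above carries user traffic inside GTP-U, which has a small fixed header. A minimal sketch of framing a user packet as a GTPv1-U G-PDU; the field layout follows the public GTPv1-U format, but the function name is ours and this is an illustration, not OMEC or NSM code:

```python
import struct

def gtpu_encap(teid, payload):
    """Prepend a minimal GTPv1-U header to a user IP packet.

    Flags 0x30 = version 1, protocol type GTP, no optional fields;
    message type 0xFF marks a G-PDU (encapsulated user data). With no
    optional fields, the length field counts only the bytes that follow
    the mandatory 8-byte header.
    """
    header = struct.pack("!BBHI", 0x30, 0xFF, len(payload), teid)
    return header + payload

if __name__ == "__main__":
    # Toy 20-byte "IP packet" tunneled under TEID 0x1234.
    pkt = gtpu_encap(0x1234, b"\x45" + b"\x00" * 19)
    print(pkt.hex())
```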
B
You
know
Intel
is
a
heavy
proponent
of
Malta,
so
here
our
message
is
very
clear:
we're
going
to
compliment
Maltese
in
you
know
in
automating
the
necklace
network
service,
as
he
clearly
see
multiples,
has,
is
a
good
specification
but
doesn't
drive
automation.
So
basically,
you
know
everything.
Even
the
IP
addresses
are
all
manual
right
so
and
then
it
basically
gets
into
a
plug-in
specific
network
plug-in
specific
exercise,
so
we're
NSM
can
really
help
us.
B
Essentially,
you
know
in
network
automation,
in
the
case
of
hardware
and
software,
with
different
capabilities,
for
example,
the
hardware
may
have
SRO,
we
are
smart,
Nick
or
maybe
may
not
have
any
high
performance.
You
know
attributes
right.
So
how
do
we
automate
it
seamlessly
and
keep
things
simple
from
a
CNS
perspective?
Right,
that's
word!
That's
what
initial
value
comes
in
and
regarding
making
even
more
specific
progress.
Our
thoughts
were
hey.
Why
not
come
work
very
closely
with
the
telco
working
group
right?
We
have
failure
here.
B
So,
basically,
a
there
always
been
looking
for
good
bnx,
and
this
is
something
which
we
can
help
drive
closely
working
with
them
and
in
terms
of
not
just
working
together,
but
also
utilizing
the
set
up
right,
common
set
up
and
then
making
rapid
progress,
which
could
also
lead
to
sort
of
you
know
the
next
step
around
how
we
can
deploy
this
on
the
packet
infrastructure.
They
take
me
to
the
further
next
level,
but
there
again
there
seems
to
be
bigger
interest
in
tying
to
other
sub
use.
B
You
know
this
use
case
and
sub
use
cases
in
SM
in
packet
right.
At
least
this
is
a
thought
process
which
came
out
of
you
know
several
use
case
meetings
and
really
thanks
to
the
use
case
cream.
For
you
know,
helping
get
here
to
level
of
concrete
detail
which
we
have
here
and,
of
course,
frame
has
been
besides
me.
A
key
proponent
of
this
use
case
it's
like
to
thank
him.
B: And also one closing thought here: Nicolai brought up a very good point on the roadmap. There are several items there, use cases, CI, and so on, so we do think this can be a concrete driver for several other activities. We start off with a concrete use case, and especially a sub-use case, and then that could be the driver for other specific tasks around CI or the integration of what we are trying to do.
F
Schedule
is
going
to
be
insanely,
packed
so
doing
a
happy
hour.
I
think
it's
likely
to
be
problematic,
but
what
I'd
like
to
do
is
figure
out
a
place
at
a
time
where
we
could
all
get
together
around
whiteboards
and
as
a
community
sort
of
brainstorm.
Some
of
the
things
going
forward,
because
there's
a
lot
of
cool
things
going
forward
that
we
can
and
should
do
as
a
community
that
I
would
love
to
talk
through
with
everybody.
Does
that
sound
reasonable.
A
Sounds
reasonable
and
so
will
I
think
we
can
do
two
things
so
number
one
for
the
people
who
are
here
hop
on
to
the
NSM
slack
channel,
because
your
slack
should
still
work
in
in
Barcelona.
The
second
thing
is
will
also
announce
times
and
dates
for
any
of
these
type
of
events
that
we're
coming
up
both
on
slack
on
black
and
on
with
that
I.