From YouTube: Network Service Mesh WG Meeting - 2018-09-07
C
I think we're sort of good at this point. It might be useful to have an item for a little bit of cross-discussion around getting some of the hardware CI working at Packet, because I know that's something I've been looking at, and I know we've got other folks on the call who've been looking at that. So that might be something worth talking about, if for nothing else then just to sort this out.
A
So if you're going to Amsterdam, or you live near Amsterdam, and you want to talk to either me or Kyle (I don't believe Ed will be there, but Kyle and I will definitely be there), then come along. We have a presentation that we're giving, and hopefully we'll drum up a lot more visibility and support for what we're working on.
A
Yeah, that's a good point, and so we've added a couple of submissions to KubeCon, so we'll have to see what happens in that process. One of the recommendations I have... I don't see them on the meeting right now. Okay, yeah, he's here. Okay. So just a suggestion: Thomas has been working on a cross-connect with VPP, and I think that would be an excellent submission. So if you haven't added that already, I would definitely recommend you do so.
E
Are you saying Tom Herbert? Yeah, that's right, he's not on the call. He messaged me; he's at the DPDK event in Ireland and he's having trouble connecting from a cafe or something there. So he's trying to figure out the right number to call in at from Ireland; he's not actually on the call but is trying to get in at this moment. Well, he signed in to the document, but then he couldn't get in to the Zoom window.
F
Sure. So basically most of the work is done, and right now there are two components. One component is a binary; so far it's a standalone binary which basically scans the host and prepares the config map, which is then used by a controller running on the host providing the SR-IOV services.
F
The bits are running on one of the Packet.net servers. It seems to be okay; there were some minor tests done to talk to the VF devices, and that seems to be okay, but there's a part missing, which is the actual data plane. Right now there's an effort to bring in either VPP or DPDK to be able to actually test the data-plane part of this solution.
C
Exactly, and this is part of what I was wanting to talk about, about getting CI working on Packet, because I know that there are a bunch of different efforts going on between Network Service Mesh and some of the VNF comparison stuff at CNCF, where I think we both have a really strong interest in getting, minimally, T-Rex, which is the packet generator, and VPP working in the Packet environment.
C
Some of the issues that are going on... I know we've got Michael on the call, who can say more about this than I can because he's been digging into it a lot, but the challenge I've seen has been that most of the Packet machines have Mellanox NICs, which are wonderful NICs; unfortunately, their drivers are a problem. In particular, the DPDK mlx5 drivers were broken, and so there's a patch that fixes them in VPP.
C
So if you go get the latest VPP 18.07 from the stable 18.07 branch or from master, you can build with Mellanox drivers, although that's challenging. I know Michael spent a lot of effort trying to get that going, and so there are just challenges with using Mellanox in general that we're sort of working through. I think it's not so much that the drivers don't eventually work well; it's just that the consumability of them is tricky.
C
We actually found out about the release because we rolled the release out and a whole bunch of people started saying, "Hey, what's going on? Mellanox isn't working." What it turned out was that Mellanox had done things in DPDK 18.05 that broke their drivers around the IOMMU stuff, and so everybody got their heads together, there was a big collaboration, everyone moved fast, and we got patches upstream, where VPP will patch its version of DPDK when it builds. But I know that you've got some connection, Billy, with the packaging of DPDK.
C
I tend to think of these as two problems. The first was to stop the bleeding, with VPP patching DPDK when we build it, but the second thing is to make sure the patches go upstream to DPDK and that the backports go upstream to the packagers, because DPDK 18.08 has already come out, but we can fix the 18.08 that goes into distros, and we should. Alright, do you have things you would like to add to this whole discussion?
C
Thinking about this list, I will note, by the way, that in poking around with CoreOS, which is what Cross-cloud CI is using by default, it does appear that CoreOS has the IOMMU on by default and has hugepages set to 2M by default. So it looks like, at least with CoreOS, you don't actually have to tweak kernel parameters in order to get the right things available at the kernel level, but I'm not entirely certain.
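The kernel-parameter check being described can be sketched as a small shell helper. The parameter names (`intel_iommu`, `iommu`, `hugepages`, `default_hugepagesz`) are the standard kernel command-line options; the sample command line passed at the end is invented for illustration.

```shell
# Hypothetical helper: inspect a kernel command line for the settings
# discussed above (IOMMU enabled, hugepages configured), the way you
# might sanity-check a CoreOS node before running DPDK workloads.
check_dpdk_kernel_params() {
  cmdline="$1"
  ok=0
  case "$cmdline" in
    *intel_iommu=on*|*iommu=pt*) ;;            # IOMMU flag present
    *) echo "iommu: not enabled"; ok=1 ;;
  esac
  case "$cmdline" in
    *hugepages=*|*default_hugepagesz=*) ;;     # hugepage config present
    *) echo "hugepages: not configured"; ok=1 ;;
  esac
  [ "$ok" -eq 0 ] && echo "kernel params look DPDK-ready"
  return $ok
}

# On a live node you would pass the real command line:
#   check_dpdk_kernel_params "$(cat /proc/cmdline)"
check_dpdk_kernel_params "BOOT_IMAGE=/vmlinuz intel_iommu=on default_hugepagesz=2M hugepages=1024"
```

On a CoreOS node with the defaults described above, the check should pass without any manual tweaking.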
H
What we did initially with them... it's still an ongoing process with Mellanox to support DPDK by default, because of their OFED dependency; it's always because of the OFED dependency. So one thing we could look at is: can we work with Mellanox like they did for us? They pre-build their images, and they do pre-build the packages for either Debian or Ubuntu, like what they did, and we used that one.
C
Then that would be good, but there are two things I want to really strongly press them about. One is to include the flippin' fix, so the drivers are usable, and the second one is that they actually do publish that in packages, but in a very unhelpful way. Right now you can go click through a bunch of stuff to download them, but I would really love to see them make them available via something like a Packagecloud apt or yum repo, so that you could just flippin' point to an apt or yum repo and install them.
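What's being asked for would look something like the sketch below: a plain `.repo` file pointing at a package repository, so the drivers install with a single `yum install`. The repo name, URL, and package name are all hypothetical; no such repo exists today, which is exactly the complaint.

```shell
# Hypothetical .repo file illustrating the wished-for workflow; the
# URL and package name below are invented, not real Mellanox artifacts.
cat > /tmp/mellanox-hypothetical.repo <<'EOF'
[mellanox-dpdk]
name=Hypothetical Mellanox DPDK/OFED packages
baseurl=https://example.com/mellanox/dpdk/el7/x86_64/
enabled=1
gpgcheck=1
EOF

# With that file dropped into /etc/yum.repos.d/, installation would be:
#   yum install mlnx-dpdk        (illustrative package name)
grep -c '^baseurl=' /tmp/mellanox-hypothetical.repo
```

The point of the sketch is the shape of the workflow, not the contents: one repo file, one install command, no click-through downloads.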
G
Yes. If anything, I can add a few more details about my packet generator work. What I ended up doing there was, since I'm running everything in a container, I found that the version of T-Rex that I'm using, I think it's 2.30, only supports an older version of OFED, so I actually went and installed that one, and then for the container I had to share all the host's libraries with the container, and then it looked like it works.
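The container setup being described (host libraries bind-mounted into the T-Rex container) might look roughly like the sketch below. The image tag, library paths, and T-Rex invocation are illustrative guesses, not the actual configuration used.

```shell
# Sketch: run a T-Rex container that borrows the host's shared
# libraries read-only, as described above. Image name and mount
# paths are illustrative; a real setup would match the host distro.
TREX_IMAGE="trex:v2.30"
run_cmd="docker run --rm --privileged --net=host \
  -v /lib/x86_64-linux-gnu:/lib/x86_64-linux-gnu:ro \
  -v /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:ro \
  ${TREX_IMAGE} ./t-rex-64 -i"

# Print the assembled command rather than executing it here:
echo "$run_cmd"
```

Sharing the host libraries this way sidesteps version skew between the container image and the host's OFED/driver stack, at the cost of making the container less portable.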
C
The reason I was asking is that Hanoch, who's the PTL for T-Rex, is a good friend. So if the thing that's actively broken in T-Rex hasn't been fixed, I can go talk to him about it, but if it's something he has fixed in more recent versions, yeah, I'm going to get a very interesting response from him if I complain about problems that are already fixed.
G
I know how most of them feel about doing things for older versions; I guess they have quite a short support window. Why not just use T-Rex directly? I know that's typically what FD.io does. The problem with doing that is, if we want to make any actual measurements, any NDR or PDR numbers, then we'll still need to write the cases for setting up the traffic and setting up the flows.
C
With all the issues we're hitting, we may be blowing more time into working with antiquated things; it's a solution for now. Awesome, that's good! When you get a little more settled, if you could add more comments to this, because a lot of this is breadcrumbs, and people will basically be following the breadcrumbs from this and from the instructions.
C
Sure, you're now seeing things that are less awful rather than more. Cool, yeah, so this is goodness. Cool. Do you want to add anything else on this?
A
You know, we still want to have confidence in our changes over time, so there's going to be quite a bit of work ahead on my part in order to do that. Kyle, did you want to add anything on your side about some of the stuff that you're doing? Because I know you're looking at some of the CI stuff, and I'm not seeing anything else on the agenda that you've been working on.
I
So it helps if I unmute as well, I realize now. Can you all hear me now? Yes? Yep, okay. Yeah, the stuff that I worked on this week: Ed was looking at the CRD decode a bit last week, and of course everyone was traveling, so I never had a chance to circle back with him until Tuesday. But essentially I pushed the patch, and Ed and Sergey reviewed and merged it, which basically means we now automatically generate OpenAPI v3 validations for all our CRDs as well. I think it's the third one there, the "fix CRD decode, auto-generate" one. So that one was pushed and merged, and that's actually pretty slick, because it now means that all of that validation code, which was written by hand and required syncing whenever any of that ever changed in our CRDs...
I
Now it's all just auto-generated and should automatically get validated the correct way as well. So yeah, that's basically what I worked on this week, and then after that I made our CRD creation a little bit more robust. I talked to Sergey, and I think he's going to push out a patch to modify the CRD creation a little bit more, even yet again. But yeah, so basically it was all about this work.
C
If we could, we should also make sure that the documentation is clear enough today, because I was trying to follow the mutation and it looked like there were missing steps. That doesn't mean there are missing steps; it just means there was confusion on my part, but it's worth revisiting, because I know that folks can get... all right. We've had a couple of conversations about this; in fact, it's probably worth talking about this stuff about channels versus not channels that's on this call.
C
One of the things that I've been noticing is that when we originally started talking about network services, we were sort of cueing very close to the patterns that existed in Kubernetes services. You look at a Kubernetes service: you've got a service, it has a name, and it has ports, and a Kubernetes service can have multiple ports. And so at the time we said, well, calling something in networking...
C
...a port is probably not a good idea, given how many other things are named "ports," and so we sort of called them channels. And in thinking through the whole thing, I'm coming to be of the opinion, and I think Kyle is as well, that we should just do away with the channel concept and have a network service support a single kind of payload, so that if you need multiple of them, you just have multiple network services.
C
It kind of simplifies a bunch of things in the architecture to go that route. Channels end up introducing a lot of complexity and a lot of weird questions about how you do them, and I'm not sure how much value they're actually bringing us. But I did want to raise that here and see what everyone else's opinions were before we started hacking through code and stuff.
A
Yeah, and when I was thinking through these scenarios, I wasn't able to think of any scenario where not having multiple channels made things unexpressible, and I know that Ed's gone through this exercise several times as well. In fact, in most of the exercises that we do when we discuss requesting a service, and accept, and so on, one of the things that I noticed was that we're not making any mention of channels in there whatsoever.
C
Okay, now, this is probably for the good. I think it's also important for us to have these conversations as a community, because I can't tell you the number of times I've been in situations where people run off and do things they think are smart, and come back, and someone says, "Wait, wait, wait, that is actually really important to me. Let me explain why this is not where we want to go." So it's better to sort of talk these things through ahead of time. Cool, awesome.
A
Twenty or 25 minutes, but simultaneously it didn't feel like 93 slides, which is good. Presentations with 93 slides in that short an amount of time usually feel rushed, but no, it went really, really well and got the point across, I think. And it's actually really uncanny, because the person from Telus, I think it was Santa, who gave a talk... she gave the talk and listed all of her problems, and then Ed's presentation was like: here are all the solutions to your problems, one for one.
C
Effectively, people were incredibly, incredibly happy. They identified really closely with the kinds of problems that we had; there were folks who commented that they identified very closely. I was telling sort of the secure external connectivity story, and lots of people identified with that, in particular with the sort of service-definition hell problems that everyone runs into. So there was a very strong sense from folks in the audience that this is really where they wanted to go, which is always a positive thing.
C
I think that's probably... yeah, it was nice to have the live questions and answers available on the website, and it also turns out to be massively handy to have the QR code. The QR code stuff still makes me so happy, because I had a couple of places where various people pulled me aside and asked for pointers, and I could just bring up the mobile phone and give them the QR code directly to the website.
C
...our proxy network service managers. An intermediate NSM will let you have a participant in the network service mesh that is external to your cluster, whether that's a different cluster or something that's managing physical network stuff, and a proxy NSM will let you basically insert a control-plane helper into the service function chain. So if you're doing things like segment routing v6, you end up with segment routing v6 as your underlying carriage.
C
The proxy NSM allows you to insert some wisdom about the physical network, so those are two exciting things we don't have a lot of good collateral on yet, although I'm not quite sure how to represent the proxy NSM. Proxy NSMs are sort of a lightsaber in the network service mesh world: they can cut through anything, but if you're not strong in the force, you're going to cut off your own arm.
A
Yeah, I found proxy NSMs really, really useful when someone tries to say what network service mesh can't do: look at the general pattern of the CRD, and look at the general pattern of the standard NSM. One of the questions that sometimes gets brought up is that it doesn't really handle use cases where some form of omniscience, or something really advanced, requires very tight coordination from all parts of the chain.
A
Okay, well, in terms of time and priority, I think we should discuss Packet.net and what we want to do to get continuous integration running on there. So first, to note, we have a Packet.net account that was graciously provided by both CNCF and Packet.net, and so one of the things that we need first is to make sure that everyone who is going to work on it has access to Packet, that they have a username and access to the group.
A
You can run it in a VM mode or in a container mode. Right now we're running it in the VM mode, because we have requirements that require root and require capabilities outside of the container, so it's slowing down our initial start time, because it has to spin up the VM, and simultaneously we're running Kubernetes within that VM, and a VM in a VM is very slow. And so the initial setup, from my view, was that we continue to keep Packet, but we switch...
A
Sorry,
we
keep
travis,
but
we
switch
travis
or
container
mode.
So
we
get
a
very
fast
start.
Travis
can
then
send
the
appropriate
commands
to
to
pack
it
in
order
to
spin-up
or
spin-down
a
cluster,
so
we're
gonna
have
to
think
a
little
bit
about
how
we
want
to
deal
with
this
in
terms
of
authentication
and
so
on,
make
sure
that
we
don't
get
exposed
credentials
and
and
think
about
how
we
want
to
approach
this
and
the
other
thing
that
we
need
to
work
out
as
well.
A
All right, so in terms of details, how do we want to approach this? Do we want to split up the tasks into a series of smaller bites? Because one of the things I want to be careful with is that I don't want to end up without a CI system while we're making this transition and have to say, "Hey, there's no CI until we get it all working."
C
Yeah, going until we actually have something working, that's different. I was actually kind of toying with a notion, because as we make this transition, the real work of the CI is going to be happening in things like Packet, but you still want a control point with the right webhooks. So I was actually literally looking at maybe playing with this in CircleCI, while keeping the stuff we have in Travis going, and just not making the CircleCI stuff voting until we get it working.
A
If
it
doesn't
one
option
that
we
have
as
well
as
we
can
fork
the
the
repo
and
focus
on
getting
a
Travis
setup
that
that
works
with
it
as
well,
so
I
give
it
a
circle,
CI
version,
so
that
might
be
an
option,
but
my
preference,
of
course
would
be
non-voting,
and
then
we
switch
the
voting
from
one
to
the
other.
So
so
that's
something
we
need
to
look
into
is:
can
we
get?
Can
we
get
voting.
A
Okay,
so
another
so
another
thing
as
well
is
another
approach
we
can
look
at
is
see
if
there's
any
web
hooks
that
we
can
use
to
ship
some
of
this
stuff
off
to
our
to
pack
it
on
that
like
we
could,
keep
it
small.
So
if
it
turns
out
the
credentials,
are
an
issue
setting
up
a
web
hook
where
we
have
something
that
listens
and
an
axe
is
on
our
behalf.
A
...to set up such systems may also work. So we have an alternative if we don't have a way of getting those credentials in place. My biggest concern is primarily someone typing an echo of credentials into the scripts and then getting an output of usernames, passwords, tokens, etc. in the logs. So that's, yeah.
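The leak concern can be mitigated with a simple output filter, sketched below. Real CI systems (Travis, CircleCI) have their own built-in secret masking, so this is only an illustration of the idea; the token value is invented.

```shell
# Minimal sketch: pipe build output through a filter that redacts a
# known secret before it reaches the public log.
scrub_secret() {
  # $1 is the secret string to redact from stdin.
  sed "s/$1/[REDACTED]/g"
}

# Example with an invented token value:
echo "using token hunter2 to call the API" | scrub_secret hunter2
```

Even with masking in place, the safer posture is the one discussed here: keep the credentials out of the CI environment entirely and have a trusted listener act on the project's behalf.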
C
Exactly. I mean, I am made intensely nervous by the fact that we don't have good CI around the SR-IOV stuff. I know that the folks who are working on it are being very careful situating the changes they make, but I feel so much better knowing that good testing is running, and I know everyone wants to get there. Yeah.
F
One of the major issues I discovered while playing with SR-IOV is that some Packet servers have SR-IOV disabled in the BIOS. To overcome that, in my specific case, I had to manually get into the BIOS and change the setting, and after that it was working. I know that Yan is working on a more automated way to deal with that.
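This condition can at least be detected from software before a test run starts: a device with SR-IOV available exposes `sriov_totalvfs` in sysfs, and when the BIOS has it disabled the file is absent or reads zero. The block below simulates the sysfs file, since the real path depends on the PCI address of the NIC.

```shell
# Sketch: decide whether SR-IOV is usable from the sysfs counter.
# On a real host the file lives at:
#   /sys/bus/pci/devices/<pci-addr>/sriov_totalvfs
sriov_status() {
  vfs_file="$1"
  if [ ! -r "$vfs_file" ]; then
    echo "SR-IOV unavailable (check BIOS setting)"
  elif [ "$(cat "$vfs_file")" -gt 0 ]; then
    echo "SR-IOV enabled: $(cat "$vfs_file") VFs supported"
  else
    echo "SR-IOV disabled"
  fi
}

# Simulated sysfs entry for illustration:
echo 8 > /tmp/sriov_totalvfs
sriov_status /tmp/sriov_totalvfs
```

A CI job could run a check like this as a preflight step and fail fast with a clear message, instead of debugging mysterious VF errors later.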
D
If we split the testing of stuff like SR-IOV from doing full end-user integration tests (where you may want multiple systems, and you're showing how pods, containers, and everything come up), and move that out as a separate thing with just SR-IOV support on one node, Packet would probably be okay with dedicating one. You could keep something up, and it would also be cheaper to have it up all the time. It's one small set of systems, or a single system, for testing those parts. Yeah.
C
Well, I actually strongly encourage folks to go take a look at the Cross-cloud CI stuff, because I used it, and it was literally trivial to spin up a Cross-cloud CI cluster on Packet. There are really good directions there. The only thing I would strongly, strongly, strongly counsel is: when you give the name of the cluster...
C
What I'd actually like to do, once we get things working on Packet (because we do have some hardware needs), is to start getting the CI in general working across all the major cloud providers. People will ask all the time: does this work on the public cloud? Right now the answer is "that is our intention," and I would love the answer to be "well, it passed CI earlier today on all the public clouds." That's just a much stronger answer.
A
Okay,
so
let's
take
a
look
at
the
at
the
cross
cloud
them
so
I
think
if
we
focus
on
those
two
things
to
start
off
with
and
then
we'll
learn
more
about
the
the
problem,
and
then
we
can
circle
around
and
work
out
a
more
detailed
set
of
next
next
steps
in
regards
to
how
do
we
inject
that
work
service
mesh
into
into
the
cluster
and
make
sure
that
the
daemon
sets
are
all
set
up
properly
and
and
so
on?
So
I
think
so.
A
That might be another way as well: we can add the stuff with CircleCI and have some flag that disables CircleCI and just automatically passes, except for the patch that we want, and then once we're done with the patch and it's where we want it to be, we remove the portion that causes it to automatically pass. So that might be another option as well. Yeah.
C
So, I mean, there are lots of ways to skin the cat, yeah. It's just that there's a little bit of slogging through getting past some of the Mellanox NIC issues, so that we could just literally stand up containers. Once we can stand up containers for VPP and T-Rex, and optionally get NICs or SR-IOV NICs into those things, I think we're in pretty good shape at that point.
A
Yeah, you guys have all been doing a fantastic job with documenting that, so definitely keep it up, because six months from now, a year from now, if we need to go through some of this again, we want to have as much information as possible stored in that state. Yeah.
B
I kind of came in... I went through audio hell in the beginning there, and I won't give you all the details. My laptop wasn't seeing the audio devices, and I went to call in, and so I had to find (I'm in Ireland still) the access number. I found the access number for Zoom, but the caller ID wouldn't get me in, and the conference ID is apparently wrong in my meeting invite, so that's something we have to check to get us into this Zoom channel. Anyway.
B
I was at DPDK yesterday and the day before, the DPDK Userspace here in Dublin, and I was talking to some people who are working on some configuration APIs for configuring cores for containers, some Intel people, I think, largely. There's a lot of growing interest in containers in that space. I started talking a little bit about NSM; people weren't aware of it, and there's an interest in talking about how their APIs could be incorporated in our extended endpoint API for configuring.
C
There are also some usability things from DPDK that would be ultra, ultra helpful to get. I know there were some discussions about hot plug for DPDK, but it is an unbelievable pain: there are apparently problems such that if I start a system up and just two hours later I want to insert an SR-IOV NIC or one of the DPDK NICs, I've got to go bounce the whole system to make it work. That turns out to be a real bummer in a more dynamic environment.
B
Yes, there was a paper presented on the hot-plug API too, so there are several of these things coming together, and what I'm thinking is that I've got to go back to my notes and figure out which people I talked to, and see if maybe we could have some kind of joint invite or joint event, or maybe have them come specifically to do a presentation about what they're doing in one of our meetings.
C
I've seen sort of two proposals there. One is the NUMA group, the NUMA manager proposal, and the other that I've seen has been the CPU group proposal. The CPU group proposal is kind of rough, because it asks Kubernetes to change how it thinks about everything. The NUMA manager is good as far as it goes, but I'm a little bit concerned about more complicated use cases. Let me give you an example of what I mean by more complicated use cases.
C
Imagine that I have a server, a node, and I've got two NICs, a 10-gig NIC and a 40-gig NIC, and they happen to have their PCI lanes coming into two different sockets, so the 10-gig goes to socket 0 and the 40-gig goes to socket 1, and I want to deploy a container that's going to grab both of those NICs, whether through network service mesh or whatever mechanism.
C
I would like to be able to pin some number of cores on socket 0 to serve the 10-gig NIC and some number of cores on socket 1 to serve the 40-gig NIC, but it's not clear how to do that under the NUMA manager proposal, because the NUMA manager proposal would simply say, "Okay, for the 10-gig NIC, this is my suggestion as to the cores that you give this thing," and you get conflicting advice for the 40-gig NIC, and there's no really clear way how that gets resolved.
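The placement problem in this example is visible directly from sysfs: each PCI device reports which NUMA node its lanes land on, which is what any per-NIC core-pinning scheme would have to consult. The PCI addresses below are invented, and the lookup is simulated rather than read from a live host.

```shell
# Sketch: map each NIC's PCI address to its NUMA node, as a core-pinning
# helper would. On a real host the lookup is simply:
#   cat /sys/bus/pci/devices/<pci-addr>/numa_node
nic_numa_node() {
  case "$1" in
    0000:03:00.0) echo 0 ;;   # hypothetical 10-gig NIC, lanes into socket 0
    0000:81:00.0) echo 1 ;;   # hypothetical 40-gig NIC, lanes into socket 1
    *) echo -1 ;;             # unknown device / no NUMA affinity reported
  esac
}

echo "10G NIC -> NUMA node $(nic_numa_node 0000:03:00.0)"
echo "40G NIC -> NUMA node $(nic_numa_node 0000:81:00.0)"
```

The hard part is not this lookup but what the speaker describes next: turning two independent per-NIC answers into one non-conflicting CPU set for the pod.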
C
So I've also been sort of turning over in my head that something similar to the pattern we have for the network service manager might help there, because this comes down to a CPU-sets problem, if I understand correctly, where you need to create a CPU set for the pod, for the cores that it is supposed to pin to. And quite honestly, there are a lot of these situations that you kind of have to deal with, I think, in ways that are similar to the NSM pattern.
B
Exactly, and I think there are some related issues too with cores and being able to step up the frequency and doing power management on the cores; that can have interesting effects they're working on too. So in any case, there's an opportunity for us to have some collaboration on this stuff as well, because there are some fairly significant frustrations on the part of these individuals with regard to Kubernetes, because they have been focused to this point on somehow trying to plug this stuff into standard Kubernetes networking.
C
Maybe that's the strategy that was taken with OpenStack, which was to just take all the nitty-gritty details and make them part of the global API, and there's absolutely no way that's happening for Kubernetes. This is what I think the CPU group guys have run into, which is: no, I'm sorry, we're not going to change entirely.
B
These are ever more reasons why what we're doing is really a necessity to make it reasonable for these types of workloads to work in a Kubernetes environment, and so it's all the more reason why we have to have these conversations, I think, particularly as we talk about how we're going to provision these core-dependent types of things like the ones you're talking about.
A
F
B
A
So
I'll
I
need
to
close
up
this
particular
meeting,
but
you
know
we
can.
We
can
discuss
afterwards
chats
chat
rooms,
probably
the
best,
because
I
know
that
a
couple
of
people
have
to
drop
off.
Who
are
interested
with
that?
Thank
you,
everyone
for
attending,
and
we
will
see
you.
We
will
see
you
all
next
week.