From YouTube: Istio Community 7 26 18
A: Well, thanks for joining today. We've got a couple of questions on the working doc agenda, which is great, and please do add stuff there if you've got things you want to talk about. Big stuff that's coming up: one, you've probably seen that 1.0 is fast approaching. I think there's a little countdown timer on the website now, which is pretty cool, so check that out and start playing around; there's still time to highlight anything, any weirdness you're finding.

A: We definitely want to get those noted and get everything ready, and of course 1.0 is not the end. So hopefully we'll get a lot more folks using Istio, kicking the tires, and it looks like we're seeing more PRs, more people to review, and all those good fun things, so definitely jump in and get involved. It's gonna be exciting.
A: Real quick, I just wanted to do a recap; I know some of you were there, some of you weren't. If you weren't able to attend, we did have the first Istio day, which was super exciting, because this was not a day that we asked for: this was literally the OSCON programming committee recognizing that Istio was a topic of interest and putting the day together. So that was really exciting. I know a lot of you submitted to the CFP, so we had some great talks, which are recorded, I believe, so I'll do a blog post once all those videos are ready and link out to all of them, so everything is together at once. But it was really awesome to have a lot of the people that are using Istio really getting involved and building things around the ecosystem.
A: They were talking about some of the challenges that folks have run into, and some of the exciting things that people are working on. So that was really great, to see everyone together and talking through those things. Of course, this week is also Google Next, and a lot of the Google Istio team is there. It looks like we've got some Red Hat folks and a couple of IBM folks on the call as well, so hopefully some of the core contributors can help out with some of the questions.
A: We do have some exciting stuff coming up in the community as well. I actually have multiple links open for it that I'll add to the community calendar today. IBM has got a demo that they're doing on the 31st, which is really exciting; it's going to cover how to expand Istio by adding VMs. I'll put the link in our community calendar so you can see that. And then there's also Container Camp UK coming up, and Lin Sun will be speaking there about Istio as well.
A: So that's exciting too; I'll put links to that, and that's in September. I also want to call out that KubeCon is fast approaching. There is KubeCon China, which is going to happen in November, and then KubeCon North America is happening in Seattle in December. The call for presentations for those is rapidly approaching. China has actually already passed, at least for the big sessions, but for Seattle I believe it's August.
A: Well, something in August; pretend it's August first, because we all know you'll wait till the last minute. But yes, get those in. Seattle should be a really big one, which is going to be exciting, so there should be a lot of great folks there and a lot of interesting opportunities to do some cool stuff around Istio. So we will look at other opportunities we can do there as well, in terms of getting together and things like that.
A: One of the fun things at OSCON: there was an IBM meetup during OSCON where we kind of did an overview of Istio, and that was a really great way to meet folks in the community. I think we were there until around 10:30 just chatting, which was kind of cool; we got kicked out of the Convention Center.
A: Of course, very exciting; good to see things like that, so we should plan to have some fun stuff in Seattle. And as always, if you are interested in doing a talk but you're looking for a co-presenter, or just want some help kicking around the idea for your CFP, you're welcome to email me and I can connect folks together. I would also suggest joining the Rocket.Chat; we've got a channel that's specifically for events, and that's also a good spot.
D: Am I, can people hear me? Yeah. And I'm super jealous of that chair, thanks. So my name is Spencer, kind of a new Istio community member. I can't see everybody else; I'd like to, but hopefully we can get that fixed. Kind of what I do over at IBM: so I work at IBM, and they have me talking to developers and stuff, and one of the things we're doing for that is a lot of Twitch streaming.
D: So I thought it'd be a cool idea to do something like this: sometimes when they release a video game, they do an all-day Twitch party celebrating that video game, and I thought maybe that'd be a fun thing to do for the Istio 1.0 launch. So let me share something; it's just some slides that I threw together in ten seconds, and this shouldn't take more than a few minutes, I promise.
D: So basically I think it would just be kind of cool to do a Twitch stream. For anybody who doesn't know Twitch, it's basically just an online streaming platform; it's really popular with gamers, but there are a lot of developers and programmers there too. And what we've been doing at IBM, and what other people have been doing, is using this software called OBS, which is GPL-licensed, to broadcast content and conversations. So for August...
D: ...the 17th is kind of the day we're looking at, possibly up to six hours if we can get enough content, streamed live from San Francisco. We're gonna have a couch for conversations and interviews, and a desk for people who want to show slides or do a demo or anything like that. And I'd like to get as many of the Istio people, and a nice diversity of companies and backgrounds, in there as well.
D: We'll buy some balloons, have some cake, and just generally have a nice little day. I think that'd be cool for a lot of people who don't really know what Istio is, or have lost the thread on Istio, because sometimes it gets lost in all the Kubernetes feelings. So I think that'd be really fun; that's what we're trying to do. Happy to answer any questions, and if you're interested in participating, this is my contact info.
D: It's easy enough for me to just say we can do it at Watson West, which is right there at 505 Howard. It's somewhat open, but that's the place where I have the most influence in terms of "hey, I need to put a couch in, take over for a day, and set up a bit of a recording studio." The other thing is that me and some of the people at IBM have a fair amount of experience with this.
D: So we can, yes, we can bring people in through Google Hangouts or something; we have a whole Zoom and BlueJeans setup for doing that. If we do too much of it on the day, I think it'll detract from the overall production value of the thing; it'll just be like a series of webinars, and I'm trying to keep this much more conversational. But if you're interested, please reach out to me and we can slice those into the schedule, especially if people are in different time zones, say somebody's in Europe.
A: This is really awesome; I love this idea, and I think it'll be a lot of fun. And if you're not in San Francisco, August is a beautiful time to visit San Francisco, especially with, what is it, everywhere else in the country in a massive heat wave, and even Europe as well, and San Francisco is like the one spot that is normal. So come hang out, visit us, play with Istio, and relax with the bay breeze.
H: Sure, hi everyone. So I was running Istio on GKE. I did the install using the auth option for mutual TLS, and everything was running fine: when I check the Istio pods in the istio-system namespace, everything is green and running, so no errors there. Then my goal was to start an Envoy edge proxy, so I created a service and a deployment, just using the setup to essentially receive a layer-7 request through Envoy coming into the cluster.
H: So that's set up to run with an Istio sidecar. Then I started it up and ran it, and everything is running fine in terms of the Envoy edge proxy: it's receiving requests, I can reach the endpoint over the TLS that I'd set up using Envoy, and it also starts up the istio-proxy sidecar. But for some reason there's a CrashLoopBackOff which keeps occurring in the istio-proxy sidecar.
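[A hedged sketch of how one might inspect a sidecar failing like this, using standard kubectl commands; the pod name is a placeholder, not one given in the meeting:]

```shell
# Show container states and recent events for the failing pod
kubectl -n default describe pod <edge-proxy-pod>

# Logs from the crashing istio-proxy container, including the previous attempt
kubectl -n default logs <edge-proxy-pod> -c istio-proxy --previous
```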
C: I had to change the number of, I'd have to look it up exactly, but the number of watches that Linux supports. I was having a lot of trouble running integration tests with a lot of resources, resulting in a crash loop, and it wasn't obvious, but when I changed that, with some help from Jason, I haven't seen it since. So that's one thing I could sort of suggest. I can get some specific details and try to post it in the chat here before the meeting's over. Oh yeah.
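[For reference, the Linux limit being described is most likely the inotify watch limit; a minimal sketch of checking and raising it. The value 524288 is just a common choice, not one stated in the meeting:]

```shell
# Check the current per-user inotify watch limit
sysctl fs.inotify.max_user_watches

# Raise it on the running system (requires root); persist it via /etc/sysctl.conf
sudo sysctl -w fs.inotify.max_user_watches=524288
```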
H: Right, it's just that the process is owned by root, and it's running again as a regular user, so it wouldn't be able to open that file, and then it dies. "Delete the file and it will start," but I don't know if I have access to delete that file, because the pods are just being managed by the deployment, so I wouldn't be doing any kind of manual file deletion.
A: But yeah, the changelog, it looks like... yeah, we will see. As they roll out 1.0 there's a whole lot that's going on around docs, so I know that there are a lot of changes that are happening. I'm not sure if there's anyone on the call from the docs working group that can kind of give a general update, but...
J: I just spent quite a bit of time researching the 1.0 snapshot release, and I think there are a few things lacking in the documentation. For example, what I was hitting was circuit breaking, or the rate limit policies that you set up: the reference is to alpha-2 resources that are not at all in the documentation.
J: Right now I think there's the deprecated API and the alpha-3 artifacts, so I think that would maybe be useful there. And even I run into it, which is going to be interesting: I've been using Istio since 0.3, and so I've been used to the old API, which I thought was much more intuitive for somebody coming new to Istio and trying to understand how everything works than the 1.0 API.
J: We had this notion of precedence, so you could set precedence of one rule over another by giving it a different priority number, and I think that's actually gone now. With the new API it's more like: what are you matching on, and the first rule that matches, whether the expectation is a header or a cookie or something, will be triggered. And I think we've lost functionality there, in terms of, you know, I had a demo...
J: ...using precedence, so I was able to, in essence, check out my retry logic by injecting faults for the service, and I think that is gone now with the loss of precedence, because only one of those rules would be fired. I mean, I could be missing something there, but I do see some loss, some big loss of functionality here, I think.
J: Yes, I will find it, because maybe I'm missing something; maybe it is still there. But to me, with the new API it seems to just be the first rule that matches will fire, and therefore at this point I think you've lost the ability to check out your retries and make sure they work by just injecting faults on the same service, which is something you used to be able to do with the precedence setting.
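[For context, in the v1alpha3 API the routing rules inside a VirtualService are evaluated in the order they appear, with the first match winning, in place of the old explicit precedence field. A minimal sketch; the host and subset names are illustrative, not from the meeting:]

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  # Rules are evaluated top to bottom; the first match wins.
  - match:
    - headers:
        end-user:
          exact: tester
    route:
    - destination:
        host: reviews
        subset: v2
  # Fallback rule for all other traffic.
  - route:
    - destination:
        host: reviews
        subset: v1
```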
J: Because I get that, but it seems like... I was trying to redo the ten demos that I had before with the old API, for which I didn't have to change any of my code, and for me to show the retries I actually had to force an error in my code, so that every other call would fail and I could actually see them recover via the retry. Which was kind of bad, because Istio's whole premise is not having to change your code.
J: So that was one thing. And one thing that really tripped me up big time going from the old API to the new API: I used to have fault policies set up with an abort returning a 400 to show failure, like, for service A, abort with a 400, say, fifty percent of the time. That is no longer considered a fault in the new API; it seems like it has to be a 500 or above, instead of a 400 or a 500, with the new API, and that tripped me up big time, and it's not in the documentation.
J: It just says "if the service fails"; it doesn't actually specify which error codes it will be looking at, and I spent quite a bit of time trying to figure out whether my setup was wrong or I was actually missing something. This is something that was fine in the old API that we've lost in the new API. Does that make sense? Yeah.
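[A hedged sketch of the kind of v1alpha3 fault-injection rule being discussed: an abort that returns a fixed HTTP status for a percentage of requests. The host name and values are illustrative, not from the meeting:]

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-fault
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        # Status returned to the caller; whether a 4xx here counts as a
        # retryable failure is the ambiguity raised above.
        httpStatus: 500
        percent: 50
    route:
    - destination:
        host: ratings
```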
J: I mean, there's a specification, the HTTP specification; anything in the 400 and 500 ranges, I think, should be considered a problem with your service. And also, this might be of interest to you guys: I've been playing with the snapshots, snapshot 1, 2, and 3, or 0, 1, 2, 3, I forgot, and I've...
J: ...snapshot, yeah. Last night I was like: okay, do I do a demo of the old stuff with the old API, which I have everything dialed in for, or do I venture out and actually try out the new stuff and show the users the new stuff? And I figured, why not, I should go with the new API. So I ran into a lot of issues trying to make the old demo work with, yes...
J: ...because, checking out the new stuff, and I agree with you, I think in many ways it is more... like, I think having everything in one place for a service is a better approach. But also, I think we've lost some functionality in that transition as well, yeah.
K: If I understand the situation correctly, the underlying problem is that every sidecar has to know about every other sidecar, and every other service, within the cluster. Is there a way to limit that, to configure the sidecars to only know about what is relevant to that sidecar, right, or who it's running on behalf of?
K: Again, I might be wrong on this, but it seems also that the full routing set is pushed eagerly: when a new sidecar comes up, Pilot sends everything in one go. Are there any comments or thoughts about lazily building that data? Because obviously this currently doesn't feel right.
I: I was going to say, I recall some discussion; that's what's being done today, but there are discussions, going forward after 1.0, about pushing only the configuration you need. And I remember one of the competitor products, I think it was Consul: they recently introduced a mesh integration, I think it's called Connect, for their product, and I think one of the key features they were advertising was that they only push the configuration they need.
I: Yeah, and the other thing I do know is that, as part of the performance work, they are trying to produce large-topology sizing documentation officially from Istio. Obviously we don't have that documentation yet, but that would at least help provide guidance: how many services, how big of a Kubernetes cluster, even for different cloud providers, would be a reasonable number to have. I think those are the things you are looking for.
A: Kevin, go ahead.

L: Yeah, I was going to say, one of the things that we need to do post-1.0, on the Red Hat side, is look at how this works for multi-tenancy. So we have a requirement to integrate where there are various SDN layers, to restrict the information that's going to the sidecar, to the Envoy, just so that the invoking service only has visibility of the services that it can see as far as the network is concerned.
K: By the way, the way we are reproducing it is by having a Consul instance that is aware of our current services that are running outside of Kubernetes. We essentially point Istio to sync up with the Consul instance, and essentially that is what creates those external services, and that reproduces the same situation as described in the thread.
K: ...but then, within the Kubernetes cluster, we have a couple of applications, say a handful of applications, and those sidecars' state goes up with that. So it seems, yeah, it's proportionate; it could be the Kubernetes services that Istio is pushing out to the sidecar. Initially I was thinking it may be doing health checks, keeping additional state for every sidecar, things like that, but it shouldn't be doing health checks to external services.
K: So yeah, it seems like, I call it the routing table, but whatever it's called internally, it seems to be growing proportionally, and I'm not sure why, because it's not that much information; a thousand services shouldn't amount to hundreds of megabytes of state.
J: I'm just wondering about the information that's sent to the Envoy. I don't know the answer to that, but if you set up security rules saying, okay, services in one namespace can talk to another namespace or not, whether those rules will be pushed to each Envoy in your cluster. Maybe there is something there; I'm guessing, I don't know for sure.
I: I just learned this recently; they actually changed it. We added this new feature where there are two levels of caching of Mixer checks: one is on the server side of Mixer, and the other is on the client side of Mixer, which is a local cache within your Envoy sidecar, running as a processor inside your sidecar. That level-one caching would also do authorization checks, so the authorization doesn't necessarily need to hit the Mixer server side.
K: But we are essentially running a pretty vanilla setup, just to experiment with this, with just those services registered; we're not adding or enabling any other features inside of the cluster. At this point we're just experimenting, and we kept it that way just to keep the variables low. And to be honest, and this is early stuff because I haven't really gotten into it yet, but on the 1.0 pre-release it seems to be a bit worse than 0.8.
J: Not sure, I don't know if this is related, but I heard from the Kubernetes community about people running twenty-thousand-plus services, those being Kubernetes services, and running into issues with iptables, because obviously iptables is, what, 30-plus-year-old technology, right, and it was never meant to handle all that volume, and a lot of folks have been switching over to IPVS.
J: Which is better; it's a kernel mechanism that deals with routing at the kernel level versus getting into user space, and I think a lot of people have switched over there from the iptables implementation. I don't know; two thousand services doesn't sound that big for various clusters. I don't know if that's maybe the issue there, but it might be something to think about, whether you're running into that limitation.
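[For reference, the IPVS switch being described is a kube-proxy setting; a hedged sketch of enabling it via the kube-proxy configuration. Whether it helps the state-size issue discussed here is speculation:]

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "ipvs" routes Service traffic in-kernel via the IP Virtual Server,
# instead of the default "iptables" rule chains.
mode: "ipvs"
ipvs:
  scheduler: "rr"  # round-robin scheduling
```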
J: Correct, yeah, and again, I can't imagine your workload or business will run twenty thousand Kubernetes services, but I do know that there were issues with iptables at that volume. And obviously all the kube-proxies are going to be really hammered on the box when you have that many services in place. So I don't think that's the issue here, but it might be something to think about.
I: That's interesting. Unfortunately the default one is iptables today. For the sidecar, hopefully iptables is not causing a lot of problems, because for the sidecar it's the iptables within the pod, and not necessarily the iptables on the worker VM; as someone said, they only have iptables within the pod network. But certainly we have it down on each VM too, with hundreds or thousands of services.
L: There was a talk at KubeCon in Austin back in December about BPF and how that integrates, especially the difference in performance: if you don't use BPF, you run down the network stack multiple times, whereas if you use BPF you just drop in and everything happens within the kernel along the way.
H: So, because I was listening to you guys talk earlier, and it seems that there are some changes in terms of 1.0: I'm looking at the preliminary documentation for 1.0, the site that kind of goes over what the new changes in terms of deployment would be. Is that the best way to look at the new way to deploy everything?

I: Yes.
H: Okay, that's great. So actually, for my issue, what I'll do is re-run everything on the 1.0 pre-release, see if the problems come up there, kind of take it from there, and file things relative to that once 1.0 comes out. Okay.
I: I just, I just ran the installer, and then I just curl the...
J: Because last night, running it again... maybe my demo is a little bit different from the regular demos, because I've been running function-as-a-service instead of just straight-up services. And, I forget which snapshot, but I was running into issues where I basically deployed a canary of one of the functions, so I had, in essence, v1 and v2 pods running on my Kubernetes cluster.
J: However, when I specified the destination rule for the service, basically just doing basic traffic splitting, so many percent on version one and so many on version two, what I noticed is that it seems like Pilot, and maybe that's part of service discovery, lost the notion of that version v2 that was running. So even though I had my subsets, my subsets set up for v1 and v2, and the rules applied with no errors...
J: ...my traffic would not go to version v2 of that service. What I ended up doing is actually killing the v1 and v2 pods and restarting them, and then things started to work again. But I could find those services using the labels, the same logic that I had defined in the rule; somehow, I think, Pilot was getting confused about what was actually running on my cluster, even though those pods were running.
J: I was not able to direct traffic to that second version of the pod, which was weird. And, I forgot, this is all on snapshot one, two, or three; I think that's what I had to do to get past that. But again, this was all on minikube, and I don't know if it's minikube itself or Istio that somehow lost the connection or something.
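[A hedged sketch of the kind of subset-based traffic split being described, in the v1alpha3 API; the host, labels, and weights are illustrative, not from the meeting:]

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myfunc
spec:
  host: myfunc
  # Subsets select pods by their version labels.
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myfunc
spec:
  hosts:
  - myfunc
  http:
  - route:
    # Weighted split across the two subsets.
    - destination:
        host: myfunc
        subset: v1
      weight: 80
    - destination:
        host: myfunc
        subset: v2
      weight: 20
```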
I: Honestly, I don't know who validates the documentation inside of the core teams, but just to give you some heads up: we used to use minikube a lot, but we found out it doesn't have enough memory, or we just couldn't do a lot of basic testing. So a lot of folks switched to using either the cloud platform-specific offerings or just a small Kubernetes cluster.
A: Awesome. Well, if I can keep my sweet puppy from barking and giving her opinions here: we've got just a couple of minutes left, so I just want to wrap up and thank everybody for helping each other out, sharing questions and feedback. This is really helpful. I expect we're going to get a lot more as folks are digging into 1.0 and getting things running in production, so please do keep it coming, in just the last few minutes that we have.
A: Well, I know we're at time for today. If we didn't get to you or your issue, please do join us in the next couple of weeks, when we'll have our next meeting, and of course reach out if you've got any questions or anything else that you need help with; you're welcome to reach out to me directly, or of course on the mailing list as well. So thank you, everyone, for being here. Thank you especially to John and Fernand and Lin for being so helpful; appreciate it.
J: Thank you for hosting this; I think it's a great idea. I think Istio is the future, so I'm super excited about the technology. If you guys are interested, I've got the demo that I ran last night; I could share it, as I mentioned earlier, if you like. It's running function-as-a-service, so using FaaS, with Kafka, incorporating everything into a Kubernetes/Istio cluster, using Nuclio, which is, you know, kind of a new, fast framework that came out.