From YouTube: Istio Community Meeting 2019-10-31
A: Very nice, alright, so let's jump in. First thing I wanted to cover real quick is the 1.4 schedule, to let everyone know where we are. We're at the code freeze, and obviously the branch has occurred as of 10/16. We've done two bug bashes, which took place on October 22nd and October 29th; there are actually links here that will take you to some of the results for those, and you can see the numbers there.
There you go, that's fine, okay. So now you can see it — in the link there you can see our bug bash, and you can see the results as far as the number of issues and who they're assigned to. Obviously, if you're one of the contributors to one of those working groups and repositories, it'd be helpful if you could take a look at this and help knock down some of the bugs that are assigned to you. Several bugs have been opened, and the teams are at this point working through and knocking those down. Sorry to say, networking unfortunately has the bulk of them — that seems to be the case, and it's probably no surprise — but they're working through those. I also found these GitHub reports this week; if you haven't seen those, I found them quite helpful.
If you open that up, it actually shows you a time history of the bugs that have been opened over time, broken down by area, and it gives you some really nice views into where they are, how many we have, and basically the trend of where they're going. So if you wanted a quick view of the bugs open against the entire project, and which areas they fall into over time, this will give you that view — I found it quite useful. It's also really nice to see the overall trend. One of the areas we're trying to focus on as a community is knocking down the backlog, so anything you can do as a maintainer or contributor to help your team and your working groups scrub the backlog and knock down some of these defects helps — and obviously you can see from this graph where we stand.
So as far as happenings, I didn't see anything outside of everyone being heads-down getting the release out. I did want to put out a note that we will be doing a pop-up Istio meetup next week. If you're in Ottawa, Canada, come join us from six to eight on November 5th — which is Tuesday, I believe. It's a pop-up meetup that we formed quite quickly. I actually have a travel engagement in Ottawa on the 6th, so we took this time to engage with the CNCF events team to set up the event and get a location very quickly, and we're going to host it that evening. If you know anyone in the Ottawa area who you feel would benefit from this, let's get the word out and drive that up. In the first few hours after it was posted we got 25 people registered, so hopefully we'll blow up the list.
Okay — it would be quite astonishing if it was already full. I did want to bring up, just so people are aware and can comment on it, something that we've been discussing in the steering committee, and also in the TOC a little bit: Istio support, and how we go about managing support for the community. Today we're managing at n minus one.
A
So
it's
it's
one,
release
back!
That
would
be
supported.
However.
The
kubernetes
community
supports
n
minus
two,
so
we're
a
little
bit
out
of
sync
with
what
the
kubernetes
community
supports
and
there's
there's
some
discussion
and
or
concern
that
n
minus
one
is
a
little
too
short.
It
doesn't
leave
a
lot
of
a
window
for
enterprise
or
other
customers
to
adopt
and
stay
on
a
current
version
for
a
period
of
time
before
they're
somewhat
forced
to
move
up.
So
it
really
comes
down
to
what
is
the
model
that
we
should
support
as
a
community?
A
A
Going once, going twice — okay. I will tell you that we're strongly leaning toward supporting it, but we're still figuring out how to go about doing that without putting too much extra load on the current teams and the working groups to support that level. Nothing is approved yet, but it is under serious consideration. I also wanted to point out something else I learned — I didn't know about this, and maybe other people already do.
But I did not: I learned about the Istio engineering page, and I found it quite interesting and useful. I know it's a work in progress — it's not fully developed yet — but I see it providing some promise for individuals. If you want to know who is working on the project, the engineering dashboard will give you a view of who's active in the project and which projects or repos they're active in.
So if you're looking for someone in a particular repository, you can actually come here and see who's active in the community and which repos they're active in, so you know who to ask a question. And you can see there are additional sections here where more content is going to get dropped in. For example, the issues section shows you the number of issues that are open per month in the different repositories, which gives you a nice view of how many issues there are.
Where are they being opened? Which areas are getting the most attention, or the most issues, at any given point in time and over time? This is going to evolve; it's going to incorporate more information about the project. Here's a view of the performance page — obviously it's under construction now, I hear, but over time it is going to show the overall performance tests and the test results per release in this environment. And I'm realizing the colors on the...
...dark mode. And again, you can see there are other aspects in here — these are empty at the moment, but more of this is to come. It'd be interesting if anyone has thoughts on other sections that would be helpful to show here and that would help the community in general; we're definitely open to feedback on what else should be shown in this engineering dashboard to help the community.
G: Thanks. Hi everyone — let's see, let me see if I can share my screen here. Okay, can everyone see it okay? Yes? Awesome, all right, great. So I thought I would give a rundown of how mesh expansion works in Istio, and this is intended to be sort of an intro. Full disclosure: this is also what my KubeCon talk is about, and I'm co-presenting on that. So if this is interesting to you and you'll be at KubeCon — definitely, yeah, I'll be there too. So, great, let's see. I guess to start:
G
If
you
don't
know,
this
sto
does
have
virtual
machines
support
in
our
Doc's.
The
term
is
called
mesh
expansion.
We
are
trying
to
move
from
that
term
in
part
due
to
like
SEO
reasons,
and
also
because
it's
not
necessarily
very
descriptive
of
what
of
what
the
process
actually
is.
G
But
the
idea
behind
it
is,
you
can
treat
a
VM
roughly
like
you,
would
a
pod
and
actually
place
the
ISTE
of
proxy
and
the
node
agent
for
certs
inside
of
a
VM
run
the
service
in
there
and
kind
of
treat
it
the
same
way
as
you
would
an
injected
pod.
And
why
would
you
want
to
do
this
really
for
consistency
reasons?
G
So
imagine
you
have
an
application
that
has
a
lot
of
pods
and
you
are
running
some
sort
of
staple
or
other
workload
inside
the
VM
and
you
kind
of
want
to
bring
that
in
to
the
party
as
it
were
and
use
sto
traffic
rules
and
do
splitting
and
then
TLS
and
I'll
see
and
all
the
good
stuff
SEO
gets
you
for
for
VMs.
This
would
be
the
reason
you'd
add
it
to
the
mesh.
G
G
So if it's something like a managed database or an external API, definitely use a ServiceEntry for that. But if you do own the service, and it's inside a VM that you have access to — Envoy doesn't ultimately take up that many resources, so definitely consider adding it to your mesh. So, on use cases: I kind of already alluded to this; it's really any situation where you'd want to have a service in a VM versus in a container.
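For the managed-database or external-API case just mentioned, the route is a ServiceEntry rather than full mesh expansion. As a minimal sketch — the hostname, port, and resource name here are placeholders, not anything from this talk:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api        # hypothetical name
spec:
  hosts:
  - api.example.com         # placeholder external hostname
  location: MESH_EXTERNAL   # the mesh does not own this service
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
```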
G
So
a
lot
of
the
the
use
cases
I
hear
about
are
related
to
databases
for
things
that
you
know
for
security
reasons
or
sort
of
administrative
reasons.
You
don't
want
to
put
it
in
a
container
legacy:
apps
hybrid
environments,
the
thing
to
notice
about
or
to
note
about.
This
is,
if,
if
you
do
add
a
VM
to
an
sto
mesh,
you
do
need
pod
IP
connectivity.
G
So
in
the
example
I'm
about
to
show
I'm,
it's
all
in
the
shared
theme
PC,
you
would
need
something
like
a
VPN
to
get
that
IP
collectivity
if
you're
outside
of
the
network
as
well
as
failover.
So
you
could
do
a
thing
where,
like
your
VM,
is
actually
sort
of
a
backup
to
what's
running
in
your
cluster
and
use
a
ciosed
locality
based
load
balancing
to
failover
from
one
region
to
another,
which
is
pretty
cool,
yeah,
so
prerequisites
so
in
the
environment.
I'm
about
to
show
I
have
something
like
this.
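The locality-based failover mentioned here can be sketched roughly as below. This is an assumption-heavy fragment: the region names are placeholders, where the setting lives (Helm values vs. mesh config) has moved between Istio releases, and a DestinationRule with outlier detection is also required before failover will actually trigger:

```yaml
# Mesh-wide locality load-balancing fragment (illustrative values only):
localityLbSetting:
  enabled: true
  failover:
  - from: us-central1   # prefer endpoints in the local region...
    to: us-east1        # ...fail over here when they become unhealthy
```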
G
I've
got
one
project
in
Google
cloud
with
a
gke
cluster,
so
kubernetes
cluster
sto
is
installed
and
then
I
have
a
standalone.
Vm
I
have
full
admin
privileges
on
this,
which
is
needed
to
start
up
envoy
and
yeah.
I
have
IP
connectivity
as
well
and
before
I
show
like
the
demo
of
it
working
sort
of
in
action.
It's
kind
of
maybe
worth
going
through
exactly
how
this
whole
thing
works.
So
on
the
Left
we
have
a
cluster
on
the
right.
We
have
two
PM
I
have
installed
this
studio
on.
The
cluster
should
be
noted.
There are other control plane components than the ones shown here; these are the ones that get involved with the VM piece. We have Pilot for the network configuration, we have Citadel for the certs, and the ingress gateway handles the control plane traffic coming from the VM. So it's a couple of steps — like five or six steps. The first thing we're going to do is send over some info to the VM.
Then we're going to run this command called istioctl register, which generates a selector-less service — basically just an endpoint for the VM's IP. We also need to create a ServiceEntry. This is a little confusing — hopefully this changes some day — but we need a ServiceEntry as well, pointing to the VM. Then we're going to actually update the DNS on the virtual machine, so that any time the proxy on the VM needs to get updated config, that traffic actually goes through the ingress gateway.
It could also go through an ILB — an internal load balancer. The idea behind this is that Pilot and Citadel are not exposed outside the cluster, so we need a way for the VM to get to those components. And then, next to last, we're going to actually start the Istio remote components on the VM.
That's two things: the remote Citadel component for certs, and the proxy. And then the last thing we're going to do is run our services. I'm going to show a service running in raw Docker on the VM — it could also be a raw executable — and then I have some services in the cluster as well. It should be noted that this configuration is a single-network config and data path.
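The five or six steps just described might look roughly like this as commands. Names, ports, and the VM IP are placeholders, and exact flags differ between Istio releases, so treat this as a sketch rather than the exact demo script:

```shell
# 1. Generate cluster-side config and certs, and copy them to the VM
#    (the mesh-expansion docs list exactly which files to send over).

# 2. Register the VM: creates a selector-less Service plus an Endpoints
#    object pointing at the VM's IP (service name, IP, port are placeholders)
istioctl register productcatalog 10.128.0.7 8080

# 3. Create a ServiceEntry for the VM-hosted service (kubectl apply -f ...).

# 4. On the VM: point DNS/hosts at the ingress gateway so the proxy can
#    reach Pilot and Citadel, then start the Istio remote components.

# 5. Finally, start the workload itself (e.g. via Docker) on the VM.
```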
So here we are in sort of my cluster view. We can see that Istio 1.3 is installed and running on our cluster, and all of our services except for the product catalog are deployed onto the cluster. And then, if I SSH into my VM, I am running the product catalog there just with Docker. It's kind of all set up ahead of time — what's actually happening here is that I've scripted some of it.
I scripted some of this because it can get a bit lengthy. Really, the thing to know is the ServiceEntry piece: I've created a ServiceEntry for product catalog that's populated with all this info, including the IP of the VM. So if I get my ServiceEntry, we can see all of this is here, so that routing can go from the other services to product catalog just with that kube DNS name, which is helpful.
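The ServiceEntry being described looks roughly like the following. The host, port, labels, and VM address are stand-ins for whatever the demo script actually generated:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: productcatalog
spec:
  hosts:
  - productcatalog.default.svc.cluster.local  # kube DNS name callers use
  location: MESH_INTERNAL   # part of the mesh, just not on the cluster
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.7     # placeholder: the VM's IP
    labels:
      app: productcatalog
```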
So here's what this actually looks like in action: the product catalog can be served successfully, and in Kiali, in the service graph, we can see that instead of showing a triangle for a Kubernetes-native service and deployment, we actually have this internal ServiceEntry for the product catalog — which tells us it's actually not running in the cluster; it's running elsewhere, in this case a VM — and we can see traffic moving to it.
And the last piece I'm going to show is a situation where I'm going to split traffic between a version of this running in the cluster and the VM version. In order to do that, I'm actually going to deploy the same service into the cluster, and I'm going to keep what's running in the VM.
I'll wait for this to start running, and then what I'm going to do is apply a traffic-splitting rule so that 90% of traffic goes to the VM, but I'm starting to send a small increment of the traffic to the version on the cluster. The use case for this is a migration use case: maybe I have containerized the product catalog and I'm ready to have it run in Kubernetes. It should be said that, because of the whole setup around the selector-less service, we actually can't have these two things share a service.
G
We
have
to
have
a
separate,
a
separate
kubernetes
service
and
a
separate
destination
rule,
but
they
can
share
a
virtual
service,
so
you'll
notice
that
here
yeah
like
I'm,
sending
yeah
90/10.
So
what
I'm
going
to
do
is
apply
that
and
the
way
to
kind
of
know
if
this
is
working,
because
it's
kind
of
hard
to
see
because
it's
like,
because
it's
the
same
service,
we
don't
really
see
any
change
of
behavior.
But
what
we
should
hopefully
see
here
in
a
second
is
to
product.
Catalog
show
up
one.
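A 90/10 VirtualService of the kind being applied here might look like this. The two destination hosts (the VM-backed entry and the separate in-cluster Service) are hypothetical names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalog
spec:
  hosts:
  - productcatalog.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: productcatalog.default.svc.cluster.local        # VM-backed
      weight: 90
    - destination:
        host: productcatalog-local.default.svc.cluster.local  # in-cluster
      weight: 10
```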
This one in purple is the one in the VM, which is still serving traffic, and then there's going to be one called "local" that's serving a small percentage of the traffic, and that one is backed by a Kubernetes workload. You can also do encryption, that kind of thing — really, any rule you can apply to the container, I'm pretty sure you can apply to the VM as well. So, exciting times. Just to kind of close out:
There's definitely, I think, a good amount of work to be done — and it is currently happening on the Istio team — as far as improvements. One big caveat right now with this whole process: if traffic is coming from a VM and going into the cluster, or if it's going from a VM to a VM (in the case that you have multiple VMs configured), that traffic does not get tracked. So it won't show up in the service graph, and you can't get metrics, due to a labeling situation that's happening. Hopefully that problem will go away.
Hopefully some of the hands-on config you have to do will be improved, and yeah, we're also working on improving the docs for clarity and discoverability. So yes, that's the demo. Thank you, happy Halloween, and yeah — come to KubeCon. Thanks.
A: Thanks, Megan. Actually, I had a quick question about this setup: you have to create a ServiceEntry and a kube Service at the same time, and the command to register the VM will create the kube Service, but not the ServiceEntry — correct?
G: That is correct. I've talked with the Istio team about this a little bit, and I'm hoping — the logic is that hopefully the ServiceEntry piece, or one of those two pieces, is going to go away, and istioctl will eventually do all the things. Yeah, it's a bit annoying right now. As you can see, to actually get this whole setup working, it's this pretty long script.
G
You
have
to
run
on
your
machine
and
then
there's
also
like
a
good
amount
of
stuff
that
has
to
happen
on
the
VM
too,
like
the
whole
yeah
DNS
stuff,
so
yeah
I'm,
a
big
fan
of
like
so
like
if
I
can
execute
all
of
this
with
a
script.
There's,
no
necessarily,
if
there's
no
reason
why
this
can't
happen,
vo
CTL
as
well
so
I'm
definitely
advocating
for
that.
G: Yes — I can actually do a screenshot and maybe put it in the slides. But when you start up the proxy on the VM, I think it's a special remote version of the proxy, and there's definitely some iptables stuff happening, because there needs to be a way to intercept that traffic. I don't think it does it for the sudo user or for the istio-proxy user, but I think for the generic user it does update the iptables. Okay.
H: Because I figure you either have to change iptables to capture the traffic, or you just have to configure the application to hit localhost on port 15001 or whatever — mm-hmm, okay. And if that's running, can you try it on that host — can you try to send traffic to the Kubernetes cluster and have that work, without...
I: Sure, thank you. So I've been playing around with Istio 1.4, because I wanted to get mTLS working for headless services, and I did see there was some mention about mTLS and headless in the release notes. What happened was, when I installed 1.4 using the Helm and Tiller combination, it kind of broke — istio-init is not working, the charts are not working. I just wanted to bring it to everyone's attention here. I think I created an issue, but I don't see anyone responding to it yet.
C: I also found that mTLS was not working. So initially I started installing Istio 1.4 using the manual method, which you mentioned — I got the source, I navigated to the source folder, and I did the install — but for some reason mTLS is not working. I almost wasted a week scrambling on it, and then I quickly spun up a GKE cluster with Istio pre-installed, and mTLS was working on it.
I: Then what I did was go back to AKS, and I installed the 1.3.3 version of Istio using Helm and Tiller, and mTLS works there for the same code. So I have a strong suspicion that mTLS is broken in 1.4. I'm not sure if someone validated that in the test pass; if not, I would suggest validating it.
I: Yeah, for me it was — if you scroll down, I'll show you what is happening. I even created a documentation bug, actually, in 1.3. So there is a documentation bug which I created: up until version 1.3.3 of Istio, when we do an istioctl authn tls-check, under client and server we usually see mTLS or HTTP as the values. And I went into the test cases for istioctl and I did see...
The test cases were returning ISTIO_MUTUAL — sorry, were returning mTLS or HTTP in the response. However, in 1.4 I think there was a CRD change in the init which altered this, and there is an enum which actually has definitions for STRICT, ISTIO_MUTUAL, DISABLE, and all of these, and these values are getting returned instead of the usual mTLS and HTTP. The documentation is out of sync — that's the reason I created the documentation bug — and overall I don't find mTLS working properly.
I: Okay. And the next item which I had: I'm trying to get mTLS working for a headless service, and I'm following this documentation — I couldn't find any other appropriate documentation. As per this documentation, it says we need to create a ServiceEntry for the headless service, which has a specific pod IP, and then also create a gateway, and we can access the traffic through that, because Istio doesn't inherently support headless services out of the box.
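The pattern described — a ServiceEntry addressing a specific pod IP of a headless service — could be sketched as below. All names, ports, and the pod IP are placeholders; this mirrors the docs' workaround rather than a verified working sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: headless-pod-entry   # hypothetical name
spec:
  hosts:
  - myservice.default.svc.cluster.local  # the headless service's DNS name
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: tcp
    protocol: TCP
  resolution: STATIC
  endpoints:
  - address: 10.1.2.3        # placeholder: a specific pod IP
```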
I: If you look at the last three weeks of the community meeting, I've been constantly trying to get the headless service working, and I've been asking multiple queries on Discuss, and I haven't had much help. If someone has a working sample which I can take a look at, that would be great. Any help from the community is much appreciated. Yeah.
I: I'm trying to hit it from a simple endpoint, to consume it over HTTP, and what's happening is it is letting the traffic through no matter what. It does show mTLS implemented on both of the services — if I do an istioctl authn tls-check, it shows mTLS is implemented — but I'm not supposed to be able to hit the service, right?