From YouTube: 10.21.2020 Community meeting
A
All right, first thing we have on here is some things from Service Mesh Hub. So, Joe, would you like to take it away there?
B
Sure, yeah. So the latest release in Service Mesh Hub went out since the last meeting, about six days ago now; that is 0.9.0. A couple of interesting changes there include that we now have CRD validation schemas for all of our resources, so that'll make it a lot easier to debug problems in your YAML. And we also improved the validation within the Service Mesh Hub controllers themselves, particularly networking, where we will validate that the referenced config targets on traffic policies exist as part of the reconciliation loop.
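As a rough illustration of what the CRD validation schemas buy you: with a structural schema in place, a typo'd field in a TrafficPolicy gets rejected at apply time rather than silently ignored until reconciliation. This is a minimal sketch; the group/version and field names are assumptions for illustration, not copied verbatim from the released API.

```yaml
# Hypothetical TrafficPolicy with a misspelled field. With a CRD
# validation schema, `kubectl apply` rejects this immediately instead
# of letting the unknown field silently no-op at reconciliation time.
apiVersion: networking.smh.solo.io/v1alpha2   # assumed group/version
kind: TrafficPolicy
metadata:
  name: reviews-retries
  namespace: service-mesh-hub
spec:
  retries:
    atempts: 3   # typo: should be "attempts"; schema validation catches it
```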
B
Also, we added support for OSM 0.4. Largely documentation changes there, and just some simple tweaks to our integration, but we're now working with the latest version of OSM. And yeah, the other interesting feature that we added is this Settings CRD.
B
This came out of a discussion that we had at the last community meeting, where we talked about ways that we could allow users to specify the mTLS behavior in the destination rules outputted by Service Mesh Hub. So now, on our Settings CRD, you can specify the default mTLS setting you want on the generated destination rules from the networking controller, and you can override that one global default with traffic policies selecting individual traffic targets. So, just a little bit more control for users on the mTLS settings, as opposed to the always-ISTIO_MUTUAL setting we had in the past. Now users can use different mTLS policies for different traffic targets.
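For concreteness, here is a hedged sketch of what that could look like in YAML: a global mTLS default on the Settings CRD, plus a TrafficPolicy that overrides it for one traffic target. The field names and group/versions are assumptions based on this discussion, not verified against the release.

```yaml
# Global default applied to all generated DestinationRules (assumed shape).
apiVersion: settings.smh.solo.io/v1alpha2
kind: Settings
metadata:
  name: settings
  namespace: service-mesh-hub
spec:
  mtls:
    istio:
      tlsMode: ISTIO_MUTUAL   # the previous hard-coded behavior, now a default
---
# Per-target override: a TrafficPolicy selecting one traffic target.
apiVersion: networking.smh.solo.io/v1alpha2
kind: TrafficPolicy
metadata:
  name: relax-mtls-for-legacy
  namespace: service-mesh-hub
spec:
  destinationSelector:
  - kubeServiceRefs:
      services:
      - name: legacy-svc
        namespace: default
        clusterName: cluster-1
  mtls:
    istio:
      tlsMode: DISABLE        # overrides the global default for this target only
```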
B
Beyond that, the rest of the changes were largely just fixes and improvements, which we're always merging in, and I'm sure we'll have a new patch release for 0.9 any day now as well, with more minor enhancements. With that, Scott, do you want to talk about the Service Mesh Hub extensions that you're working on?
C
Sure. I'll put it in the chat just to make sure everybody can take a look at it. So we have here... maybe I should share my screen, actually... wondering if that...
C
All right, you guys see this? Looks good? All right. So, what is Extensions? Let's see. So, basically, we've come into this using other projects, and it's something that's come up for us in the past with Gloo, which is basically the desire to make an extensibility layer to allow people to update and modify the behavior of Service Mesh Hub without having to modify the code.
C
So we've created an interface that will allow you to run a gRPC server outside of, adjacent to, Service Mesh Hub, and configure Service Mesh Hub to communicate with it, in order to have the configuration that it's going to output patched using code. So if any of you are familiar with the EnvoyFilter CRD in Istio, this is kind of an equivalent notion here.
C
So we have this concept of a networking extension service, and Service Mesh Hub is actually going to go to that service, if you configure it to do so, and fetch patches for its config before it writes it into the cluster. So, for example, say there's a feature that you want to expose to your users: as a Service Mesh Hub user, you'd like to add a new CRD to your system to configure some additional things on Istio that you want to use.
C
You can do so using this interface. We actually plan to develop some closed-source extensions for Service Mesh Hub, and we will be leveraging this interface, but it's designed to be a generic extension point for building plugins for Service Mesh Hub.
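As a hedged sketch of how wiring this up might look, the Settings CRD would point Service Mesh Hub at the out-of-process gRPC server. The networkingExtensionServers field and its shape here are assumptions based on this discussion, not a confirmed API.

```yaml
# Registering an out-of-process gRPC extension server with Service Mesh Hub
# (field names assumed for illustration).
apiVersion: settings.smh.solo.io/v1alpha2
kind: Settings
metadata:
  name: settings
  namespace: service-mesh-hub
spec:
  networkingExtensionServers:
  - address: smh-extension.extensions.svc.cluster.local:8080  # your gRPC server
    insecure: true               # plaintext for in-cluster traffic; use TLS otherwise
    reconnectOnNetworkFailures: true
```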
D
Yeah, that resonates a lot with us. You know, we had a similar end goal, but we hadn't really spent much time thinking about how to implement it, or even started conversations with you on how to implement it. So I think this is very interesting.
C
If you configure Service Mesh Hub with one or more networking extensions, which is a server that implements this gRPC interface, then Service Mesh Hub will request patches from the server before applying the configuration. So, for example, let's say you give us a traffic policy, and the traffic policy results in a virtual service, but you have your own custom extension. Let's say you have your own CRD that you want to modify; the user is going to say, for example, add, you know, a CORS policy.
C
I mean, you can do that with Service Mesh Hub today, so that's maybe not the best example. One of the examples that I'll give, that we're building it for, is Wasm. So we want to build some enterprise Wasm extensions; they'll be a part of a Service Mesh Hub... I'm sorry, I don't know if I'm going too far off track here, but basically the point is we have some plans to extend the functionality of Service Mesh Hub.
C
We want to add some additional CRDs and some additional APIs and functionality, and the idea is that Service Mesh Hub core will reach out to an enterprise extension server; the extension server will do its own calculation, its own translation, and append to those resources and submit them back to Service Mesh Hub; and Service Mesh Hub will finalize everything and actually write the config out to the cluster.
C
And this allows... it's almost like a filter chain, in a way, like an Envoy filter, where you can basically process all the config that Service Mesh Hub is generating and modify it in any way you see fit before it gets written to storage, to allow basically full customization of Service Mesh Hub behavior.
D
So, Scott, if I'm understanding correctly, this is a little bit like a front-end extension, in the sense that it'll take some config that the user inputs, possibly massage it with the extensions, and then commit it to memory. Is there anything on the sort of back end that will operate on those resources to affect the running of the meshes in the clusters? Or is that not in the scope of this particular PR?
C
To clear this up a little, I think it will help if I draw a quick diagram here. So let's say I have Service Mesh Hub core, and Service Mesh Hub core is gonna get some set of resources. You know, it might be, let's say, some traffic targets, workloads, and some meshes, and, you know, these have had some policies applied, with some access policy and some virtual mesh and some traffic policy, see it being applied back there.
C
Okay, this comes into Service Mesh Hub. Normally, what happens is Service Mesh Hub will then go out directly to Istio with the configuration; it's going to send it, you know, some virtual services, destination rules, service entries, etc. Is that...
C
And to extend this, you know, this translation: the user will have a view of the user config at this extension point. Maybe let's choose a better thingy here, right: this is going to be my SMH extension.
C
Here it's going to see each traffic target and the associated policies, each workload, each mesh, and it will also see the outputs that the Service Mesh Hub translation has produced. So it's going to see a tuple of these two sets of data, and it will be able to make modifications to the outputs here. So you'll be able to know, for example, the virtual service associated with a traffic target, and you'll be able to edit it.
C
Let's say you have your own CRD that points to that traffic target, that tells you to redact something from what Service Mesh Hub is configuring, or append something to it: you'll be able to do that from this extension point. And it all happens over gRPC, so there's no need to recompile or rebuild Service Mesh Hub.
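To make the append case concrete, here is a hedged before/after sketch: Service Mesh Hub's translation emits a VirtualService for a traffic target, and an extension appends a CORS policy (the example from earlier) before the config is written to the cluster. The resource content is illustrative; only the corsPolicy stanza is standard Istio API.

```yaml
# VirtualService as produced by Service Mesh Hub translation (illustrative):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews.bookinfo.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.bookinfo.svc.cluster.local
    # Appended by the extension server's patch before SMH writes the config:
    corsPolicy:
      allowOrigins:
      - exact: https://example.com
      allowMethods:
      - GET
```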
C
Cool. I can just... maybe we'll share this in the doc or something.
C
That's the long and short of it. The details are in that networking API PR, which I'll make sure is linked in the doc. But essentially, it's just three functions; or, there's a fourth function. So, the three functions are for getting patches, because we partition resources for each discovery object: each traffic target has a bundle of resources which you can patch, each workload has a bundle of resources that you can patch, and each mesh has a bundle that you can patch. And then the last thing is actually the ability to signal Service Mesh Hub to resync.
C
So essentially, we have one set of functions for getting the patches and another function for actually notifying Service Mesh Hub that we should re-sync. This is basically the interface that extensions will be able to implement.
C
So that is it for that. We currently have the API open; we're happy to take feedback on that, but we're also moving ahead with the implementation, and we can always adjust, you know, pending feedback from users.
C
Part of the idea here is to make something that's common both for ourselves to use as well as our users, and make this a generic, powerful extensibility layer for Service Mesh Hub.
C
Now, currently it only works for Istio, but it should be relatively simple to extend support to other mesh types. It's really all about this generated-resources object here, which encapsulates the outputs of Service Mesh Hub. Currently it only includes Istio outputs, but if we added, let's say, an App Mesh CRD here, we could add more types in the future for other meshes.
C
Okay, the space itself is generic; it's just that the only types of patches that are currently supported are for Istio config, but all we have to do is expand the type of patches that we apply.
C
Very good. If there's no more questions, I will give it back to Betty.
A
Cool. So those were the only two updates from our crew, from the Solo team, that were in the document, in the agenda, this time. Is there anyone else on the call that has questions, updates, things they'd like to share?
E
Yeah, I have a couple of things to mention. So, first of all, you can expect a PR for limited trust late this week or early next week, probably, unless something else happens. Two questions on this. So, one is: we've hit an issue with the EnvoyFilter that you guys use to rename the services. It looks like it only works for a pass-through gateway.
E
As
far
as
we
can
tell-
and
we
were
wondering
if
you
guys
know
anything
more
about
this,
how
do
if
you've
ever
hit
something
good
using
the
unwavering,
not
a
pass
for
gateway
with
something
that
actually
does
the
donation.
C
So maybe we can take this into a more detailed discussion offline, but I would say, just as a... I'm not...
C
I think we need to get more detail on what exactly you mean by "the EnvoyFilter won't work," because the EnvoyFilter is basically a generic patch to the Istio-generated Envoy config; I believe we can apply patches to any type of gateway. It just might be the way that we create the config for shared trust. Right now we use pass-through, TCP pass-through, on the gateway, so that the gateway doesn't have to handle TLS; it's just the upstream service that does.
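For reference, the pass-through setup being described looks roughly like the following Istio Gateway, which forwards mTLS traffic based on SNI without terminating TLS at the gateway (contrast tls.mode SIMPLE or MUTUAL, where the gateway terminates and may re-originate TLS). This is a hedged sketch of the general pattern, not the exact config Service Mesh Hub generates.

```yaml
# Cross-cluster gateway doing SNI-based TLS pass-through: upstream
# workloads handle TLS, the gateway never terminates it.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-cluster-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15443        # conventional port for cross-cluster mTLS traffic
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH   # route on SNI; no TLS termination at the gateway
    hosts:
    - "*.local"
```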
E
The
option
so
our
problem
right
now
and
is
that
there
is
a
significant
difference
in
configuration
between
a
password
gateway
and
something
that
does
tls
termination
and
origination
and
the
envoy
filter.
That's
currently
being
applied.
It
doesn't
seem
to
be
applied
to
the
gateway
in
the
correct
place
and
the
traffic
doesn't
go
as
expected.
What
we
did
right
now
for
limited
trust
and
why
I'd
like
to
get
your
feedback
on
this
is
we're
not
we're
not
creating
downfall,
filter
anymore
and
we're
creating
virtual
services
for
each
of.
C
I mean, I would say, off the bat, it's probably always preferable to not use EnvoyFilters where possible, because they tend to be fragile. But, you know, we can probably take it offline and discuss in more detail the implementation you guys have with them, you know.
E
Oh,
I
think
the
best
when
we
get
the
pr
up
and
running
we
can
discuss
on
the
pr
directly
right
now.
I
want
to
add
some
unit
tests
to
my
code,
so
it's
not
naked
in
there,
but
that
should
be
about
it.
E
Happy to. I think I did find an issue posted on the Istio GitHub from August that's about this, but the only thing that happened to it was that a bunch of labels were added to it. Nobody actually replied to it, and it's still open. So, yeah, that's all for me.
D
So,
just
related
to
that
pr,
we
were
planning
documentation
to
be
separate
from
the
pr.
I
assume
there's
no
immediate
concerns
with
that.
B
Yeah,
I
think,
that's
fine.
We
could
just
file
an
issue
to
track
the
docs
coming
afterwards.
D
Yeah, so, I mean, in one sense the documentation impact can be really small because, as I said, the API change is, you know, very, very small. But the impact to the layout of the doc, and to some of your guides, you know, the "okay, the first thing you do is you establish a common root" and stuff like that?
D
That
probably
might
be
a
little,
not
tricky,
but
just
take
a
little
bit
more
time
to
make
sure
you
convey
the
right
right
message
on
how
those
guides
should
be
played
out,
or
even
you
know,
you
guys
might
have
some
input
on
to
even
whether
that
is
an
entirely
separate
kind
of
workflow
versus
a
option
within
the
current
workflow.
So
those
are
some
details
that
we
figure
we'll
follow
up
with
on
a
separate,
a
separate
pr
after
that,
after
you
least
see
the
code
and
how
that
how
it
falls
together.
A
Good, cool. Anything else, anybody else, around the horn? Just let in a couple more people into the call.
D
Yeah, I guess there's one other sort of thing that we've touched on a few times: has there been any more thought or planning about service discovery, whether you allow service discovery more generically?
D
You
know
service
discovery
from
a
controller,
or
even
things
like
how
to
control
which
services
you
want
to
import
or
export
for
lack
of
a
better
word.
Even
within
the
current
mechanisms
of
service
discovery.
Has
that
has
there
been
any
planning
or
discussions
around
those
areas,
because
that's
an
area
we
keep
coming
up
against
in
our
some
of
our
work.
So
we
want
to,
if
you
have
been
doing
some
planning
we'd
like
to
you
know,
hear
about
it.
C
It
would
be
great
to
hear
what
the
requirements
are
that
you're
working
with
like
what
kind
of
custom
back
ends.
Do
you
want
so
it
sounds
like
you
want
to
be
able
to
add
custom
service
discovery
endpoints
as
well
as
filter
out
some
of
the
ones
that
surface
bishop,
maybe
aggressively,
discovers
on
its
own.
D
Well,
yeah,
there's
there's
a
there's,
a
multitude
of
use
cases,
so
I'm
being
a
little
generic
here,
scott,
but
yeah.
I
think
there's
there's
there's
sort
of
two
things,
one.
If
you
remember
way
back,
we
talked
about
some
of
the
stuff
we're
doing
with
network
service
mesh
where
the
service
discovery
doesn't
the
services
and
how
they're
exposed
network
wise
are
a
little
bit
different
than
what
kubernetes
normally
supports,
so
that
presents
that
presents
a
little
bit
of
the
first
use
case
of
you
know.
D
Having
other
means
to
get
service
discovery
in
that
could
be
useful
for
service
meshes
that
run
on
top
of
network
service
mesh,
for
example.
So
that's
sort
of
one
use
case.
D
The
second
use
case
is
what
we're
hearing
you
know
in
some
other
discussions
that
people
don't
necessarily
want
all
services
in
every
cluster
to
be
discoverable
that
some
some
some
services
should
only
be
more
internal
to
the
particular
mesh
that
that
in
that
cluster
versus
some
services,
you
know
you
want
to
be
more
broadly
and
set
up
gateways
for
and
they're
going
to
be
more
injured
cluster.
So
they're
kind
of
two
separable
use
cases
in
that
in
that
sense,
but
again
we
could
be
more
specific.
I
guess
I'm
just
asking
you
know.
D
Is
there
already
ongoing
work?
Is
there
any
any
prs,
and
you
know
slack
chat
that
we
should
be
paying
more
attention
to
in
this
area,
or
should
we
just
bring
it
up
at
the
next?
Some
of
the
next
couple
meetings.
C
Those are two options, and again, we've not implemented this in Service Mesh Hub, but that's the model that we have in Gloo, which has worked well for us there.
C
As
far
as
custom,
extensions
to
the
registry
of
traffic
targets
and
and
workloads
that
we
have
in
service
mesh
shop,
we
definitely
talked
a
lot
about
it.
We
just
have
not
started
the
work
on
it
yet,
but
we
have
done
a
bit
of
planning
around
that.
A
Okay, anyone else? Any other questions or comments, topics you all want to bring up? Okay, going once, going twice, and then we will call it for this meeting. The next one is in the beginning of November. And I want to say that we'll only end up having one meeting in November, because the second one either lands during the week of KubeCon or Thanksgiving, so I figured people are probably not around to meet then.
A
They'll
be
busy
with
those
things
so
we'll
just
I'll
get
the
notes
up
started
for
the
next
one,
and
then
we
will
see
you
all
in
two
weeks,
all
right.