From YouTube: 10.07.2020 community call
A
Actually, to start: since Kevin is not here, let's go with Joe and then Shane. If you all want to give some updates and any feedback requests for the community, then we've got two questions from John that we can discuss after.
B
Sounds good to me; I can get started. Over the last couple of weeks, the Service Mesh Hub team's focus has shifted to some internal projects, but that said, we have still merged a handful of usability improvements. One such improvement has to do with the statuses on our resources, specifically the access policy.
B
We've started enumerating the workloads and the traffic targets that are selected by those access policies, through the service accounts that they match on. That gives you a better bird's-eye view of which workloads have access to which services, so you can tell the effect that your security policies are actually having. Additionally, this was just merged and will be in the 0.9.0 release.
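As a rough illustration of the idea (the resource and status field names here are assumptions for the sketch, not the project's actual schema), an access policy whose status enumerates the workloads and traffic targets it selected might look something like:

```yaml
# Hypothetical sketch only: the API group and status field names are
# illustrative assumptions, not Service Mesh Hub's published API.
apiVersion: networking.example.io/v1alpha1
kind: AccessPolicy
metadata:
  name: allow-frontend-to-reviews
spec:
  sourceSelector:
    serviceAccounts:
      - name: frontend
        namespace: web
        cluster: cluster-1
  destinationSelector:
    services:
      - name: reviews
        namespace: bookinfo
        cluster: cluster-2
status:
  # Filled in by the control plane: everything this policy matched,
  # giving the bird's-eye view described above.
  selectedWorkloads:
    - frontend-v1.web.cluster-1
  selectedTrafficTargets:
    - reviews.bookinfo.cluster-2
```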
B
We will also have validation schemas on our CRDs, which will be a big usability improvement for handwritten YAML. It should help us avoid typos when we're going through our own internal testing, and hopefully help out our users as well.
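For context, CRD validation schemas are the standard Kubernetes mechanism here: an OpenAPI v3 schema embedded in the CustomResourceDefinition makes the API server reject typos at apply time. A minimal generic example (the `Widget` resource is made up for illustration, not one of the project's CRDs):

```yaml
# Standard Kubernetes CRD carrying an OpenAPI v3 validation schema.
# "Widget" is an example resource, not a Service Mesh Hub CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  minimum: 1
                mode:
                  type: string
                  enum: ["mtls", "plaintext"]  # a typo here fails at apply time
              required: ["replicas"]
```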
B
Mesh discovery is still in progress. The discovery PR was actually just rebased after we merged some of the more structural changes made to support it, but it should be merged shortly as well, and that should be in 0.9.0 or 0.9.1, before we move on to the traffic policy translation, which is the next story there. And I believe that's it; I think that's what we've been up to for the last couple of weeks.
A
Well, are there any open issues or anything that you're looking for community feedback on?
B
We've had a couple of feature requests and questions come in from the community. It would be great if people could check in there. There's nothing open right now, but as you're testing, feel free to file some issues. We had a great discussion in Slack yesterday about a problem a user was having.
B
I believe it was with the way that our destination rules assume mTLS, so expect some changes there as well. We might have to introduce a settings construct in Service Mesh Hub, or perhaps we'll use annotation-driven config. We're always open to feedback. That's one of the most recent issues; I can link it in the chat in the Zoom call, or I'll even put it in the meeting notes.
A
Yeah, and actually, if you can give a little more context on what that was: if it's prompting some change, or changing how something works, this might be a good place to bring it up and discuss.
B
Yeah, for sure. Currently, the Service Mesh Hub federation system will assume that you want mTLS enabled on your destination rules to the remote cluster. This is a good assumption in most cases, since shared trust is the federation model we support today, so we expect workloads to be able to communicate over mTLS.
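In Istio terms, this corresponds to the generated DestinationRule defaulting the client TLS mode to `ISTIO_MUTUAL`. A sketch of such a rule (the host name is a placeholder for whatever the federation layer generates):

```yaml
# Istio DestinationRule for a federated remote service, with the
# client-side TLS mode assumed to be mutual TLS. The host is a
# placeholder, not a name the project necessarily produces.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-remote
spec:
  host: reviews.bookinfo.svc.cluster-2.global
  trafficPolicy:
    tls:
      # This is the assumption under discussion: mTLS is enabled
      # unconditionally for traffic to the remote cluster.
      mode: ISTIO_MUTUAL
```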
B
So I'm not sure if anybody has any feedback about that point in particular. I know in Gloo there have been some pros and cons to having a central settings construct, as opposed to letting users, on the fly, just slap an annotation on to tweak behavior here and there for edge-case services.
B
Maybe it's something we need more data for, but I'm curious if anybody has any thoughts to that end: whether you'd prefer configuration through annotations, or more centralized config through globalized settings for features.
B
Yeah, the implication there would be a global default, with the ability to opt out on a one-off basis, maybe with a traffic policy on a particular service. That's versus a configuration that's a bit looser, with just an annotation on a service, which is kind of the opposite extreme.
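For illustration only, the looser end of that spectrum would look something like the following (the annotation key is hypothetical, not an implemented API):

```yaml
# Hypothetical sketch: a per-service annotation opting out of a
# global mTLS default. The annotation key is made up for
# illustration and is not an implemented Service Mesh Hub API.
apiVersion: v1
kind: Service
metadata:
  name: legacy-billing
  namespace: payments
  annotations:
    federation.example.io/mtls: "disabled"
spec:
  selector:
    app: legacy-billing
  ports:
    - port: 8080
```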
C
Well, I guess having a global default is always a good thing, but that's almost a little bit of icing on the cake. I think more often than not you're going to find the need for more granularity. Even consider the fact that with a given cluster you're not always going to be federating with a single peer; you might be federating with multiple peers, and some of those peers you might only want to do mTLS with, while with some of the other peers maybe you don't. So I think the granularity is almost certainly going to become a requirement at some point, but a global default is always a good thing too.
B
Yeah, that makes sense. So I'm hearing that the global default seems like something we'd want to be able to support, but maybe it's also an argument against the sort of one-off configuration at the service level. For that multiple-peers case you raised, maybe we'd want to be able to select which destinations receive which mTLS option.
C
Basis,
at
least,
would
be
would
be
probably
required.
I
think
also
this
is
going
to
so
one
of
the
other
things
that
I
was
thinking
about
as
you're
saying
this
is
I
mean,
isn't
it
more
complicated
than
just
mtls
on
or
mtls
off,
because
there
are
a
couple
different
flavors
of
tls,
you
have
whether
you're
doing
client,
side
or
server
side,
or
maybe
not
even
tls
at
all
right
is
that
is
that
factored
into
the
discussion
already
or
is
that
a
an
additional
component
of
the
evaluation.
B
Yeah, that's definitely something we should consider as we implement a solution for this. That's a good point; it's definitely richer than just on/off.
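In Istio's API, for example, the client-side TLS setting is an enum rather than a boolean, which is exactly the richness being discussed:

```yaml
# The TLS setting on an Istio DestinationRule is an enum, not on/off.
# Modes, per Istio's client TLS settings:
#   DISABLE      - no TLS to the upstream
#   SIMPLE       - one-way TLS (server authenticated only)
#   MUTUAL       - mTLS with user-provided client certificates
#   ISTIO_MUTUAL - mTLS using Istio's automatically managed certs
trafficPolicy:
  tls:
    mode: SIMPLE
```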
C
And this might influence something I think Mihai brought up last week (Mihai isn't able to attend today). I think he also had some discussions in this area, because it affects the limited-trust case a little bit, depending on how you're going to do this: he was going to need to add some API things with respect to when you have the full root trust versus the limited trust, and how the certificates come in. So there's a possibility that what you're describing has an influence on what he's doing as well. Is that captured within that issue?
C
I didn't pull it up yet. Within that issue, Joe, have you captured your current path, your current plan of record, or does it just capture the issue itself?
C
Okay, yeah, I think we should look at that, and especially I think we'll want to contrast it. Mihai, again, is not here, but Daniel is; I think those guys will want to contrast it a little bit with what they were doing for limited trust as well. So that sounds perfect. Thank you.
A
Cool, thanks Joe. Shane, do you have something on WebAssembly?
B
Yeah, absolutely. On the WebAssembly front, the big-ticket item is that we got Rust support into master, so we have support for Rust filters now, which is great. We haven't actually cut a release for it just yet; we were waiting on one or two things from the upstream Rust SDK. But there are a couple of users who are building straight off of master, and we're getting good reports that it's working for them early, which is always good to hear. We're hoping to get a release out in the next few days.
B
The other big news we're really excited about is that the author of the TinyGo library has opened a pull request, which has also been merged. So now we have TinyGo support: anyone who wants to use Go to build Envoy filters in wasm can now do that, and again, users have been trying it out and so far so good. In terms of feedback from the community, we've also gotten some good feedback on the OCI image spec.
B
So
that's
always
good
to
see
and
we've
made
taken
a
couple
action
items
from
that,
and
you
know
it's
a
it's
a
great
collaborative
process
we're
seeing
a
lot
more
action.
Our
wasm
channel
in
the
community
slack
has
also
you
know,
seen.
D
B
A
Well, are there any other opens or pendings, or areas where you'd like feedback or need more engagement from the community?
B
Not right now; the OCI image spec is the big one we're trying to solicit feedback on. We have seen an uptick in community pull requests as well, which is great to see. Another user just added StatefulSet deployments as a feature, which is great. We're very appreciative of all the community efforts there, and we're supporting everyone who wants to get pull requests up; just come ping us in the Slack channel or open a pull request, and we can help.
A
Cool, thanks. Let's see, last week we announced Gloo 1.5; there are updates there for open source and enterprise. Oh, Kevin's on, so Kevin, if you want to give just a couple of highlights from the open source side.
A
Like, anything major. I'll link in the changelogs and stuff in here, and then what else we're working on on the OSS side would be great.
E
Sure. Yeah, definitely check the changelog, because at a high level the bulk of the work is user improvements, bug fixes, and security updates, things of that nature. But some of the new features include gRPC-JSON transcoding, and we moved rate limiting into CRDs; we created a CRD for rate limit config so that you can reference it, so it's a more templatized rate limiting config. And then there are the integrations, some Gloo Fed stuff that Joe could talk about further as well.
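The referenceable rate limit CRD looks roughly like this (a sketch from memory of Gloo's RateLimitConfig API; exact fields may differ by version), with routes then referencing it by name instead of inlining the config:

```yaml
# Sketch of a Gloo RateLimitConfig: a named, reusable rate limit
# that routes can reference. Field names are approximate and may
# differ across Gloo versions.
apiVersion: ratelimit.solo.io/v1alpha1
kind: RateLimitConfig
metadata:
  name: per-user-limit
  namespace: gloo-system
spec:
  raw:
    descriptors:
      - key: userid
        rateLimit:
          requestsPerUnit: 100
          unit: MINUTE
    rateLimits:
      - actions:
          # Derive the descriptor value from a request header.
          - requestHeaders:
              headerName: x-user-id
              descriptorKey: userid
```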
A
And then can you talk about, from the open source side, what are some of the next things in progress?
E
Sure. We're working on a new rate limit API that handles wildcarding and missing descriptors. We're still going to support the old API, but this one is a little more flexible about fields that are missing, for advanced rate limiting use cases. We're also updating our ext-auth implementation and server to handle more complex flows with boolean logic, ANDing and ORing services together. Right now, if you define multiple kinds of authentication on a route, all of them have to succeed for the route to be authorized; that's being split up so you can control it with more granularity.
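A hypothetical sketch of what that boolean composition could look like on an auth config (the `booleanExpr` field and its syntax are assumptions illustrating the feature being described, not a finished API; the two auth steps are placeholders):

```yaml
# Hypothetical sketch of boolean auth composition. The booleanExpr
# field name and syntax are assumptions, not a shipped API; the
# individual auth steps are placeholders.
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: api-key-or-oauth
  namespace: gloo-system
spec:
  configs:
    - name: apiKey
      apiKeyAuth:
        labelSelector:
          team: billing
    - name: oauth
      oauth2: {}   # placeholder for a full OAuth2 configuration
  # Today, all configs must pass (an implicit AND); the work described
  # above would allow an explicit expression like this instead:
  booleanExpr: "apiKey || oauth"
```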
C
Yeah, I think the first one is really just a question on direction. I know a lot of groups have bounced around on how they want to do installation or deployment of their main modules. Has this group talked about moving to something like an operator framework? Well, I guess you've transitioned that a little bit: you had some CLI scripts, and then you have meshctl that handles it now. Is that the long-term strategy, to stick with that? Has there been discussion about moving to an operator framework that would manage the lifecycle of the Service Mesh Hub modules? I just kind of wanted some opinion on direction: where you think you're headed, or what you're planning to do there.
B
Yeah, so I think our main focus for now is installation through Helm, upgrades through Helm, and supporting a workflow that is ideally very simple and compatible with GitOps. We're not necessarily focused on an operator framework in the near term, although over time, especially if the community continues to go in that direction, I think it would make sense for us to look at it at that point.
B
But
for
now,
like
helm
and
then
mesh
gtl
for
like
cluster
registration
is
like
the
primary
workflow
that
we're
targeting.
D
Yeah, can I ask: we're talking about an operator to do what, exactly?
C
With Istio, they have a little CLI that can deploy the operator and the operator CRD, but then the operator itself takes care of all the mesh config and all the Istio modules; everything from there on, it handles lifecycle management. I'm not necessarily asking for this; I just kind of want to understand.
D
Right
now
is
a
relative
is
pretty
simple
in
terms
of
what
needs
to
be
deployed.
It's
really
just
a
set
of
of
our
back
configurations,
and
then
you
get
a
few
pods
deployed.
D
None of those things would be a concern for Service Mesh Hub, given its current implementation, so I don't know if there's a strong need. You could use something like Argo or Flux for a pretty painless integration of the Helm chart into CI/CD, and I think you could basically get all of your lifecycle management needs through that.
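As one concrete shape of that, an Argo CD Application can manage a Helm chart release declaratively (the repo URL and chart name below are placeholders, not the project's published chart coordinates):

```yaml
# Argo CD Application managing a Helm chart release declaratively.
# The repoURL and chart name are placeholders for illustration.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-mesh-hub
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # placeholder Helm repo
    chart: service-mesh-hub               # placeholder chart name
    targetRevision: 0.9.0
  destination:
    server: https://kubernetes.default.svc
    namespace: service-mesh-hub
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from the chart
      selfHeal: true  # revert out-of-band changes to match git
```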
D
I think what I'm trying to communicate is that, given the simplicity of the current chart, it would be recommended to just leverage that. If we find that things get more complex, particularly if there's a need to do live upgrades where we want to avoid potential outages caused by reinstallation of components on the cluster, then at that point I think maybe we'll look into something more robust. We have discussed using an operator for the cluster registration piece, because that is something that is not so simple to do with Helm.
C
Yeah, yeah, no, I recognize that. Ultimately the upgrade is the perfect poster-child example, and that's one of the things that tripped up Istio a lot: how to handle the upgrade. Okay, so there's a good chance this will come up again in the future, but at least for now it sounds like Helm is the thing to stick with, right?
D
Because
you
know
in
particular,
because
upgrading
service
mesh
hub
should
not
cause
result
in
any
outages.
There's
no,
like
you
know,
service
misha,
being
offline
should
not
prevent
envoy
from
getting
configuration,
and
you
know
where
istio
being
offline
can
be
a
more
more
of
a
critical.
You
know,
can
result
in
outages.
C
I
don't
know
that
I
would
agree
with
that
assessment.
Istio
has
the
same.
I
mean
istio.
Has
the
same
model
of
envoy
will
persist
the
config
it
has
until
the
control
planes
back
up
and
istio
is
controlling
istio
ends
up
controlling
inter
intra-cluster
things.
Just
like
you,
control,
inter-cluster
things.
So
I'm
not
sure
I
would
agree
with
that.
You
know
argument
that
that
you're
less
important
or
less
impactful,
I
should
say
than
than
istio,
but
anyway
I
I
I'm
happy
with
your
answer.
I'm
not
trying
to
push
for
any
particular
thing.
C
Yeah, unfortunately Mihai wasn't able to join today. He was going to try to show some samples of the code he was working on. He and Daniel have most of this coded up, but they've been having some trouble getting everything actually working in a working environment, so they weren't quite ready to post the PR. Daniel's on the call, so I'm sure he'll jump in if he wants to add anything.
A
Sounds good. That's it for the things that were on the agenda; is there anything else from folks on the call today?