From YouTube: 09.23.2020 Community Meeting
Description
No description was provided for this meeting.
B: All right, so right now we have three items on the agenda. John, you don't have an item on the agenda this week.
B: Okay, great. All right, so then let's get started. First, let's see, we want to talk about the WebAssembly OCI spec. There was an announcement that we made last week: we started work with WebAssembly Hub, and as part of that there's an image artifact specification related to how those modules are stored and the metadata related to them.
B: That's something we wanted to make a broader community discussion, so that the specification could apply to any sort of WebAssembly module beyond our use cases, which are mostly Envoy. If we get the right set of things in that spec, it could apply broadly.
B: That way there's more compatibility in registries and such. Let's see, what do we have from our side on the WebAssembly front? Shane's here. Shane, do you want to talk about it a little bit?
D: To give those of you who don't know a little context on where we're coming from: you've probably heard of the OCI spec, the Open Container Initiative image spec. It's basically a specification for how to package a tarball; originally it stored Docker containers.
D: But really it's like a layered file system: the tarball is stored as a set of layers, and each layer has a SHA and a descriptor that describes what's in it, so they can really be used to store anything. Originally this was built around Docker, specifically for Docker containers, but we've seen other usages start to get adopted in the cloud space.
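The layer-plus-descriptor idea above can be sketched with a few lines of Python: a descriptor is just a media type, a sha256 digest of the blob, and its size. This is an illustrative sketch, not the official OCI tooling; the media-type string is one example value from the image spec.

```python
import hashlib
import json

def make_descriptor(blob: bytes, media_type: str) -> dict:
    """Return an OCI-style content descriptor for a blob."""
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(blob).hexdigest(),
        "size": len(blob),
    }

layer = b"example layer contents"
desc = make_descriptor(layer, "application/vnd.oci.image.layer.v1.tar+gzip")
print(json.dumps(desc, indent=2))
```

Because content is addressed by digest, any registry that speaks the OCI distribution protocol can store and deduplicate these blobs regardless of what they contain.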
D: The other use case that I'm aware of is people using it to package and distribute Helm charts. You get all the niceties of the way that you build and ship a Docker container: you've got a checksum, you can use caching, you can use a Docker daemon to store it, or something like Harbor. You can use an image repository to store and version your images, and it's just a very convenient way of distributing really any kind of assets that are going to your cloud workloads.
D: Everything can be done with this declarative configuration and so on. So the OCI image spec is there, and it describes how to store anything as an image. Then what we did: we built on top of it a wasm image spec, basically leveraging OCI.
D: So OCI basically tells you what the content of a layer is: you have a descriptor and the actual bytes that make up a layer. We're specifying what actually goes in the descriptor and what goes in the layer, specifically as a means of packaging and distributing OCI images that contain wasm modules. We had this problem with Gloo, where we wanted to allow Gloo, and eventually other systems as well...
D: We have tooling around it called wasme, and there's a whole solo-io wasm repo right now that you can check out. Part of the repo is about building and distributing wasm filters for Envoy, and that has applications in really all of our projects, but specifically in Gloo we had this use case where we're like, okay, we want to...
D: ...allow users to leverage wasm inside of Envoy and provide a convenient way to ship them, because it's compiled code at the end of the day, so it's an artifact and you need a way to distribute it. So we basically built tooling around it, and in the tooling, rather than just hard-coding some implementation, we actually built out a specification that works well for, I would like to say, the general distribution of wasm modules.
D: It's not just specific to Envoy. We can go into the actual details of the spec, but it's basically designed to say: okay, we understand that wasm is becoming a really powerful tool on the front end, and it's becoming more and more adopted for back-end use cases; Envoy is a great example.
D: We've also talked about using it to distribute GraphQL modules; there are really a lot of applications there. So we wanted to put something down that would be useful to the community, that we could get some consensus and convergence around. So definitely we're looking for feedback. We should add the link to the spec in the doc if it's not there.
B: Yep, I did, and I also put it in the chat. I don't know if we want to just show what's all in there or not.
D: But the main points of the spec are that the image consists of just two layers: one that contains the module binary itself and another that contains a config file. The config file is JSON that is expected to be loaded by a spec-compliant runtime. So there's this concept of a runtime.
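A two-layer image like the one described could have a manifest shaped roughly like the sketch below. The media-type strings and the config field name are assumptions from memory of the published spec, so check the repo for the exact values; the structure (one config descriptor, one wasm layer) is what matters here.

```python
import hashlib
import json

def descriptor(blob: bytes, media_type: str) -> dict:
    # OCI-style descriptor: media type, content digest, size in bytes.
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(blob).hexdigest(),
        "size": len(blob),
    }

module_bytes = b"\x00asm\x01\x00\x00\x00"  # minimal 8-byte wasm header
config_bytes = json.dumps({"abiVersion": "v0.2.1"}).encode()  # hypothetical field

manifest = {
    "schemaVersion": 2,
    "config": descriptor(config_bytes, "application/vnd.module.wasm.config.v1+json"),
    "layers": [
        descriptor(module_bytes, "application/vnd.module.wasm.content.layer.v1+wasm"),
    ],
}
print(json.dumps(manifest, indent=2))
```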
D: When you have a wasm module, it's always being run by something, whether in Envoy, or a browser, or some other runtime. I don't know all the different runtimes that exist outside of these use cases, but there are many. So you'll have a configuration file that describes certain properties that are common to every wasm module, such as the version of the interface that it's built for.
D: Then there would be configuration for Envoy: static configuration that it needs to be aware of in order to load this module. So when you build the wasm module, not only are you able to build the binary, but you can also build, as part of the image, some configuration parameters that otherwise your users would have to hard-code or keep track of somewhere.
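A toy sketch of why shipping that config alongside the binary helps: the runtime can refuse a module whose declared interface version it doesn't implement at load time, instead of users hard-coding the check. The field name and version strings here are hypothetical.

```python
import json

# Interface versions this (imaginary) runtime implements.
SUPPORTED_ABIS = {"v0.2.0", "v0.2.1"}

def can_load(config_json: str) -> bool:
    """Check a module's declared ABI version against the runtime's."""
    cfg = json.loads(config_json)
    return cfg.get("abiVersion") in SUPPORTED_ABIS

print(can_load('{"abiVersion": "v0.2.1"}'))  # compatible module
print(can_load('{"abiVersion": "v9.9.9"}'))  # incompatible module
```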
B: It'd be great to have him join one of the community meetings.
B: Cool. So any other questions or comments on the WebAssembly stuff? I've added the link to the announcement, as well as the repo, both in the chat and in the agenda document, so that folks can check it out, and then we can have a follow-up conversation after people review.
C: Not on the OCI stuff specifically, but while we're talking about wasm: we just merged support for Istio 1.7 WebAssembly Envoy filters into master this morning, so we should get a release out either today or tomorrow morning that will officially support Istio 1.7, which has been a big thing the community is looking for. We're excited to get that out pretty soon. And then also, we're getting really close to having full official Rust support for WebAssembly modules, which is really exciting and has been another big ask.
C: So we built a CLI called wasme, which is used to interact with all these wasm images using our OCI spec, and it also talks to our wasm hub platform, which is webassemblyhub.io. That's our online cloud service; it's free to use, and you use it to push and pull these OCI images of WebAssembly modules. Specifically, right now a lot of people are using this for Envoy filters in both Gloo and Istio, and also just, I'd say, default Envoy.
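References to images on webassemblyhub.io follow the familiar docker-style `registry/repository:tag` shape. A minimal sketch of splitting such a reference (the parsing rules are simplified for illustration; registries with ports, for example, are not handled, and the org/filter name is made up):

```python
def parse_ref(ref: str):
    """Split 'registry/repository:tag' into its three parts."""
    name, _, tag = ref.partition(":")
    registry, _, repository = name.partition("/")
    return registry, repository, tag or "latest"

print(parse_ref("webassemblyhub.io/example-org/my-filter:v0.1"))
# → ('webassemblyhub.io', 'example-org/my-filter', 'v0.1')
```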
C: But right now it's actually still a fork called envoy-wasm; they're hoping to upstream it really soon, hopefully. But yeah, we can do it via our wasme CLI: I think it's just `wasme istio deploy`, and then you can get your filter right out onto whatever namespace you want.
B: Cool, thanks Shane. I added the coming-soon stuff to the notes as well, and we can update that when those are out. Let's see. Next, if we have no questions or comments there, we can go to Joe to provide some progress on Service Mesh Hub on the AWS App Mesh side.
C: Yeah, for sure. Before we dive into App Mesh a little bit, just to give a broader project update on Service Mesh Hub over the last couple of weeks: since we last spoke, we've merged a handful of smaller fixes (docs fixes, bug fixes), and we've also added support for Istio 1.7 through our demo flow.
C: So if you've been testing out Istio 1.7 through istioctl 1.7 on your own machine, now you'll be able to do the same with Service Mesh Hub and our Istio multi-cluster demo command. Additionally, in terms of new functionality added recently, we've just merged in the ability to do subset routing on our own failover services, so you get a little bit more granular control when you're working with those.
C: But the larger initiative that we've been working on is App Mesh support in Service Mesh Hub. Scott has worked on a lot of the groundwork there and kind of the overall design.
C: As we build out this App Mesh support, you can expect the same discovery and networking design to apply here. We'll have discovery running against these different clusters, where we see the App Mesh CRDs, and in doing so we'll be able to discover injected workloads and the various App Mesh meshes (could be one or many) that are being managed by controllers running on registered clusters.
C: We will also be providing some traffic policy features; we're targeting timeouts, retries and traffic shifting in this initial push. App Mesh isn't quite as fully featured as Istio, but as features are added, and as we're able, we will try to maintain feature parity with the full extent of the Service Mesh Hub API on top of these App Mesh primitives. Then, down the line, a next-up feature we will start investigating is adding the ability to define the network edges within App Mesh.
C: App Mesh doesn't have the same kind of strong guarantees of mTLS and access policy as Istio, but it still has the ability to determine which services are allowed to communicate with one another. So we will look to leverage that, though maybe not in our own access policy APIs, so as not to mislead users into thinking they're getting those same security guarantees, but perhaps in a sibling API that will allow users to define those network edges through our API.
C: And in the community meeting doc I added a couple of links. One is a link to the documentation for the AWS App Mesh controller, to learn a little bit more about that project. In my experience so far, it's been much easier to manipulate App Mesh objects through those CRDs compared to kind of wrangling...
C: Sure, yeah. So the concepts documentation that I linked to there is more about the code architecture of Service Mesh Hub. It describes our reconcile loop, from the input being, say, Kubernetes services, workloads and...
C: ...injected resources, and the output being the API objects themselves. So in Istio that's virtual services and whatnot, and in App Mesh you have virtual nodes, virtual routers, virtual gateways. I linked to that doc to say that the App Mesh design is going to be very analogous to the Istio design that we have in place.
A: I guess just one sort of question on that: you mentioned that they don't have mTLS support, but I think they're in the process of working on that. Do you have visibility into how they're going to do it, and whether it's going to affect your API much? Or is that still kind of waiting to see what they come out with, with respect to that support?
E: Go ahead. Nope, we do have insight into that, but I'll let Joe and engineering speak to where we are with it. But yeah, we're working with AWS on that.
C: Right, exactly. Since that is kind of an ongoing development, we are focusing more on the traffic policy side of things before we get into the mTLS and access control.
B: All right, and let's see. So next up we have Celso from the community to talk a little bit about the work that he did for Gloo on ARM. I think that'd be fun. So if you can talk a little bit about it, that'd be great.
F: Yeah, I think it's okay if I share my screen; I just don't know if Zoom even has access to it. So, yeah, as I would guess, Zoom doesn't have access to the screen share. It doesn't matter. Essentially, this was something that started out...
F: Okay, macOS and all its permissions. But essentially, this was one of those projects that came out of boredom and frustration during quarantine. I've been trying out this idea of having a Raspberry Pi cluster, which is a pretty common setup, and also using it as my kind of lab for personal and professional use, because at the company I'm working at we are developing with Gloo.
F: So this quickly took me to the realization that, okay, we're not quite there yet: there's no support for ARM on this. And of course, this took me down the rabbit hole of trying to understand where exactly the issue lies.
F: It was pretty apparent that Envoy was already building with ARM support. So if Envoy was already building it, we should probably be able to take advantage of that and use those versions to build something that would run on that architecture. Turns out the issue is not quite as easy, and there are a few dependencies here.
F: The Envoy build did work on ARM, but envoy-gloo, for instance, had a couple of dependencies: one on the alpine-glibc image, which turns out is also not built for arm64, and even that one had a dependency on a package from another repo (which I'll be happy to share afterwards) that provides the binaries required to build it.
F: This alpine-glibc image, of course, again had no ARM support, so we kind of had to build the whole stack, from getting glibc to build for arm64 architectures on up.
F: So the changes were made mainly for those specific reasons, and afterwards, in terms of Gloo itself, it was mostly about changing the way that the builds are made. We had a bunch of hard-coded references to amd64 images.
F: It was mainly a matter of introducing a new variable during build time which defines the architecture. You can probably share the PR in the doc for the community. But that was mainly it: the work revolved around the build process and making sure that all the dependencies in the stack needed to get Gloo to build arm64 images would also be supported and available during the process.
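The build-time change described above can be sketched in a few lines: instead of hard-coding amd64 image names, derive them from an architecture variable so the same scripts can produce arm64 artifacts. The repository names and tag convention here are illustrative, not the project's actual ones.

```python
def image_name(repo: str, version: str, arch: str = "amd64") -> str:
    """Build an image reference, suffixing the repo for non-default arches."""
    suffix = "" if arch == "amd64" else f"-{arch}"
    return f"{repo}{suffix}:{version}"

print(image_name("quay.io/example/gloo", "1.5.0"))                # default amd64
print(image_name("quay.io/example/gloo", "1.5.0", arch="arm64"))  # arm64 variant
```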
F: We had a brief discussion around this on Slack, and we're thinking about probably using a1 instances on AWS, because I think those are arm64.
F: Yeah, it was not as easy as I would like, but again, it was actually pretty fast: once we had all the dependencies, it was pretty fast to do. There really weren't a lot of...
F: If I want to be 100% honest, other than making sure that it was running, I really haven't had the time to get back to it. I'm actually right in the middle of the process of building the cluster, now that I got all the parts, so I should be performing some more testing around the end of this month.
F: I hope. And I also believe there are still a couple of changes that we're going to need to make to envoy-gloo before we can really start releasing those images. By the way, Envoy itself is also not yet releasing official arm64 images, only on the envoy-dev channel, but that should happen pretty soon, and I think we should probably wait until then before focusing on it.
B: Looking forward to the progress. So you said end of the month, and then we can check back in. And then, if there are no additional comments on that, let's see, the thing we can do now is: we have Mihai. If your audio is good, I've got you on as the last to give an update on some progress.
G: Okay, so this is part of the limited trust discussion.
G: After some internal reviews and some meetings we had, we came to the conclusion that the fastest and best approach to get this right now is the approach you guys described for limited trust: basically using a gateway to create another, shared trust domain between the ingress and the egress, and not modifying the trust domains inside each cluster.
G: So we've recently started working on that model. From what I can tell, I think we need two external API changes for that to work. One of them is adding the egress gateway as information in the Istio mesh; it's currently not there, you only have the ingress gateway. The other is that the limited trust model in the virtual mesh is basically an empty interface: as far as I can tell, it has nothing related to how certificates get issued over time using limited trust.
G: In order to get that to work, we were thinking of trying to get the certificate agent, instead of pointing to istiod to issue the certificates and modify them there, to point to the ingress and egress gateways and just modify the certificates on those workloads.
G: Okay, so Istio supports configuring TLS origination and TLS termination on ingress gateways; I believe it does that through some secrets that you can configure. What we want is to not change the trust domain inside each cluster for in-mesh communication; for inter-mesh communication, we just want a trust domain established between the different clusters. So the first thing we want to do is not modify the certificates in Istio, so that...
E: So in this model that you are attacking here, the trust domains are different, and they're anchored in different root signing certificates?
G: Yeah, exactly, and then you have a third trust domain. If you have two clusters, you have trust domain A for cluster A and trust domain B for cluster B; then you have a third trust domain that's between A and B. This does have the problem of losing identity, but it also solves the problem of having to tamper with certificates in clusters to get them to talk with each other.
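The three-trust-domain setup just described can be sketched as a toy model: each cluster keeps its own trust domain, and a shared domain is anchored at the ingress/egress gateways, so cross-cluster traffic authenticates without re-rooting either cluster's certificates. Purely illustrative; the domain names are made up.

```python
# Each cluster keeps its own trust domain; both gateways share a third one.
TRUST = {
    "cluster-a": {"domain": "a.local", "gateway_domain": "shared.mesh"},
    "cluster-b": {"domain": "b.local", "gateway_domain": "shared.mesh"},
}

def can_authenticate(src: str, dst: str) -> bool:
    """In-cluster: same local domain. Cross-cluster: only via the shared
    gateway-to-gateway trust domain."""
    a, b = TRUST[src], TRUST[dst]
    return a["domain"] == b["domain"] or a["gateway_domain"] == b["gateway_domain"]

print(can_authenticate("cluster-a", "cluster-b"))  # via shared gateway domain
```

Note the identity-loss trade-off mentioned above: the destination only sees the shared gateway domain, not the originating workload's in-cluster identity.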
A: I think we talked about that a little while back: there are probably a number of different trust models that one can have, the shared root, or what Mihai just described, or even some things like what Istio is describing with SPIFFE federation and trust bundles and stuff like that. So it's probably something that we want to explore more holistically. But I think for now we decided that we just wanted to get something going, get something in front of people.
A: That's also partly why I was asking about where App Mesh is going with mTLS, because they might have some of their own models of how you federate trust across clusters. So what they end up supporting there might influence how we want to approach this.
B: All right. Is there anything else, any other questions or comments or things that people would like to add? We've gone through all the items officially added to the agenda.