From YouTube: WebAssembly Filters Meeting (July 26th, 2021)
Description
WebAssembly Filters Meeting - July 26th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A: All right, welcome everybody. It's the WebAssembly Filters meeting for the 26th of July, 2021. It's a Monday. We've got about 45 minutes set out to discuss this topic, and, well, we did have a healthy agenda. It looks like Anirban is... he's not on the call, and I guess his agenda item went away. Maybe I'm behind on some Slack messages on this.
A: All right, so he's got a conflict today; he won't be able to make it. Anybody else have topics for today's discussion?
A: Good. Well, so, let's rearrange the schedule. Anirban is next time, on Envoy's capabilities with respect to traffic mirroring. That puts Garroth toward the top, and I've got a topic to discuss on filter support across various service meshes.
A: Okay, good, so Utkarsh is joining in a second. While we take attendance, let's go ahead and jump into our first topic, if we can.
A: I see. Okay, so Utkarsh is having trouble connecting. All right, so let's bump up Lee's topic to the top. Kara was getting ready, and Utkarsh is as well, so, good. So, anybody on the call for the first time? We've got Vishal, Shubham, Rudraksh.
A: Good deal. Do you want to say hi real quick?
C: Hi everybody, everyone. I'm Ruth; I recently joined two weeks back. Right now I'm working with Nikhil on a learning path topic, front end, and I'm learning Meshery and microservices, so I thought I should join this topic.
A: Perfect. So, Rotanjay, actually, as you go through those learning paths, at some point we'll have content specific to WebAssembly filters, so, good, you're paving the way for others to learn.
A: So the first topic at bat, then, is with respect to how it is that Meshery orchestrates filters, and Utkarsh is going to walk us through this end to end, so I won't explain how it works, because he will. Instead, what I will say is that Meshery is in the process of all of its adapters supporting management of WebAssembly filters. Most notably, it's the Meshery adapter for Istio that has the most advanced support for filter management.
A: Second to that... I don't think Sayantan is on, but Rudraksh is. Rudraksh, do you want to tell us about the status of, you know, basically pattern support, filter support, for the rest of the adapters? And maybe we'll find that folks like Ashish are interested in jumping in on that.
D: Yeah, so basically we are doing some work on generating pattern components automatically for every service mesh, so that would be where we would be handling all the filters. There was a recent call for volunteers for this; I can't find the link, so...
A: Okay, so I'm trying to put in a check mark and I can't find the stinking thing. But so: the Istio adapter is well on its way, and has been actively managing Envoy filters for a long time. Sayantan and Rudraksh have been adding that support to Open Service Mesh, as well as, I believe, to Kuma now as well.
A: These are the big five, but it does leave us with all the rest as well: the Citrix service mesh, NGINX Service Mesh.
A: Even outside of that, support for non-Envoy-proxy data planes, like Linkerd and NGINX Service Mesh... these too will still need to add pattern support, and if you're adding filter support, you're more or less adding pattern support at the same time. And so, on that...
A: You know, right... no, actually, none of them do. Yeah, you're right. You know, let's explain more on that point, and how to approach it, after Utkarsh walks through the filter management capabilities and how those are pattern-based. I think it'll make a lot more sense to everyone else, and then we can talk about how to ramp up this effort to complete it for the rest of the adapters.
A: So the approach here is something I want to talk about after Utkarsh walks us through it, because I think there are a couple of approaches to providing this type of support, and so it's worth reflecting on.
E: Yeah, actually, I wanted to demo, and I think the pods have been initializing for, like, the past two or three minutes. I can start, maybe, by showing the pattern, and let's see if the deployment is ready.
E: So this is the pattern that I shared earlier today in the wasm channel. What this pattern is doing, as Rudraksha was mentioning, is a bit Istio-specific right now; the name says so as well. That is, this particular pattern is Istio-specific, it says Envoy in it, so the implementation is also specific. What we are doing here is, first, initially, I'm applying...
E: Basically, I'm setting up Envoy, I'm setting up Istio, so that it can instruct Envoy where configuration discovery and those kinds of things are required, because this is how Istio actually sends the configuration to Envoy. So this is the first step that I'm doing, and after that I'm actually applying the rate limit filter. This rate limit filter is being grabbed from here, that is, the Layer5 organization; we have our repository, and there is a rate limit filter.
E: So basically I'm instructing it to go ahead and grab this particular wasm filter from this particular URL, with this particular configuration, and use that. I'm also instructing it to ensure that it first applies this generic Istio filter, which actually sets things up, and after that it will go ahead and apply this rate limit one. And if the deployment is ready, which it...
E: So what I should be able to do is run mesheryctl pattern apply with this particular pattern and my token... hope it works. Yeah, it said that it created these particular things that I asked for it to create. Let me check.
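The pattern file itself isn't captured in the transcript, but a Meshery pattern for a remote rate-limit wasm filter looks roughly like the following sketch. The filter name, URL, and configuration keys below are illustrative assumptions, not the exact file from the demo.

```yaml
# Hypothetical Meshery pattern (rate-limit-filter.yaml).
# Field names and the release URL are illustrative placeholders;
# consult the pattern shared in the wasm channel for the real schema.
name: ImageHubRateLimitFilter
services:
  rate-limit-filter:
    type: EnvoyFilter
    settings:
      # Remote wasm binary, fetched from the Layer5 filters repo (assumed URL)
      uri: https://github.com/layer5io/wasm-filters/releases/download/v0.1.0/rate_limit_filter.wasm
      config: |
        { "max_requests_per_unit": 10, "unit": "minute" }
```

It would then be applied, and later removed, with the CLI, as in the demo:

```
mesheryctl pattern apply -f rate-limit-filter.yaml -t auth.json
mesheryctl pattern delete -f rate-limit-filter.yaml -t auth.json
```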
E: So, though, I think the wasm filter on the Envoy proxy actually received the configuration; I'm not sure why it's not reachable.
E: So actually I'm using the ingress gateway, and the gateway should have been configured when I provisioned the application from here. When we provision the application from here, we also patch... we actually generate a gateway configuration as well. So what I would expect is that when I send the request to imagehub.meshery.io, it will go to this particular ingress, which will actually be directed to the actual service, which is not responding.
E: Because of the patch... so this is also a demo, probably, for the delete command: you can actually delete the pattern that you applied using the apply command, just by doing the delete. Let's see if it's accessible now, just to... okay. So it's...
A: Yeah, I know, and you have local hosts entries, and... yeah.
A: It probably shifted because... yeah, yeah, good. Actually, the demo is just, like, cream on the top, not strictly necessary, but it's to the point of discussing the fact that other adapters need enhancement to support filter management, as well as just pattern management in general.
A: I think seeing some of this is helpful context for everyone. And that's to say that, in each adapter, the purpose of having a mesh adapter for a specific service mesh is to have service-mesh-specific support. That seems pretty obvious, you know, in terms of the design, but to reinforce it: as there are individual operations and management tasks that each adapter can perform for each specific service mesh, not all service meshes are made equal. Some do different...
A: You know, they do different things, and, as such, each adapter needs to interface with those service meshes differently. It's hard to sustain supporting ten different service meshes, and we certainly want to hard-code as little as possible, because each individual service mesh changes over time. They have new releases coming out all the time, they're adding new features, and so we want to be as dynamic as possible in supporting their capabilities and exposing those capabilities to users.
A: The approach that we've taken is an acknowledgement that all of the meshes that we're looking at run in the context of Kubernetes. Some of them support workloads running outside of Kubernetes, but all of them minimally support microservices and, you know, containers running on Kubernetes. As such, most of them have taken a common design pattern of creating custom resources, so, Kubernetes CRs, custom resources.
A: They each come with different custom resources, hence the word custom. And whether they come with a custom resource, or a set of custom resources, or not, the service meshes themselves, when they are deployed onto Kubernetes, their deployment is described as a set of manifests, a set of Kubernetes manifests, which is a set of YAML.
A: Good. So, actually, if Meshery, and each Meshery adapter, analyzes the Kubernetes manifests for its respective service mesh, that adapter can dynamically extract information about what things are configurable for that service mesh, what custom resources that specific service mesh manages, and what the details of those custom resources are. So by parsing through that YAML, by working with that JSON and extracting schema definitions for those custom resources, the Meshery adapters can, more or less... you know, that's the way in which we don't hard-code.
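The extraction step described above, pulling a schema out of a CRD manifest, can be sketched in a few lines. This is a minimal illustration of the idea, not Meshery's actual implementation, and the sample CRD below is a made-up stand-in.

```python
# Illustrative sketch (not Meshery's actual code): given a Kubernetes
# CustomResourceDefinition manifest, pull out the OpenAPI v3 schema that
# each served version declares. This is the information an adapter needs
# in order to know, dynamically, what fields a custom resource accepts.
import json

def extract_schemas(crd: dict) -> dict:
    """Map 'Kind/version' to a JSON-Schema-like dict for one CRD manifest."""
    kind = crd["spec"]["names"]["kind"]
    out = {}
    for version in crd["spec"].get("versions", []):
        schema = version.get("schema", {}).get("openAPIV3Schema")
        if schema is not None:
            out[f"{kind}/{version['name']}"] = schema
    return out

# Minimal fake CRD, loosely modeled on a service-mesh traffic policy:
crd = {
    "kind": "CustomResourceDefinition",
    "spec": {
        "names": {"kind": "TrafficSplit"},
        "versions": [
            {
                "name": "v1alpha2",
                "schema": {
                    "openAPIV3Schema": {
                        "type": "object",
                        "properties": {"spec": {"type": "object"}},
                    }
                },
            }
        ],
    },
}

schemas = extract_schemas(crd)
print(json.dumps(sorted(schemas)))  # which Kind/version pairs were found
```

In practice the adapter would feed every CRD in the mesh's install manifests through a step like this, which is exactly the dynamic, non-hard-coded behavior being described.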
A: What operations an adapter supports should be dynamic, based on what service mesh is being deployed and what version of that service mesh is being deployed, and those are the operations that should be exposed in Meshery, in the UI. Okay. So, to sum up what I just said: Meshery manages a lot of service meshes. Each service mesh is complex; they each have their own capabilities; they each have their own versions. Keeping a compatibility matrix of support for all of those things takes a small army.
A: Fortunately, we have a very warm and welcoming community, what is currently a small village building toward an army that can sustain these things. But even at that, we want to be as intelligent as possible, and be excellent software engineers by being as lazy as pos... I mean, by being as smart as possible and not repeating ourselves in terms of the code that's written in each of those adapters. So each adapter... okay.
A: So, with that in mind, each adapter needs to grab a copy of the Kubernetes manifests that belong to its specific service mesh, run a utility against those Kubernetes manifests, and extract info about those managed elements. So Sayantan and Utkarsh and Rudraksh have been working on adding that extraction, you know, the process of extracting that information; they've been adding it to the build process for each service mesh adapter.
A: So that's what we were just talking about. Rudraksh was saying, well, there are, like, three adapters that this type of support is being built for. Good. That means there are seven others that need that type of support. So, one, we should talk about what that current model is and explain it to others, so that Ashish, who has volunteered...
A: ...I don't know how many times, to work on a Meshery adapter, will be able to do this, and anyone else that wants to do it as well. And I say this very jokingly, because he quite literally has never asked to do this, but I think that he's capable of doing it, so I'm going to volunteer him, and then anyone else that wants to do this as well. So there's one thing for us to do, and that's to go.
A: The second thing we should look at is a discussion around the need for Meshery, for an adapter, to understand how to manage a service mesh: building that in at build time of the Docker image that we create for the Meshery adapter.
A: But over time, as a new version of that service mesh comes out, you either need to download a new version of Meshery, or you might need the ability to run that same process that it does during build time at runtime. And so that would be our second conversation.
E: Yeah, just one last thing, and that was the caveat: all of this works, but this particular implementation actually relies on a feature which was introduced in Istio version 1.9.
E: So anything before that will... so basically, this particular approach is almost right, but in that case Envoy would start complaining, because it would expect a cluster to exist. That thing came into existence only since 1.9; on 1.8, this particular approach doesn't quite work. So... it will work; you can get it working by defining a custom cluster in here which has outbound access.
E: By cluster I mean the Envoy cluster, not the Kubernetes cluster. You can actually define a cluster in here, and then it will work, because in that case what will happen is that everything would be handled by Envoy. What Istio did in version 1.9 is, basically, even right now...
E: What happens is that Envoy reaches out to get the wasm binary, but actually the Istio agent intercepts that request and grabs the filter, grabs the binary. It can then do the SHA-256 check, which is the next step, but because we are not providing it here, Istio won't do that, though it can. And once it grabs the binary, it will give it to Envoy, so in that case Envoy doesn't need a cluster. But this was, again, introduced in 1.9, so, yeah.
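For reference, the Istio 1.9+ mechanism being described, where the istio-agent fetches the remote wasm module on Envoy's behalf, is driven by configuration along these lines. This is a trimmed, illustrative fragment of an EnvoyFilter patch; the URL and the sha256 value are placeholders, not the demo's actual values.

```yaml
# Illustrative fragment of the wasm filter config inside an EnvoyFilter
# patch. On Istio 1.9+, the istio-agent intercepts this remote fetch,
# downloads the binary, and hands it to Envoy from the local file system.
typed_config:
  "@type": type.googleapis.com/udpa.type.v1.TypedStruct
  type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
  value:
    config:
      vm_config:
        runtime: envoy.wasm.runtime.v8
        code:
          remote:
            http_uri:
              uri: https://example.com/rate_limit_filter.wasm  # placeholder URL
            sha256: "<hex digest of the .wasm file>"  # optional integrity check
```

On Istio versions before 1.9, Envoy itself performs the fetch, so `http_uri` would also need a `cluster` and `timeout` pointing at an Envoy cluster with outbound internet access, which is exactly the extra configuration discussed here.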
E: Yeah, so actually, in that case, right now, this particular approach, as I said, is only for 1.9 or later. That is only two versions, two minor versions: 1.9 and 1.10.
E: If you want to support anything before that, then we have to actually rely on Envoy's capability, in which case a lot more configuration would be required in here. This particular pattern, as it is, would work only on the latest ones. A few more configurations would be required in here, like the cluster, the timeout, and the SHA-256. After these three attributes, which are what is required, we can actually create patterns for those service mesh versions.
E: Let's say we detect that the service mesh that is running is actually the older one; we can actually do that. The only issue is that, because Envoy is being directly controlled by Pilot in this case, configuring a cluster on the fly is something, I mean... I'm not sure if we'd be doing that, but we can.
E: But if we do that, then we kind of start configuring Envoy at its core. But definitely we can, because of the default outbound clusters that it has, none of them can actually reach out to the internet by default. Actually, I was looking through the default configuration that Istio provides to Envoy, and I didn't find a cluster which has outbound access to the public internet, so that's our issue.
E: We can definitely reach out to Envoy and create a cluster, but it would be too much. Also, Envoy, in their code...
E: They have mentioned that they know there is an issue, that you have to have a cluster in order to support this thing, because this configuration just goes to Envoy almost as it is. So they have a comment in their code that they know this is an issue, that you need to have a cluster in order to get things working, and they've said in there that they will actually fix it in the future, so that you can...
E: They can't resolve DNS inline; "inline DNS" is what they called it. So probably in future Envoy proxies we'll have this support built in. But for now, either the way is to use Istio's latest feature, or to basically tweak the Envoy configuration so that it has a cluster which has outbound public access. Or the third is...
E: The third method is the method that we were actually using, and that was to basically patch the deployments so that they have a persistent volume, drop the wasm binaries into that volume, and configure the configuration to load the wasm from that particular binary. In that case, this particular thing becomes local, and the code actually resides in the local volume, which the proxy can refer to directly.
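As a sketch, the deployment patch being described, mounting a volume onto the workload so the sidecar can load the wasm module from a local path instead of a remote URL, might look like this. The names and paths here are illustrative assumptions, not the adapter's actual patch.

```yaml
# Illustrative patch: mount a volume into the pod so the sidecar proxy
# can load the wasm binary from the local file system.
spec:
  template:
    spec:
      containers:
        - name: istio-proxy
          volumeMounts:
            - name: wasm-filters
              mountPath: /var/local/wasm  # assumed path
      volumes:
        - name: wasm-filters
          persistentVolumeClaim:
            claimName: wasm-filters-pvc  # hypothetical claim name
```

The wasm filter config then points at a local file (roughly `local: { filename: /var/local/wasm/rate_limit_filter.wasm }`) instead of a remote URI, which is why this approach works on older Istio versions but depends on the cluster having a storage provisioner available.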
E: The reason this was not the first approach was because... so, the thing is that it will require processing on our side, but it would be the most generic solution to the problem. That is why we were, at least I was, trying to get to a solution where we can actually use Envoy's core functionality to load the wasm binary. That is the last approach.
E: I think it should work in most of the cases, unless and until someone doesn't have a storage provisioner or something running in the cluster. I think that would be the only issue; that would be the only time someone would run into an issue, if they don't have a CSI driver or something.
A: And that more universal approach of patching their deployment, by creating a volume and attaching a volume, that, I guess, falls in line with what Joshua, who's been on...
A: ...the call for a while, from Oracle. He's looking for permission to share their use cases, so hopefully next week. And that includes, like, direct management of Envoy, and then their indirect management of filters, which, you know, could again be done in a couple of different ways, like you were just describing. But that more universal way: do you anticipate conflict with later Istios, like...
A: If that CRD is the... would we then not be able to list an Envoy filter as an Istio custom resource, because it's going to want to configure this differently?
E: I don't think there would be any kind of conflict, unless and until someday Istio decides to remove this, or change this particular CRD entirely, because in that case what we are relying on is, again, Envoy core functionality, and that is to be able to grab the wasm binary from the local file system, which in the Kubernetes case...
E: ...turns out to be a volume, a persistent volume. Okay, so I don't think, unless and until someday Istio decides to remove or change this particular thing entirely, then definitely we would run into a conflict; other than that, I don't think there should be an issue.
A: Okay. And the current filter management that we have, the Meshery adapter's operation, it patches and adds a volume, correct? Okay.
E: Yeah. If you perform the operation from in here, it will actually create a volume. Doing it from here won't work right now, because there was some configuration issue that I encountered earlier today, which basically told me why there was an issue and what fixes are needed in Istio for this. But when you used to click in here...
E: What used to happen, what exactly happens, is that we actually get a volume, we patch the current deployment, whatever you have, and we drop the wasm binary there, so that the Envoy filter can actually use it.
A: Good, yep, good, good, good. I wonder how often it's the case that a given Kubernetes cluster wouldn't have, or we wouldn't be allowed, a volume to be created and locally attached, like a local volume specific to that node. Because this isn't... we're not talking about a PVC, a persistent volume, or, I guess, a PVC local to that node, maybe. Okay, yeah. I wonder how often we would run into that. It wouldn't seem like that would be a big deal.
A: Okay, good. That's great!
A: So, if we move on: any other questions for Utkarsh before we go to the next stop?
A: Yeah, it's good. So if you're sitting on the call and you're like, "I have no idea what's going on," then that's probably the majority of us. But you'll learn, or we'll get there. So part of getting there is looking more at how it is that the adapters are providing the support. So, Sayantan, do you want to take us through that?
G: Actually, this is the PR which I generated. So this is actually generating the workflow to generate the JSON schemas and definitions for the OSM components.
G: So we are generating a workflow to generate those JSON schemas and the definitions. So this is the workflow; actually, in the GitHub workflow, I am adding this file update, and under that we are performing these checks. Like, if you see this one, these are the things that are going on inside: check the OSM version, and generate and push the definitions. So, by checking the OSM versions, there are these six steps which you will get over here. So, yeah, you can see.
G: First, we are checking the OSM version from the... actually, this version is...
G: This first job is actually to check the versions of OSM, and once we are done with this, then the actual steps start by...
G: ...what Lee was talking about, like the CRDs. Now, if you want to check out the OSM CRDs, they are over here, in the OSM repository, under the charts; we have all these CRD files over here. So actually we need to collect all these files and get them all here. So we did it like... there was not a proper CLI command to generate those manifests like in Istio.
G: Then we just merged all the OSM YAML files together into this osm.yml file. Next, we are using this kubeopenapi-jsonschema tool to convert the OpenAPI to JSON schemas. Actually, this is a Node-based CLI, and here is an example of how we use it; so we are using it over here to get those schemas.
G: So, actually, these are the workloads, and these are the JSON schemas and their definitions. These are generated after running that command.
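The workflow being walked through can be pictured with a sketch like the one below. This is an illustrative GitHub Actions fragment, not the actual PR; the repository paths, step names, and tool invocation are assumptions based on the description above.

```yaml
# Hypothetical workflow sketch: fetch OSM CRDs, merge them, and convert
# each CRD's OpenAPI schema into a JSON schema plus a component definition.
name: Update OSM component definitions
on:
  schedule:
    - cron: "0 0 * * *"   # periodically check for new OSM versions
jobs:
  check-osm-version:
    runs-on: ubuntu-latest
    steps:
      - name: Look up the latest OSM release tag (via the GitHub API)
        run: |
          curl -s https://api.github.com/repos/openservicemesh/osm/releases/latest \
            | jq -r .tag_name
  generate-and-push:
    needs: check-osm-version
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Fetch CRD manifests from the OSM charts directory
        run: |
          # (assumed path) concatenate the downloaded CRD files into one YAML
          cat crds/*.yaml > osm.yml
      - name: Convert OpenAPI schemas to JSON schemas
        run: |
          # Node-based converter mentioned in the meeting; exact flags assumed
          kubeopenapi-jsonschema --location osm.yml -t yaml -o schemas/
      - name: Commit the generated schemas and definitions back to the repo
        run: git add schemas/ && git commit -m "chore: regenerate OSM schemas" && git push
```

The same shape repeats per adapter; only the source repository and the merge step differ between meshes.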
A: Good, good, good, thanks for this. I'm sensitive to the fact that some of us on the call might be lost already, and so I figured I'd interrupt and ask, like, hey, who's got questions on this?
A: I will say this: you don't get this kind of opportunity on most other CNCF project calls, so it's a good one to learn.
G: Can I try, like, in one word, what is actually going on? So, actually, in one word, if we want to see: we are just generating the manifest files from the respective adapter's GitHub, and after that we are building the JSON schema, and then, after that, we are generating the workload definitions.
A: Cool. Ashish, you're up, man. Can you answer my question? "Yeah, what question?" Can you characterize that workflow and what it's doing?
B: Yeah; he did not complete it. Up until when I was following him, he was downloading all the YAMLs, and after that I kind of lost him. So he rounded up all the YAMLs, and then he merged them into one YAML, and from there... okay, so there's this action that actually builds that particular JSON schema for it.
A: Yeah, that's it, that's actually the thing. I mean, yeah, good. Which is to say, this fires off at build time, when we're building the image, like building the Meshery adapter, and it kind of does those two things: it grabs the installation manifests, for Open Service Mesh in this case, runs a utility against them to pull out the JSON schema definition of those custom resources, rejiggers them by echoing things into a different format...
A: ...applies some jq, and persists them back to the repo. And, like, great, that's more or less how simple it actually can be. And so, good: we need to take this, copy-paste it to other repos, and then swap out where it says Open Service Mesh CLI and swap in, like, Kuma CLI, or Linkerd CLI, or, you know... Now, that said...
D: No, the OSM CLI doesn't have any way to get those manifests, so we are using the GitHub API to fetch those chart CRDs from the source repos.
A: Cool. All right, good, this is perfect. This is going to line up my argument very well, which is: we do want, for the Meshery adapters, when they ship, for them to be capable of managing that service mesh. So they should have, and understand, these patterns, right? They should have these schemas in hand, and hopefully for a number of versions of that service mesh. Good. That way, if you're running in an air-gapped environment, you can just start managing meshes.
A: The bash that we have here, this workflow that we have here, is... well, we're repeating ourselves, aren't we? Which is to say that in the adapter itself there's an operation to install a service mesh, in every adapter, right? It's pretty universal. What does that operation do?
A: It goes out, downloads the manifests, just like this, from GitHub, and then applies them to Kubernetes. And so, actually, if we augment that operation to download the manifests, which it already does, but then also run this utility and, you know, parse through, do the things we're doing right here, then not only could we invoke that at build time, because we can just invoke that function from here in this workflow, but we could also invoke it during runtime.
F: We could just extract all those bash commands into a script, even if we duplicate the script, and just use, let's say, two parameters for the source location, the folder, yeah. Let's say, at the beginning, we have, like, four scripts, I don't know: grab sources, run utility one, run utility two, and serialize it all back together, or something like that.
A: Also, yeah, no, yeah, good. Actually, so, that approach... I'll take that thought that you just said and shift it a little bit to something like this. Well, the, oh gosh, what's the word, the sophistication of the thing that you just said is that, well, you could actually just run one workflow that parallelizes this task for, excuse me, all the supported service meshes. So you could just go, you know, in one swoop: grab them all, extract them all.
A: You know, get the info you need. You could also then build in support for different versions of each of those service meshes, because that schema, that JSON that gets extracted, might be a little bit different between those versions, and so, yeah, you could sort of centralize that and have it... there's, there's...
A: So, no, but the reason that that's not being done... I mean, there's value to that, and what I was trying to highlight is the fact that what you've said is an improvement upon what's being done here, and there's merit to it, and it does have a place, and potentially is something to do. To Adina's point, there's one portion of this set of scripts, currently, that is written in... remind me?
E: It uses nexe to convert it into an executable, because the idea was that probably the Golang code can use that executable internally, so that we can do all those things basically during runtime. So nexe is being used to generate an executable.
E: Good... I didn't realize that. Actually, what Rudraksha was saying, that is actually a point. As Rudraksha was saying, all we have to do is make releases, so we can use that CLI, that is, the kubeopenapi-jsonschema one. Right now, what it's doing is, over and over, cloning and then creating the binary during the workflow, and then using that particular binary to generate the JSON schemas.
E: What we can do is just release the binaries, so that we can use them either in the workflow or...
E: So we can use it either in the workflow, or, probably, maybe download it directly from Go. So, every time when it registers, what we can do is just grab that binary, do these things in Go, in the adapter, and then send those JSON schemas for the registration process.
A: Awesome. So, good: who is going to continue to work on this?
A: Yeah, good. Sayantan and Ashish, good. Okay, guys, so we're going to... and then whoever else wants to give this a go. There's a couple of things we need to change around, and so please, you know... to what Adina was validly doing, which is presenting an alternative: please present an alternative if what I say just now doesn't resonate. So that is to...
A: Like, see, here's the thing: when this runs, we then persist these patterns, these components. But conceptually, that only has to be done once for that version of that service mesh.
A: And so, you know, in concept, if we go run this for all of the service meshes, to Adina's point, then we're good at supporting all those other versions anyway. The problem is that new versions will come out, and so then we'll need to support those, potentially on the fly.
E: Yeah, it does get recorded, but right now the Meshery server loses that information, and that's something on me to do.
A: Got you, okay, gotcha. By the way, just as something else for all of us to look at, maybe, as we talk through this: there's an architecture deck, a Meshery architecture deck, that will visually walk you through...
A: Yeah, we absolutely have to find that slide. Okay, anyways. So, guys, as we go to wrap up here, because we're at the top of the hour, there's a couple of things to do. One is to make sure that, as the JSON schemas are extracted, they're versioned. Today, are we doing that with just a... oh yeah, not with a folder name? Are we doing it with the folder and including that in the JSON itself?
A: Yeah? Okay, good. So that's perfect. The other thing that we would ideally do is, you know, transition this out of... well, leave the workflows as-is, but build in this capability as a runtime operation as well, into the adapters, and then, in the workflow...
A: ...instead of running the bash scripts, we would invoke the Golang capability in the adapter itself, which would mean, you know, potentially spinning up an adapter and using gRPC to invoke that operation, or using whatever to invoke the operation.
A: You can see examples of how that's done with Consul, Meshery's Consul adapter; there are workflows that will spin up the adapter...
A: ...and invoke its gRPC operations, I think under tests, and it's using BATS here to do these tests, and using grpcurl to invoke those commands.
A: So that would be relatively easy to do. And so, Tanjay, if you're looking for a link to the presentation, the link is in the chat itself, and if you can't access it, then you'll want to fill in your community member form, and you'll get access that way.
A: Yep. So, Utkarsh, is that true, what I said, that in the workflow we would have to spin up an adapter and have it running in order to invoke its operation?
E: So, if you are doing it in the workflow, then yes. But probably we can, instead of doing it in the workflow... we can do it when... so, right now...
A: We could, but the only purpose... what justifies doing that is actually including a copy, a full copy, of all of that service mesh's installation files: not its containers, but all of its manifests, and its CLIs if needed.
A: Otherwise... Rudraksha's is a great suggestion. The reason that it's not justified to have a separate one for the JSON schemas is because they're so small anyway. From here on out, we should start supporting the current version and onward, and persist that in the image; since it's small, the footprint is very small, right?
A
Yes, we want to support that, and we do. But most likely, what will happen is there'll be instructions that say: okay, here's how you configure the adapter to point not to github.com or Docker Hub, but rather to your local registry, and it will retrieve containers from there. An alternative approach is to have them run Meshery Cloud on-prem and have it be an Artifactory — have it serve up artifacts, you know. And so—
A
Good. So we already have support for multiple versions. Building this into the runtime in Go is a good goal. It doesn't mean that we have to remove the workflows that have been written today — those should be merged and run, and we should then provide pattern management and filter support for at least those three service meshes that we have currently. If that workflow is easy enough to copy and paste to two or three of the other adapters, I think that's fine — that we would add the initial pattern support in the same fashion that we've just done. I think that's okay; that way those become manageable through this service mesh pattern mechanism, and then over time we can build in runtime support in Golang for doing what we're just talking about.
A
I know we're over time, everybody. If you've got a conflict, feel free to drop, but otherwise I'd love to see what Kara has done.
D
So for the filters, I have added a multiple-selection option within the rows itself, so we can select many rows at a time and just delete them — although, apparently, I think I should not delete all of these, since we're on the live instance. So yeah, let's get these deleted. That was a quick update from my latest PR. Talking about the previous update, which I was supposed to demo in the prior version itself: I updated the actions over here with just some additions to the icons. And one thing I want to ask about is the deployment for the filter. Since we are using the pattern format here in filters also for deployment — the deploy endpoint that we were using in patterns was `/api/experimental/patterns` — should I also add this endpoint to the filters, or will we have separate filter endpoints for that? Because currently I haven't added the patterns one, so the deploy won't work for now.
D
Say that last part again — should you add what? Should I add the patterns endpoints for the deploy functionality over here? Because we are using the patterns for the filters.
A
Okay, so a given pattern might describe a filter and highlight the fact that — so yeah, okay, now I think I get it. As a refresher, just for me as well: the patterns here, these are YAML manifests. That's basically what a pattern is.
A
What we're dealing with here is not manifests so much as primarily binaries. Yeah, there's a little bit of configuration that goes with it, but it's primarily a binary. And so, would it make sense to — that play button that you have: if people click it today, what does it do?
A
Yeah, I don't know. In the future that might be appropriate; at the moment, we might leave this as just binary upload — you know, CRUD on these binaries — and the application of them to a mesh, maybe we would leave that in the patterns UI and also in the actual service mesh management UIs.
A
It's not that applying or deploying filters from here is inappropriate — yeah, sure. It's that it's a little bit harder of a user experience to design, and clearly we're not there yet. So maybe just try to get it correct in terms of how that experience happens in the patterns area, and then once that's done, maybe come back and ask this question again.
A
Okay — from this starting point in the filters interface, does it make sense to let people apply these? One thing that does make a ton of sense here, that would be really interesting to see: great, so we've got these three different filters, and we can manage their lifecycle; we can upload them to Meshery so that Meshery can deploy them — you can deploy them using a pattern. Good. But, well, geez, I can't remember: is that test filter being used anywhere?
A
Is that actively deployed? Gosh, I can't remember. I wish Meshery would tell me that. And so that is certainly something that is relevant and appropriate to expose in this UI.
A
And it's something to go look at in the current MeshSync database: if you've got a pattern deployed, is that data readily available? And then, great, if so — what about the relationship that gets created, the quote-unquote MeshSync ID?
E
Yeah, yeah. Because I anticipate that filters will eventually — like they do — use the patterns workflow, that patterns work is going to create an entry; MeshSync is going to send the data to its database. So yeah, there will be the relation. We can figure out from there whether a certain thing is actually present in the cluster or not.
A
Good, okay. So it comes down to, in this case, a GraphQL resolver running a query to show the active number of deployments.
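For illustration only, such a resolver's query might take a shape like the following. The field and type names here are invented for the sketch, not Meshery's actual GraphQL schema:

```graphql
# Hypothetical query: count the active deployments associated with a filter,
# using the MeshSync-recorded relationship. All field names are invented.
query ActiveFilterDeployments($filterID: ID!) {
  filter(id: $filterID) {
    name
    deployments(status: ACTIVE) {
      count
      clusters {
        name
        namespace
      }
    }
  }
}
```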
A
Okay, yeah, okay. So we'll catch up on this same call afterward. Everyone's welcome to stay if you want to see how to make a release, but otherwise it sounds like that's a wrap. That's good — it's been a healthy WebAssembly filters call.
A
Okay, thanks everybody. Yeah, okay.
A
So yeah, that's great. We have examples of our approach to that using Istio — I think there are three or so operations. One is enabling or disabling mTLS. Another one is enabling or disabling automatic sidecar injection.
E
So with those approaches — for example, automatic sidecar injection, which is just labeling, and that gets dispatched — the issue we will encounter is with configuring Envoy on the fly.
E
If there are no CRDs through which Envoy is going to accept a configuration from upstream — the upstream which basically configured it initially — the issue would be that if there is no CRD to communicate, via the Kubernetes API, to Envoy, then I would have to actually look at how we can instruct the control plane of that particular mesh to pipe that configuration to Envoy. The reason we cannot use those previous approaches is that they rely on patches.
A
Yep, makes sense. I don't know that everybody else — like the way that you... but yeah. So Rudraksh, there are a couple of general approaches to how you configure applications that run on Kubernetes. Sometimes applications will use labels — key-value pairs — and annotations within the manifest files used to run that workload, that application, on Kubernetes. Some use custom resources; some use annotations and labels inside of manifest files. And so, for those service meshes that don't have everything described in a custom resource, yeah, we'll have to have a common approach to manually describing those — we'll have to create our own JSON schema to describe them.
A
But we already have a bit of a model for what that looks like. If it's just applying a label to a namespace, that's one way of configuring how a mesh works. If it's patching a manifest, or adding an annotation to a manifest, that's another way — and that's actually a very common approach. We'll find that we need some of those for every mesh, and some of those will just have to be hard-coded, in that sense. And generally that will be okay: generally, those are permanent, long-lived considerations that a service mesh has chosen. They chose not to use a CRD but to describe their behavior over here, and generally they don't shift away from that. Sometimes they do — Consul is an example of a mesh that's shifting away from some of its configuration requirements in terms of what gets added as an annotation. So, is that satisfactory, Rudraksh, in terms of — yep.
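As a concrete illustration of those two non-CRD styles — a sketch, not from the call, though both conventions shown are the real ones used by Istio and Linkerd — a namespace label versus a pod-template annotation looks like this:

```yaml
# Istio: label the namespace to opt all of its workloads into sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
---
# Linkerd: annotate the pod template of an individual workload instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: demo
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
        - name: web
          image: nginx
```

A schema that Meshery defines for such meshes would essentially describe which of these labels/annotations exist and what values they accept.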
D
I guess I've seen this happening with Consul — I might have worked with this earlier, I guess.
A
Yeah, that's right. Okay, cool. Thanks for bringing that up. All right, well, we're definitely over time. I'm going to say thank you, everyone, for coming. I'm going to stick around for another few minutes to show how to do a release so that others can do that, but otherwise we'll catch you all next Monday. Thank you all.
A
So this process of making a release is universal for us — in our approach, it's universal across all of the repos.
A
And we have unit tests and integration tests that are performed as part of a release. Whatever artifacts are published are pushed to different distribution points — most notably Docker Hub, or integrating with Scoop for Windows or Homebrew for Mac and Linux — and sometimes the artifact repository is GitHub. In this case, for this particular artifact, what we'll end up with is a binary, kind of like mesheryctl, which is just a small binary, and what we'll end up doing is releasing that binary on
A
GitHub. So, okay, there are a couple of things to be concerned with. One is under the assumption that we have a workflow that builds — you know, compiles this utility — and produces multiple binaries for different architectures. Right now, it's not horrifically concerning that we would produce multiple architectures, because for the most part the reason we're producing the binary is to feed and be used by Meshery adapters.
A
So whatever architecture those adapters are uniformly using, that's probably the architecture that we want to go for. And basically, you guys already have that segment of the workflow we were just looking at earlier — you already have a segment of a workflow that builds the thing and produces the binaries — and so, unless Rudraksh or someone else on the call has a different perspective as to how those should be built—
A
Then you'd use that same section of the workflow. What we'll probably want to do is create a new workflow file that's for purposes of building and releasing — not just building, which is kind of what this one is for (it's poorly named), but the other one would build and release, and the release part is that the artifacts need to be published on—
A
What you end up doing is: okay, it's time to make a release, we're good, we're ready. Today you won't have the permission to see this, but tomorrow or later today we'll get that changed, such that you'll be able to come in, click edit, and click publish — that's actually how you make a release. That's it: edit, publish. A workflow will kick off based on this release event, compile the binaries, and then deposit the binaries.
A
Right where we were — right down here. So the actual act of making a release is extraordinarily easy: it's two button clicks. And it's intentionally left manual, because we're not automatically releasing all the time — although we actually do make automatic releases of Meshery.
A
That's the Meshery edge release: every time a PR is merged, a Meshery release happens on the edge release channel. On the stable release channel, a stable release of Meshery is only done when someone comes in, manually versions it, and releases it. There's a lot of automation behind what's happening here — this whole thing is auto-drafted, this number is auto-created. Sometimes it's not appropriate, as in this case.
A
I'm going to make this release — not to steal your glory or take away the button clicking from you, but to confirm whether or not the build process runs. Okay, see: there's no build process running. So that does mean that there needs to be a new workflow written. And that does answer the question about the current release: the current binaries that are out there were hand-generated by Utkarsh and then uploaded.
G
Yeah, we need to create a workflow to run this thing. Yep — this is missing in this repository.
A
Looking at other repositories and how they trigger off of a release event — it's just copy and paste from a couple of different workflows to get to the point where you're persisting the artifacts as part of the release.
A
Well, the thing is, even though I technically made a release, it doesn't actually have the binaries there. GitHub will automatically zip up the source code — the contents of the repo — and attach that, but these are not built into binaries. So the only available binaries are those that are right here. And of the change that went through: Anshu had helped — he made a change in the repo, but it isn't material to the way this utility works.
A
I think what Utkarsh was just acknowledging is that we don't have an automated release going on, so the next time that we do make an update here, we won't be able to easily go grab the binary. Because normally, what you would do in the workflow that is built into the adapters—
A
You would say: okay, go grab the latest copy of the binary that's appropriate for that Linux-based environment — probably this one. Go grab the latest copy. You can pin it to a certain version, but then that workflow gets fragile, so you'd always want to be on the latest. And so we're missing a workflow. It's not the most urgent of things, but it is needed.
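A minimal sketch of that "always grab the latest" step, assuming the binary is published as a GitHub release asset. The owner, repo, and asset names below are placeholders, not the real ones; the `releases/latest/download/` redirect is a standard GitHub convention that always points at the newest published release:

```shell
# Build the stable "latest release asset" URL that GitHub exposes for a repo.
# Owner/repo/asset names are placeholders for illustration.
OWNER=layer5io
REPO=example-wasm-filter
ASSET=filter-linux-amd64

URL="https://github.com/$OWNER/$REPO/releases/latest/download/$ASSET"
echo "$URL"
# An adapter build workflow would then fetch it, e.g.:
#   curl -fsSL -o "$ASSET" "$URL"
```

Pinning instead would mean replacing `latest/download` with `download/<tag>`, which is exactly the fragility being described.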
A
This makes sense. The only thing I think that isn't copy-and-pasteable is probably this right here, but this is tried and true — there are all kinds of other workflows that do this. Otherwise, the rest of this you can copy and paste from other workflows that we have, like this one.
A
Doing the honors, if you will, of clicking the button.
A
Nice, okay, cool. Well, it's nice to see all you guys — I'll catch you on Wednesday's call.