From YouTube: Istio Analyzers Workshop Meeting 2019-09-30
B
Great, so now that we're properly recording: yes, so we're going over how Istio analyzers work, and the goal of this meeting is to walk through the architecture and the process of building them in practical terms.
So what we're looking at here with this diagram: it shows how we have our inputs, based on what's on the Kubernetes API server, or YAML files, or a combination of the two. What happens is we combine those into a resource snapshot that we then pass to the analyzers, and the analyzers can then use that snapshot to look at resources and the relationships between those resources, and use that to generate the validation messages that are the output.
So this is cool because we can use it not only in the command-line case, but also to actually update status fields on live resources. We have this functionality coming down the pipe as well, and it gives us a unified library that can handle multiple cases for a number of different things. All right, so let me jump over to looking at...
B
An example of an existing analyzer: so here is a fairly simple analyzer that implements this API, and it's pretty straightforward. We have an interface to represent an analyzer. It needs to have a Metadata method that returns information about the analyzer, and an Analyze method that actually does the analysis. The important thing to note here for Analyze is this context object, and this is important for two things.
B
OK, great. The other important aspect of this is the metadata, where for each analyzer we're defining a name but, even more importantly, we're defining the set of inputs that it uses. This is an important property, because it lets us be smart not only about which resources we actually gather, but also about which analyzers we choose to run in different contexts.
B
So, for example, if we have an analyzer like this one that looks at virtual service destinations, it cares about things like what services are visible; that's the hosts field on the virtual service destination entries. In some circumstances that may not be available: for example, if you are running Galley with service discovery disabled, or maybe you're running with files only and you aren't including files that define all your services. In which case it doesn't really make sense to run that analyzer.
B
So having this set of inputs lets us say: OK, we know we're running in a context where we aren't going to know about these service entries; therefore we can skip over this particular analyzer. I've actually got a PR that splits this so that it's more granular. In general, it's a good idea for analyzers to be granular based on their required inputs, so that in this particular instance, for example, we can say: OK, we're going to skip over checking those host entries, but we still want to check the destination rules.
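The input-driven skipping described above could be sketched like this: each analyzer declares its required input collections, and a runner drops any analyzer whose inputs aren't all available in the current context (for example, a files-only run without service discovery). Names are hypothetical.

```go
// Sketch of filtering analyzers by their declared inputs. Illustrative
// names only; the real framework keys inputs by collection schemas.
package main

import "fmt"

type AnalyzerMeta struct {
	Name   string
	Inputs []string
}

// Runnable reports whether every declared input is available.
func Runnable(meta AnalyzerMeta, available map[string]bool) bool {
	for _, in := range meta.Inputs {
		if !available[in] {
			return false
		}
	}
	return true
}

// Filter returns only the analyzers whose inputs are all present.
func Filter(all []AnalyzerMeta, available map[string]bool) []AnalyzerMeta {
	var out []AnalyzerMeta
	for _, m := range all {
		if Runnable(m, available) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	analyzers := []AnalyzerMeta{
		{Name: "virtualservice.DestinationHosts", Inputs: []string{"virtualservices", "serviceentries"}},
		{Name: "virtualservice.DestinationRules", Inputs: []string{"virtualservices", "destinationrules"}},
	}
	// Files-only run: no service discovery, so no service entries available.
	available := map[string]bool{"virtualservices": true, "destinationrules": true}
	for _, m := range Filter(analyzers, available) {
		fmt.Println("running", m.Name) // only the destination-rule check runs
	}
}
```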
B
So you notice the synthetic service entries; let me jump back to where we're defining the resources available in our snapshot. There is a metadata YAML that defines all these resources, all the different types that we care about. And one important thing to keep in mind is that the analysis we're doing is based on proto objects: we're not looking at the YAML directly, we're looking at proto objects.
B
We have that defined here, and there is... it's kind of out of scope for this particular session to get too deep into that, or maybe it isn't, I don't know; you guys let me know what you think. But there is a transform object that takes as inputs some of the Kubernetes resources, and creates as output these synthetic service entries based on the actual pods and services and deployments and so forth.
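The transform described here could be sketched as a function from Kubernetes-level resources to platform-independent synthetic service entries. The field names below are simplified guesses for illustration, not the real ServiceEntry schema.

```go
// Rough sketch of a transform: Kubernetes services in, synthetic
// service entries out. Field names are simplified for illustration.
package main

import "fmt"

type KubeService struct {
	Name, Namespace string
	Ports           []int
}

type SyntheticServiceEntry struct {
	Host  string
	Ports []int
}

// Transform converts each Kubernetes service into a synthetic service
// entry keyed by its cluster-local DNS host.
func Transform(svcs []KubeService) []SyntheticServiceEntry {
	out := make([]SyntheticServiceEntry, 0, len(svcs))
	for _, s := range svcs {
		out = append(out, SyntheticServiceEntry{
			Host:  fmt.Sprintf("%s.%s.svc.cluster.local", s.Name, s.Namespace),
			Ports: s.Ports,
		})
	}
	return out
}

func main() {
	entries := Transform([]KubeService{{Name: "reviews", Namespace: "default", Ports: []int{9080}}})
	fmt.Println(entries[0].Host) // reviews.default.svc.cluster.local
}
```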
B
One other thing that does bring up: because we're doing this kind of post-translation out of Kubernetes space, because we want this to be as generic as possible so that it works for non-Kubernetes things as well, in general the analyzers should, where possible, try to be independent of Kubernetes and independent of Kubernetes objects.
B
That said, there are definitely cases where there's a lot of value in doing some maybe more Kubernetes-specific checks: you know, they're valuable to the customer, and yes, they may not apply everywhere, but they're still worth having. So that's allowed. Again, this is another place where registering the inputs is important, because it means that if, in the future, we have non-Kubernetes stuff coming in and we want it to work, then we can just skip the Kubernetes-specific analyzers. Lucas, did that answer your question? Yeah.
A
I was just wondering, because in Kiali we have a large number of validations. It's the same funnel, as I said, but then we also combine with Kubernetes objects. Really, we could just start with some types to decide whether to migrate to these analyzers or not: you know, the same use case that you presented, just involving your objects. And probably what we can do is try to collect what we would need from Kubernetes to analyze. For example, I'm thinking...
A
Yes, imagine that you have a destination rule. Something that really tries to add value is: OK, this destination rule is targeting a deployment that doesn't exist, because of, you know, its labeling or things like that, or whether we're going to reach this service, right? And I was wondering... yeah, as you said, that probably could be a phase two or a next step, right? Yeah.
B
You know, it's using this lookup table that's been created, but the lookup table is based on the service entries. So there's two things: there's the manually defined service entries, and then there's what's actually there based on Kubernetes; that's the synthetic service entries. So let me find where I'm referring to this... I can type.
B
There we go, OK, right. So we have check destination host: it's looking at the manually defined service entries, and then it's looking at the synthetic service entries. So we do have some capability to handle the Kubernetes-specific stuff, and the reasoning behind having the synthetic service entries is that it's a platform-independent layer for getting these services. Does that make sense? Yeah.
B
Yes. So what's happening is: the analyzer framework is built in Galley, so it can run as part of Galley; that's the case where we're updating the status field on resources that I mentioned before. It can also run through the command line, but what the command line is doing is spinning up bits of Galley and plugging them together.
B
Because Galley is listening to the Kubernetes API server for any updates or changes, so as, you know, new services come online, it would turn those into synthetic service entries. In the command-line case, we're just getting a one-time snapshot rather than continuously watching it. OK, but even in the live case...
B
One of the things we recognize is that the way we've done it so far may not be the best way. So please don't assume that we're set in our ways at this point; there's definitely room to change the practices and the way we're going about building these things. I guess the basic interface, the API interface where we have our analyzer interfaces, is fairly solid at this point, but even that we can change if we end up needing to.
B
The other things I wanted to point out: OK, so the analysis framework, as it stands, is primarily aimed at multi-resource validation. If you have a case for single-object validation where, for some reason, it doesn't fit in the existing single-object validation code or options that we have, then that can go here too, and that does make sense.
B
So an example is an analyzer Ed just wrote: our single-object validation that we run with the webhook and so forth doesn't allow deprecated fields, or rather, we don't have any warning-level output there. So we couldn't stick a check for deprecated fields there, where we want to merely warn the user rather than blocking them from applying things; we couldn't put that there. So Ed created an analyzer for it, which is great. So we have that now, and we can add more stuff like that as it makes sense. All right.
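The deprecated-field check described here could be sketched as follows: unlike an admission webhook, which would reject the resource, it emits warning-level messages so the user is informed but not blocked. The field names and message format below are made up for illustration.

```go
// Toy version of a deprecated-field analyzer: it warns, it never blocks.
// "percent" as the deprecated field name is a hypothetical example.
package main

import "fmt"

type Config struct {
	Name   string
	Fields map[string]bool // field name -> is it set?
}

var deprecated = []string{"percent"} // hypothetical deprecated-field list

// CheckDeprecated returns warning messages; it never blocks the apply.
func CheckDeprecated(c Config) []string {
	var warnings []string
	for _, f := range deprecated {
		if c.Fields[f] {
			warnings = append(warnings, fmt.Sprintf("Warning: %s uses deprecated field %q", c.Name, f))
		}
	}
	return warnings
}

func main() {
	cfg := Config{Name: "reviews-route", Fields: map[string]bool{"percent": true}}
	for _, w := range CheckDeprecated(cfg) {
		fmt.Println(w) // Warning: reviews-route uses deprecated field "percent"
	}
}
```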
B
Or are you trying to walk somebody through it? So what I was kind of hoping for is seeing if we could, as a group, walk through this. So, I should point this out: there is a README under the Galley config analysis package that walks through, hopefully, everything you need to know to create an analyzer, and it's got some example stuff.
B
If you guys agree that this would be a good use of time, then we could have somebody drive and try to walk through just building an analyzer together. If you'd rather just poke at it on your own, then you can follow up with any questions or concerns that come up; I'm okay with that too.
A
Do you see... nowadays, one of the features that we started introducing in Kiali is validations that are summarized at the namespace level. For example, one of the features that Kiali offers to the user is that the Bookinfo namespace may have TLS enabled at the namespace level, and, for example, there may be something that is not fully defined: say the TLS configuration is present, but, you know, for the destination rules we need also to enable TLS at the namespace level.
A
My question is: for example, in this kind of situation, where should we look for the status? Because, from the logic perspective of the user, it's at the namespace level, but it's Galley that is running the analysis, and Galley needs to print out some kind of result in the status, either on a namespace resource, or on a TLS object on a destination rule object, or both. So yeah, my main question is, okay...
A
How do you envision that? For example, this kind of validation could involve several resources, but probably the namespace would be a nice place to locate them. Have you thought about that, or do you have some idea, or is it something that, you know, is more for the future?
A
Yeah, say somebody decides that we want to enable TLS in Bookinfo, right: we create the configuration, but we need to provide, you know, the destination rule, right. Then a nice thing, probably, would be just to update the status on the TLS configuration, something like: okay, this TLS is missing a destination rule configuration, or something like that, right. But I wonder, you know, if this may happen when we have several resources, for example. And if that may happen, also we want to add the status at the namespace level.
A
Yeah, I know that this is something that is breaking the rules, because, you know, it's trying to have some nice status out of a set of resources, but probably that is what the user would look for. So yeah, I know that these are probably complex questions, but this is something that in Kiali is easier, because what we do is provide an API that models the dynamic state, right, and the resources will respond to that. So I was wondering how to try to add this kind of logic.
B
Okay. So if we wanted to do namespace-level validation, we can in fact do that, where we say: okay, this namespace is, essentially... we could say this namespace is misconfigured, or we think you should change something about how this namespace is configured. I think we may actually have that here. So, David...
B
That particular detail is something that we are aware of; we're thinking about it. I don't think anybody has actually worked out the details of whether we can attach a status field directly to the namespace resource, or maybe we need to have a special CRD object that just has status fields for resources that we can't attach a status field to directly. We've got a few different ways.
B
That's it, I guess. Another feature, I guess, that I have on a list: we don't currently have a way to restrict analysis to a particular namespace, but that's something that we can and should add. Right now, if you're running as part of Galley, if it's running live in Galley, it will be paying attention to whatever Galley is paying attention to.
B
So, okay, going back to: do we want to try to walk through building one? Or, if you guys think that isn't the best use of time, I'm okay with doing something else. I know Ed and David have both been working on building analyzers, and so you have very recent, fresh experience and feedback that maybe we could talk about, or maybe things that you ran into.
E
I mean, I was successful in writing one analyzer. I didn't come prepared with feedback, so you can take me... Oh yeah, that's fine. Yeah, I mean, generally, I was successful. It took me a little while to wrap my head around it, but I thought the process was relatively palatable.
B
We don't have, I guess, a canonical list of the specific validators we want. I have so far been kind of trying to take inspiration from what already exists in Kiali and istio-vet; those are things that have basically proven their usefulness to some level. So how much of these do they have covered already, or not?
B
I've got this handy doc that I wrote a little while back that just kind of surveys the existing space of tools, and so here's what istio-vet covers today; yeah, that's about eight or nine analyzers. And some of these we've already sort of covered. Is Niraj in the room? Unfortunately not, he couldn't make it. Okay, but he was going to watch the recording. Niraj...
B
Now, there we go. I actually have a... it's probably not as nice as yours. Let's... oh, and look at that link again; I know I've looked at that link before. What I found a good summary, when I wrote this doc, was just kind of the list of error messages that Kiali can output, and there's a lot. I mean, that's pretty comprehensive, and it covers a lot of really useful stuff.
B
Okay, well, I'll go ahead and take that on: I'll create the epic and try to initially populate it with some stuff. I will probably not get it right, and there will probably be things that need to be added or removed from that list. So, once I've got that in place, I will request everybody's help in making sure that we have the right list of things: that, you know, if there's something where it's like, yeah, we don't really need to do that one, or here's this other thing that we should absolutely be checking, let's make sure those get on there. It also seems likely that not all of this will make it into 1.4; we can't build all of this, I think, in that short an amount of time, but I think we can make a good start at it. So thoughts on priority would also be welcome.
B
So, if you guys... I can go ahead and create that epic today and start putting stuff up on there, and then it'd help if everybody who's interested in helping out, which is hopefully most of you, would be willing to pick one up and just start hammering at it. I will be very open and available as much as possible, just to answer any questions or concerns that come up. Is there a Slack channel we should be posting questions in? Thank you, that's a great question, and yes, there is.
B
Alright,
thank
you.
Guys
really
appreciate
your
your
time
and
your
willingness
to
look
at
this
and
help
us
I
think
this
is
I,
think
it's
actually
going
to
be
a
really
impactful
for
customers
and
for
our
users
to
be
able
to
more
easily
do
all
this
stuff.
We've
had
there's
been
some
great
tools
out
there
and
I
I've
been
super
impressed
with
ki
Olli
and
sto
vet.
Just
as,
as
you
know,
these
are
useful
tools.