From YouTube: Policies and Telemetry WG Meeting - 2019 03 13
Agenda:
- Review 1.1 release notes PR (https://github.com/istio/istio.io/pull/3558/files)
- Roadmap for 1.2+
- Rough proposal: https://bit.ly/2EZlSWC
Looking for community feedback (what’s missing, what’s wrong, etc.)
- “connection.mtls needs improvement” doc discussion
- gRPC streaming message counter doc discussion
- Mixer v2 / WASM investigation update
NOTE: istio-policy is going to be off by default in 1.1
Great. Is there anything in particular you're interested in with regards to policy and telemetry?

Nope, just trying to learn more, figure out where the best place for me to get involved is.

Awesome. Well, there's plenty of good spots.
Right, so this is a proposal to improve the situation around connection.mtls. Right now it's a bit misleading: connection.mtls really talks about the upstream and downstream connections in Envoy. That means if you look at the attribute from the client-side proxy, it looks like mTLS is not enabled, because it's reporting the connection from the application to the sidecar, and that is confusing and everything.
It reports that it's not secure. So I started looking into it anyway, but it has some problems. The way TLS is handled inside Envoy makes deriving this stuff a bit difficult, but I don't think there is anything blocking in theory. The problem really is that when you try to understand the upstream connection during the check call, we don't really have an upstream connection yet, because that happens asynchronously. So we cannot reliably tell what the actual status is; what we can tell is the desired status.
The only issue is that it mixes attributes that look like configuration state with actual runtime state, right? The downstream connection is actual runtime state, so even if you're in permissive mode, you will actually know whether the connection was really made using TLS or not. But for the upstream it's the other way round, and we're abstracting everything in one place and kind of mixing configuration with state.
So I think that if we are certain that what would be observed would be equal to what is configured, then we don't have to make that distinction. Then we can actually say that, no, this is actually the observed state, even though it's "observed" using some other magic.
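To make the desired-versus-observed distinction concrete, here is a minimal Go sketch; all of the type and function names are hypothetical stand-ins, not the actual Mixer or Envoy APIs.

```go
package main

import "fmt"

// MTLSStatus models a connection.mtls-style attribute value.
type MTLSStatus string

const (
	MTLSEnabled  MTLSStatus = "enabled"
	MTLSDisabled MTLSStatus = "disabled"
)

// AuthPolicy is a stand-in for the configured authentication policy.
type AuthPolicy struct{ MutualTLS bool }

// DownstreamConn is a stand-in for the already-accepted downstream connection.
type DownstreamConn struct{ UsedTLS bool }

// desiredUpstreamMTLS: at check time the upstream connection does not exist
// yet, so only the configured (desired) status can be reported.
func desiredUpstreamMTLS(p AuthPolicy) MTLSStatus {
	if p.MutualTLS {
		return MTLSEnabled
	}
	return MTLSDisabled
}

// observedDownstreamMTLS: the downstream connection is real runtime state, so
// even in permissive mode we can report what was actually negotiated.
func observedDownstreamMTLS(c DownstreamConn) MTLSStatus {
	if c.UsedTLS {
		return MTLSEnabled
	}
	return MTLSDisabled
}

func main() {
	fmt.Println("upstream (desired):", desiredUpstreamMTLS(AuthPolicy{MutualTLS: true}))
	fmt.Println("downstream (observed):", observedDownstreamMTLS(DownstreamConn{UsedTLS: false}))
}
```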
There's no way to get the exact connection that Envoy used, so it's a listener problem; right now you just have no method for asking for that. So I think that's mostly the challenge: just making Envoy behave in a way that lets us collect this stuff.
The one thing I'm wondering about is whether these attributes need to be gRPC-specific or if they should be more generic. If you imagine, at some point when everything else is done and we start looking at supporting other protocols that way, aren't you going to have the same issue? So should we start building a set of generic attributes that can be applied to these other protocols as well?
Right, so the idea is: we already report telemetry about setting up and tearing down all these connections. Now we want to go one level deeper, look at what's inside these transports, and provide some information about that. So message-related stuff seems like the right route; it's a pretty generic kind of counter, right?
There is a loop that counts messages; so far I was just measuring how much that function costs. But the real implementation should send the messages out every once in a while, at the same time as the byte stuff. It can happen in parallel, because we also need to report things like bytes, and we don't do that already.
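As a rough illustration of this kind of counter, here is a sketch using grpc-go's stats.Handler hook to count messages in each direction. The real implementation would live in the proxy, and the periodic reporting loop discussed above is elided.

```go
package main

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

// messageCounter counts messages flowing on gRPC streams. A periodic loop
// (elided) would flush these counts as telemetry every once in a while.
type messageCounter struct {
	sent, received uint64
}

func (c *messageCounter) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
	return ctx
}

func (c *messageCounter) HandleRPC(_ context.Context, s stats.RPCStats) {
	switch s.(type) {
	case *stats.OutPayload:
		atomic.AddUint64(&c.sent, 1) // one message sent on some stream
	case *stats.InPayload:
		atomic.AddUint64(&c.received, 1) // one message received
	}
}

func (c *messageCounter) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

func (c *messageCounter) HandleConn(context.Context, stats.ConnStats) {}

func main() {
	// Attach the counter to a server; byte counts could be taken from the
	// same payload events and reported in parallel.
	_ = grpc.NewServer(grpc.StatsHandler(&messageCounter{}))
}
```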
I just want to move on. The first item I put on the agenda was the release notes PR, so I want to encourage everyone to take a look at the release notes and see if we're missing things there. I think we have some documentation that we need to clean up, especially on the policy and telemetry add-ons. This is a P0 if we're getting release 1.1 out: getting the release notes solidified. So I encourage everyone to take a look at that. Yeah.
Okay, so I put together this draft set of things that I think we should tackle in the 1.2 timeframe and beyond, to try and give us a little bit of thinking ahead. This is in line with the direction that the working group should present to the larger community about what it's working on for the next couple of months, so I thought I'd try to organize thoughts.
Here I tried to stick to the theme of 1.2 being focused on stability and improving the user experience of what we have, instead of adding new features. That's mainly how I've tried to organize the work items. I can go through them one by one, I guess, but that's the high-level overview. We've had a lot of questions on the discussion forums about the add-ons and how to use them, how to integrate with them: say, "I already have Jaeger installed, how do I get it to integrate with this?"
That kind of thing. So I feel like a lot of what we need to do is focus on documentation around how to use the add-ons, including how to do productionization of them, and then maybe support existing things like operators for those add-ons. There are also a number of issues that people have opened about things they'd like to see in the dashboards, so I added them here as sort of addressing the known issues with the dashboards.
On release workflow: one issue that we've had frequently is that we have an enhancement to the dashboards which is otherwise self-contained. It doesn't require any new counters or anything like that, but it has to wait for an Istio release, which is basically unnecessary. A new dashboard is very much useful to people, right? Someone added something and now we want to get it out, kind of orthogonal to when the next Istio release goes out.
Right, so with Grafana, at least I've never heard anyone specifically saying "oh, I wanted the latest version of the Grafana binary and I had to wait too long for that." Okay, so it's just our configuration state, right.
The only thing that's sort of hidden in here is this first-class tracing API, and that is sort of a new feature. But as we start to support more and more tracing options that people are asking for, like LightStep, or OpenTracing-based Envoy integrations, I feel like we should have a unified way to talk about tracing, so Mixer isn't doing its own separate tracing flags that have to be configured differently than the rest of the system. That kind of thing.
The next section is about customizing how telemetry, and probably also policy, is done per workload. I think we've had a lot of requests to control things at a finer-grained resolution than we currently have: whether or not we're generating access logs for debugging, can I exclude some workloads from tracing, turning policy on and off, that kind of thing. We've used annotations for some of this, and I think we should expand that support and then clean up how that all works in this document.
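A minimal sketch of what annotation-driven control could look like; the annotation name telemetry.istio.io/tracing is hypothetical, not an existing Istio annotation.

```go
package main

import "fmt"

// tracingAnnotation is a hypothetical per-workload annotation; Istio's real
// annotation set differs, and cleaning that up is what this item is about.
const tracingAnnotation = "telemetry.istio.io/tracing"

// tracingEnabled decides per-workload tracing from pod annotations, falling
// back to the mesh-wide default when the annotation is absent.
func tracingEnabled(annotations map[string]string, meshDefault bool) bool {
	if v, ok := annotations[tracingAnnotation]; ok {
		return v == "true"
	}
	return meshDefault
}

func main() {
	pod := map[string]string{tracingAnnotation: "false"}
	fmt.Println(tracingEnabled(pod, true)) // false: this workload opted out
}
```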
That's sort of my stab at describing what needs to be done there. I know we've talked about the stats generation filtering, but we don't have sensible defaults there. Maybe we improve that so that there are sensible defaults, and we keep people from shooting themselves in the foot. That kind of thing.
I think that might be covered by kind of what you're talking about with the tracing API CRD. It depends what resolution you were thinking about going down to, but in Jaeger there are the beginnings of an adaptive sampling mechanism that allows users to configure the sampling rate based on services or individual endpoints. That's to enable users to focus on where the errors may be occurring, and to reduce the amount of unnecessary tracing information that's being captured. So I'm thinking, if we can build a similar type of mechanism into Istio, and then maybe have other high-level tooling, possibly Kiali, that would allow users to easily enable a high level of sampling when they find a certain type of request that's causing problems. It would allow more fine-grained adjustments.
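A toy sketch of the per-service, per-endpoint sampling idea; this is not Jaeger's actual implementation, just the shape of the lookup, with all names invented for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
)

// sampler holds per-operation sampling rates, keyed by "service/operation",
// with a mesh-wide default; an adaptive system would adjust these rates.
type sampler struct {
	defaultRate float64
	perOp       map[string]float64
}

func (s *sampler) sample(service, operation string) bool {
	rate := s.defaultRate
	if r, ok := s.perOp[service+"/"+operation]; ok {
		rate = r
	}
	return rand.Float64() < rate
}

func main() {
	s := &sampler{
		defaultRate: 0.01, // 1% baseline
		perOp: map[string]float64{
			// boost sampling on an endpoint that is causing problems
			"reviews/GET /ratings": 1.0,
		},
	}
	fmt.Println(s.sample("reviews", "GET /ratings")) // always sampled
}
```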
That's a lot, and I think that's something this working group can take on, and it would also add value. Then the next bit is about what extra information we can expose based on data that we already have. For example, should we make region and locality information available for logs, traces, et cetera?
We didn't have a good way of answering questions like "are there policy rules in place in Mixer?", so I want to add some metrics around that, as well as providing examples on, say, if I'm monitoring Pilot, how do I know when things are going bad? Should we provide example alerts to show people how they can proactively monitor these Istio components themselves, and maybe extend that to the mesh? That's what that last bullet point is. So, does that make sense?
Two more things. One is, I think, maybe not in the list: Mixer restarts when Galley is not available. Did you cover that? Okay, so basically the out-of-the-box experience sometimes, actually often, is that you see Mixer has already gone through a couple of restarts right away, and then the operator needs to go and check: okay, why didn't we start, did something bad happen? So I think we need to do something there.
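One possible way to smooth that out, sketched under the assumption that retrying at startup is acceptable: instead of exiting (and racking up crash-loop restarts) while Galley is unavailable, keep retrying with backoff and log why. connectToGalley here is a stand-in, not Mixer's actual startup code.

```go
package main

import (
	"errors"
	"log"
	"time"
)

// connectToGalley is a stand-in for dialing the config source.
func connectToGalley() error { return errors.New("galley not ready") }

// waitForConfigSource retries with exponential backoff instead of exiting,
// so the pod doesn't accumulate restarts while Galley comes up.
func waitForConfigSource(maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 100 * time.Millisecond
	for {
		err := connectToGalley()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		log.Printf("config source not ready, retrying in %v: %v", backoff, err)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	if err := waitForConfigSource(30 * time.Second); err != nil {
		log.Fatal(err)
	}
}
```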
And then I just sort of had this catch-all category at the bottom, where I think there's a lot of work that we need to do. I think we can improve our documentation and test coverage for the Prometheus adapter with a couple of scenarios. I've been asked questions about how it works across cluster boundaries and things like that. I don't have great answers there, so it'd be nice.
If we could put those into FAQs or update the docs from that. There's some cleanup that we need to do. I mean, we deprecated the adapter CRDs, but not really, because it took so long to get 1.1 out, and I think it's time to finally just turn them off, stop pushing those CRDs, and that would also improve our install experience.
We've had a couple of issues with the ingress gateway where people are calling from inside the mesh and it's forwarding Istio headers, and that's changing source information, so it looks like traffic is coming from two places instead of one place. Oh okay, yeah. And I know we've thought about this.
Before, we didn't think that it would be an issue right away, but I think in the 1.2 timeframe we want to get that out. And then, right below that at the bottom, I think it fits in with some of the Mixer v2 stuff that's going on, where we're going to have these configuration-free implementations of default metrics and telemetry. So maybe this is the time to start documenting that and providing sort of "here's a simpler way to adopt, sort of partially adopt, Istio." So I don't know.
Thinking in terms of the multi-release messaging: when v2 is available, we'd like to not have any burden of the legacy. Adapters need to be removed from the system by the time v2 hits. So we need to have some advance notice to tell people: maybe in 1.1 we say, hey, in 1.2 we're going to delete those.
Right, okay. So I think right after 1.1 is the perfect time to queue that up and say: okay, here we did the migration, with the caveat that you mentioned, and then: adapter authors, please go and make sure that all your adapters still work. I mean, if the adapters have good test coverage with integration tests, you should be able to tell if it works or not. If they don't, then we can ask the adapter authors to bring them kind of back up to code.
Yeah, I think we should definitely put that in. Okay, so now the question is: because the 1.1 release notes are so dense, would it get lost there? I mean, the alternative is we send out a separate email saying that this is happening: okay, now we just moved everything out, and here's a tool also.
Is there, or should there be, a framework for developing these adapters? Or maybe an example. Well, I don't want to say example; maybe a project that could be a base, you know, for creating these things, one that takes care of a lot of the details that you end up having to deal with.
Unless it relies on some maintainable scaffolding outside of that, though, how do you deal with bugs and updates to it? I would like, like I said, if there's a framework or a library or something that could be relied on, that could be maintained, and that would have, or could have, downstream effects on existing adapters that you could pull from, actually.
Like secrets stuff, because a lot of the adapters have to connect to external things, right? And we have talked about this before, but the idea of having secrets that aren't tied specifically to the implementation. So if I create an adapter right now, for me to have secure configuration I would have to say: okay, I'm going to use Kubernetes secrets. But then that only works in Kubernetes, I mean.
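The shape such an abstraction might take is sketched below; the SecretStore interface and the env-backed implementation are hypothetical, just to show how adapter config could name a secret without binding to Kubernetes.

```go
package main

import (
	"context"
	"fmt"
	"os"
)

// SecretStore lets an adapter resolve a named secret without knowing where
// it lives (Kubernetes, Vault, environment, ...).
type SecretStore interface {
	Get(ctx context.Context, name string) ([]byte, error)
}

// envStore resolves secrets from environment variables; a Kubernetes-backed
// or Vault-backed store would satisfy the same interface.
type envStore struct{}

func (envStore) Get(_ context.Context, name string) ([]byte, error) {
	v, ok := os.LookupEnv(name)
	if !ok {
		return nil, fmt.Errorf("secret %q not found", name)
	}
	return []byte(v), nil
}

func main() {
	var store SecretStore = envStore{}
	// Adapter config would carry only the secret's name, e.g. "DB_PASSWORD".
	secret, err := store.Get(context.Background(), "DB_PASSWORD")
	fmt.Println(string(secret), err)
}
```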
Those are ongoing adapter issues as well. Oh, and one other thing while I'm talking: I had proposed an issue a long time ago, like a couple of years ago now I guess, to have more request and response timestamps, to be able to bracket performance information for incoming and outgoing requests. The stuff that's useful in a proxy setting, like start and end of receive time, send time, that kind of thing, so that you could fully measure the performance of every hop.
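To make that concrete, a sketch with illustrative field names (these are not actual Istio attributes): with a few timestamps per hop you can separate time spent waiting on the upstream from the total hop time.

```go
package main

import (
	"fmt"
	"time"
)

// hopTimestamps sketches the timestamps proposed per request/response, enough
// to bracket performance at each hop. Field names are invented for clarity.
type hopTimestamps struct {
	RequestReceivedStart time.Time // first byte of request in
	RequestSentEnd       time.Time // last byte of request forwarded upstream
	ResponseReceivedEnd  time.Time // last byte of response back from upstream
	ResponseSentEnd      time.Time // last byte of response out to downstream
}

// upstreamTime measures just the time spent waiting on the upstream.
func (t hopTimestamps) upstreamTime() time.Duration {
	return t.ResponseReceivedEnd.Sub(t.RequestSentEnd)
}

// totalTime measures the whole hop as seen by the downstream caller.
func (t hopTimestamps) totalTime() time.Duration {
	return t.ResponseSentEnd.Sub(t.RequestReceivedStart)
}

func main() {
	now := time.Now()
	t := hopTimestamps{
		RequestReceivedStart: now,
		RequestSentEnd:       now.Add(2 * time.Millisecond),
		ResponseReceivedEnd:  now.Add(12 * time.Millisecond),
		ResponseSentEnd:      now.Add(13 * time.Millisecond),
	}
	fmt.Println(t.upstreamTime(), t.totalTime()) // 10ms 13ms
}
```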
Those are good. All of this kind of stuff is definitely going to be easier once Mixer is in Envoy, because we're not paying a big tax to send the attributes across. And I think we do need to have support to turn off functionality if we observe that nobody's consuming the particular attributes.
We're starting to have very limited WebSocket support now, so it'd be nice to provide attributes around when connections are upgraded, and maybe start supporting some basic analytics, gathering signals to make recommendations for the mesh at some point. There's a question about mirrored traffic, and I don't think we have any good answers about what the right way to monitor mirrored traffic is.
That's something we should look into post-1.2. There have been suggestions of providing sort of a proxyless Istio, maybe the Mixer v2 functionality inside of gRPC, and I don't know what we need to do there, so I sort of grouped that as a possible thing to work on. Then I was also thinking: we have this experimental command line for providing sort of a dashboard. What else could we do on the command line for policy and telemetry?
We could just sort of make that more interesting. So I don't know if other people have other ideas of things that should be looked at, or that we should start planning for, but this would be the spot. So please comment and add to this. These are just things that came to my head. It's a long list, yeah, right, yeah.
I think right now we are okay, I mean, but right after we do the no-config version, going from no config to config is going to need a lot more plumbing and a lot more thought. So that's kind of right afterwards. But I think right now it's fine.
So, for example, Envoy already supports static configuration up front, so we can actually push these adapters in with some things that are statically configured, and have a working system without all the config being fully dynamic. So there are still multiple paths, and it won't block right away, but eventually it will be needed.
So the operator could do that? Yeah, the operator would do exactly that, but we don't have it right now. It would say: hey, looks like you don't have policy, so it's safe to switch it off when I move you over; or, you do have policy, so I'm not going to switch it off. So if we had the operator, it could have done it. Okay.
But right now I'd feel better if we could tell the users: run this command line and it'll make sure everything's okay for you. That command line will check to see if there are resources; if there are, then it'll turn on the policy checks, and if there are no resources, it leaves them off. Okay.
So you want the operator command to be made available as, like, an ad-hoc thing, just so we can stick it in there. Okay, okay! That's a good idea. I think we could couch it as a new-release validation tool or something like that, right? I mean, the operator will do that eventually, but even without the operator, someone can run it and say: I just upgraded and I want to see if the stuff that I have still works.
So, actually, to do it correctly we would have to do what they're saying. However, we don't have to go all the way, right? What we can very easily detect is whether you were using the previous default configuration, which had no check adapters deployed, and if we find anything else apart from that, we'll say: hey, it's unsafe, you go figure it out. So, I don't know what it takes to do it completely.
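A sketch of that easy detection, with hypothetical types: if the cluster's Mixer rules contain no check-phase rules beyond the known previous defaults, leaving istio-policy off is safe; anything else gets flagged for the user to figure out.

```go
package main

import "fmt"

// Rule is a stand-in for a Mixer rule resource found in the cluster.
type Rule struct {
	Name    string
	IsCheck bool // true if the rule drives a check (policy) adapter
}

// safeToLeavePolicyOff reports whether disabling istio-policy is safe: only
// if no check rules exist beyond the known previous defaults.
func safeToLeavePolicyOff(rules []Rule, previousDefaults map[string]bool) (bool, []string) {
	var unsafe []string
	for _, r := range rules {
		if r.IsCheck && !previousDefaults[r.Name] {
			unsafe = append(unsafe, r.Name)
		}
	}
	return len(unsafe) == 0, unsafe
}

func main() {
	rules := []Rule{
		{Name: "kube-attr-rule", IsCheck: false}, // telemetry-only: fine
		{Name: "my-quota", IsCheck: true},        // user-added check rule
	}
	ok, offenders := safeToLeavePolicyOff(rules, map[string]bool{})
	fmt.Println(ok, offenders) // false [my-quota]: unsafe to leave policy off
}
```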