►
From YouTube: Policies and Telemetry WG 2018-05-23
A: Does anyone have anything to talk about, or should we just sort of post updates? Okay. I know — maybe you want to talk about the final update to the attributes? Just give me a status update — sure.
B: This is essentially a small delta over what we have today — a few changes at this point. We're kind of realizing that what we really have is a relationship between a set of workloads, and services are second-order things relative to how we route and how we apply policies.
B: So, as a result, the biggest change from what we had previously is that the notion of source.service is removed; instead you're expected to use source workloads and workload instances. A workload instance corresponds to a pod, and a workload would correspond to a deployment. In addition we've added — do you want to scroll down? Yeah.
B: So in this table we've added source.services. Is that in this table? Actually — yeah, okay, source.services. Because of the way the mesh runs, we can't authoritatively tell you which service traffic comes from; that's not a question we can answer. But we can answer which services it might come from, and if you've organized your mesh accordingly, then that can be a singleton and you'll effectively be told the traffic comes from this one service.
B: Aside from that, there are now attributes that describe workloads in addition to workload instances. So you can now query information kind of globally — which workload generated this traffic and where is this traffic headed — and apply policies to that. I expect we'll be adding a few more things in there over time.
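(For illustration only — a minimal Go sketch, not actual Istio/Mixer code, of the attribute change described above: a policy match keyed on the workload-level attributes instead of the removed source.service. The attribute names follow the ones mentioned in the discussion; the flat-map representation and the example values are assumptions.)

```go
package main

import "fmt"

// Attributes stands in for Mixer's attribute bag; in reality Mixer uses a
// typed, compressed representation, not a plain map.
type Attributes map[string]interface{}

// matchWorkload sketches a policy match expressed against the new
// workload-level attributes discussed above, rather than the removed
// source.service.
func matchWorkload(attrs Attributes, name, namespace string) bool {
	return attrs["source.workload.name"] == name &&
		attrs["source.workload.namespace"] == namespace
}

func main() {
	attrs := Attributes{
		// Workload instance (roughly: the pod).
		"source.uid": "kubernetes://reviews-v1-12345.default",
		// Workload (roughly: the deployment) — the new first-class concept.
		"source.workload.name":      "reviews-v1",
		"source.workload.namespace": "default",
		// The services this traffic *might* come from; a singleton only if
		// the mesh is organized so each workload backs exactly one service.
		"source.services": []string{"reviews.default.svc.cluster.local"},
	}
	fmt.Println(matchWorkload(attrs, "reviews-v1", "default"))
}
```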
B
But
that's
kind
of
this
oh
yeah,
so
we
explicitly
do
not
did
not.
We
started
defining
attributes
related
to
clusters
and
multiple
multi
cluster
behavior
and
decided
that
we
don't
know
enough
about
this
yet
so
any
attribute
we
define
would
probably
be
wrong.
So
we'll
wait
until
we
actually
have
multi
cluster
functionality
and
then
we'll
figure
out
which,
which
attributes
to
put
and
I
think
Doug's
started
to
work
already
to
put
put
these
new
attributes
in
place.
There.
A
Yeah
so
I
expect
this
is
the
one
a
branch
or
one
of
them
is
available
to
be
committed
to
this
code.
We'll
go
in
and
we'll
have
support
for
this
there'll
be
some
changes
a
little
bit
around
destination
service
and
how
we
derive
service
names,
etc.
As
Martin
was
talking
about,
but
yeah.
Those
mostly
exist
at
this
point,
just
waiting
waiting
for
permission
to
start
committing.
B: Okay, so that's why there are no labels or namespace there? There are no labels associated with a workload — why is that? There are labels, just too many to surface here: if you consider a workload a cross-cluster concern, there are going to be multiple different deployment objects which can have potentially conflicting sets of labels.
B: I would not — so we're not moving away from that; this is a replacement. A lot of the policies we want to apply used to be on source service or destination service. Source service is going away because it's unreliable, so the reliable thing is: either you apply the policy based on workload.name or workload.uid, which is about the same as source.service, or you use —
D: And that decision is made how — in the code, in the deployment? Because I think some of it is going to be in the code, right? I think we can now preserve the source IP — I don't know whether that's the case — but there can be three other proxies in front of us, and then we don't know what the user wants, right? For the policies, do they want to select the x-forwarded-for, and which of those things do we want...?
B: It's patterned behind dispatch: right at the tippy-top, as soon as a call enters over the gRPC channel, it immediately checks to see if there's a match for the set of attributes, and if so it sends a response — nothing else runs downstream, the other APAs don't run, that's it. So I guess, like I said, I'm finishing the proof benchmarks; as soon as the branch is open I'll check that in, and it doesn't use the same...
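(A rough Go sketch of the short-circuit dispatch pattern described here: a lookup keyed by the incoming attribute set sits at the very top, and on a hit the response is returned immediately so no adapters/APAs run. This is not the actual mixer implementation; the cache keying and types are placeholders.)

```go
package main

import "fmt"

// checkResult stands in for the response mixer would return for a Check call.
type checkResult struct {
	allowed bool
}

// dispatcher sketches the short-circuit: a cache lookup at the very top of
// dispatch; on a hit nothing downstream (no adapters/APAs) runs.
type dispatcher struct {
	cache map[string]checkResult // key: some digest of the attributes
}

func (d *dispatcher) Check(attrKey string) checkResult {
	if res, ok := d.cache[attrKey]; ok {
		return res // short-circuit: respond immediately, skip all adapters
	}
	res := d.runAdapters(attrKey) // the expensive path
	d.cache[attrKey] = res
	return res
}

func (d *dispatcher) runAdapters(attrKey string) checkResult {
	// Placeholder for the full adapter dispatch pipeline.
	return checkResult{allowed: true}
}

func main() {
	d := &dispatcher{cache: map[string]checkResult{}}
	fmt.Println(d.Check("source.workload.name=reviews-v1"))
	fmt.Println(d.Check("source.workload.name=reviews-v1")) // served from cache
}
```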
D: I mean, independently, right — I would say that for the Kubernetes adapter I think a cache is probably not going to help, but there may be other APIs where a cache will help. So I think having the ability to deploy a cache in front of an APA is a feature that we want, regardless of what the exact measurements are or what our particular case is right now. Yeah.
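(A hedged sketch of the "cache in front of an APA" idea: a memoizing wrapper around an adapter-style lookup. A real cache would need TTLs and invalidation since the underlying data can change; the function names here are hypothetical.)

```go
package main

import (
	"fmt"
	"sync"
)

// lookupFn stands in for an attribute-producing adapter (APA) call, e.g.
// resolving pod metadata from the Kubernetes API.
type lookupFn func(key string) (string, error)

// cached wraps a lookup with a simple memoizing cache.
func cached(fn lookupFn) lookupFn {
	var mu sync.Mutex
	memo := map[string]string{}
	return func(key string) (string, error) {
		mu.Lock()
		if v, ok := memo[key]; ok {
			mu.Unlock()
			return v, nil
		}
		mu.Unlock()
		v, err := fn(key)
		if err != nil {
			return "", err
		}
		mu.Lock()
		memo[key] = v
		mu.Unlock()
		return v, nil
	}
}

func main() {
	calls := 0
	slow := func(key string) (string, error) {
		calls++ // pretend this is a remote API call
		return "workload-for-" + key, nil
	}
	fast := cached(slow)
	fast("pod-a")
	fast("pod-a")
	fmt.Println("remote calls:", calls) // 1, thanks to the cache
}
```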
B: Yes — the one thing we never closed on with these APAs is doing proper usage tracking of attributes. APAs tend to dirty up more attributes than the actual user config consumes, and we never really plumbed that through. I'm hoping we can improve the situation after the new adapter model is in place — we can enrich the protocol a little bit so we can do better caching.
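(A sketch of what attribute usage tracking could look like: record which attributes an evaluation actually reads and build the cache key only from those, so adapters that touch extra attributes don't fragment the cache. This mirrors the idea discussed above, not mixer's real protocol types.)

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// trackedBag wraps an attribute map and records which attributes were
// actually read during evaluation; only the referenced attributes should
// contribute to the cache key.
type trackedBag struct {
	attrs      map[string]string
	referenced map[string]bool
}

func (b *trackedBag) Get(name string) (string, bool) {
	b.referenced[name] = true
	v, ok := b.attrs[name]
	return v, ok
}

// cacheKey builds a deterministic key from only the referenced attributes.
func (b *trackedBag) cacheKey() string {
	keys := make([]string, 0, len(b.referenced))
	for k := range b.referenced {
		keys = append(keys, k+"="+b.attrs[k])
	}
	sort.Strings(keys)
	return strings.Join(keys, ";")
}

func main() {
	bag := &trackedBag{
		attrs: map[string]string{
			"source.workload.name": "reviews-v1",
			"request.id":           "abc123", // never read by the config below
		},
		referenced: map[string]bool{},
	}
	bag.Get("source.workload.name") // what the user config actually consumes
	fmt.Println(bag.cacheKey())     // request.id is excluded from the key
}
```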
C: I have a proof of concept working — mixer from the latest master — and I used that, wired up in Kubernetes to supply client-side mixer telemetry, and it seems to work. I see charts for client-side stuff that are identical to the server side, except for the pods I manually injected. So we have data available. There are maybe a few patches to the proxy waiting for master to be unfrozen that allow this kind of thing — there's a patch to get the cluster information out in the right type, which is then attached to Envoy metadata.
A: The other thing I'll mention — I think I pinged some people in the official channel as I started — is that I've begun work on a proof of concept for what I'm calling istio-state-metrics, which will parallel kube-state-metrics. The main idea is to expose configuration information for the mesh as metrics. The first thing I did was look at mixer rules, so there's a metric for actions showing the instances that match the rules, as well as the rules and instances known in the system.
A: So you can query and get the latest state of mixer through metrics that are just built from the server's state, exposing that information. It's still really early stages and it needs documentation and testing, etc., but it's an approach I'm exploring as just a different way to get data for debugging and understanding what's happening inside of Istio. So...
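(Not the actual proof of concept — just a minimal sketch, using the Prometheus Go client, of exposing configuration state as metrics in the spirit of kube-state-metrics: one gauge series per known rule. The metric name, labels, and values are made up for illustration.)

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// ruleInfo follows the kube-state-metrics pattern: one time series per
// configured rule, labeled with its identity, value fixed at 1.
var ruleInfo = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "mixer_rule_info",
		Help: "One series per rule currently known to the config server.",
	},
	[]string{"rule", "namespace"},
)

func main() {
	prometheus.MustRegister(ruleInfo)

	// In a real PoC this would come from watching the config store;
	// here a couple of rules are hard-coded for illustration.
	ruleInfo.WithLabelValues("promhttp", "istio-system").Set(1)
	ruleInfo.WithLabelValues("kubeattrgenrulerule", "istio-system").Set(1)

	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9100", nil)
}
```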
A: This is more about pumping data out into metrics that can be looked at sort of independently. It's like a ControlZ across the mesh, or a ControlZ for the API server, but specific to the configuration state of Istio. Anyway, I haven't spent a lot of time on it — just something we can talk about in more detail as it becomes more of a thing than just a proof of concept.
B
Just
I'm
trying
to
get
a
mental
model
of
what's
the
what's
the
right
way
to
do
things,
because
so
what
control
Z?
If
something
this
behaves,
you
can
connect
to
the
individual
mixer
instance
and
look
at
it
and
figure
out.
Oh
it's
misconfigured,
or
something
what
your
stuff,
no
more
of
an
auditing
kind
of
thing.
Perhaps
yeah.
A
Different
instance,
you
might
want
to
check
in
it
is
just
what
is
the
speed
of
contained
inside
of
this
field,
absent
any
sort
of
binary
or
anything,
that's
processing
it
like
what
is
the?
What
did
it
look
like
in
the
API
server?
And
what
do
we
know
about
it
at
a
given
time?
So
then
yeah,
you
could
go
back
an
audit,
but
you
could
also
just
query
directly
to
say
what
are
all
the
instances
available
of
type
metrics
and
get
a
list.
D: The other difference between the two approaches, right, is — like I said, because it's kube-state-metrics — Kubernetes has a very centralized view of the configuration, and even though the actual controllers are distributed, each individual controller always acts on exactly one object. That's right. So I can look at the deployment object and...
E: Like I said, I think it is of net negative utility, because I think people are going to think that it's the same old source service, and for maybe the average case — when a pod is only serving one service — it'll actually look like that, and then they'll make graphs or alerts based on that. And then all of a sudden...
E: No — if you have multiple services on a pod, a policy that you put in place for one service is going to have a side effect on the other services' behavior, and you didn't expect that. Whereas if you're setting it on a workload, then you know exactly what's going to be affected; it's more explicit what's going to be affected. That's true.
E: You could make that authenticated easily, and so the question is: assuming we did actually get that information from the application, how would we represent that, if at all, in attributes?
B: The issue is, if we came up with this as a string and later changed it to a string list, it would have to become source.services2 or something, right, to stay compatible. Yes, yeah. So it's undesirable. I agree, so the best choice would be: if we can do the string list without a lot of work, then it's not —
E: — a problem, right. Well, I mean, I don't know if that's actually solving anything, because then we still have the same question of how you export that into metrics, and probably someone's going to convert it back into a comma-separated list, and that doesn't really improve anything on the metric side. It helps the policy side, right.
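(A tiny sketch of the concern about the metric side: a list-valued attribute such as source.services would most likely end up flattened back into a single comma-separated label value when exported, which is why a string-list type mainly helps policy rather than metrics. The function name is hypothetical.)

```go
package main

import (
	"fmt"
	"strings"
)

// toLabel flattens a list-valued attribute into a single metric label value,
// which is roughly what an exporter would have to do anyway.
func toLabel(services []string) string {
	return strings.Join(services, ",")
}

func main() {
	fmt.Println(toLabel([]string{
		"reviews.default.svc.cluster.local",
		"ratings.default.svc.cluster.local",
	}))
}
```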
E: I mean, my fundamental argument is that there is no service-level information here — there's workload-level information. The resource that serves this is the compute of the workload, but you can't actually break the data down by service, because that's not something source.services gives you; it does not provide a breakdown by service.