From YouTube: Policies and Telemetry WG - 2021-01-13
A
I'm thinking of submitting a really short lightning talk, like a quick dive into Wasm, but I was wondering if anyone else is doing a Wasm workshop, or some longer stuff, for IstioCon.
A
So I'm gonna submit something short and see what the program committee thinks, whether they think 10 minutes is the right length for that.
D
So, and I am on the program committee, I can tell you that we love any talks you submit, so don't worry about the length. If the content is good but we feel the length is too long or too short, we can always reach out and adjust it that way.
C
Yeah, so I think, I mean, since you already have a lot of recent stuff that's kind of in the works, maybe you should consider having a talk. You have to think about it, but I think you have quite a bit of stuff that you can present.
A
Well, so I was looking for two things. If anyone said they were already gonna do a big workshop or something but needed help, I was gonna volunteer; if anyone said they were already gonna do an intro thing, I was gonna not conflict. But it sounds like nobody is, so I'm going to push forward on a basics-to-advanced intro to get more people excited about WebAssembly.
C
Hey, since you're on the committee, just to kind of complete the question that Ed is asking: are there any submissions? Is there anything else on this?
B
Anything else we should talk about related to that, or should we move on? Move on, all right. Nathan, I see that you are here today, and you've had something on the future agenda for probably about a month now. Do you want to talk about writing good integration tests now?

F
Sure.

B
Please.

F
All right. Yeah, I've already been kind of making the rounds; I think this is the last working group, just for completeness.
F
Many of you have already seen this, but we had a bit of an issue when we were trying to convert our integration tests over so that they were basically all multi-cluster, just so we don't have to test so many different combinations of things: everything's multi-cluster, and we're automatically testing effectively every feature with all the various combinations of multi-cluster.
F
While we were doing that transition, however, there were a few tests being added that kind of crept in and ultimately didn't test multi-cluster. So I just want to go through a quick overview of things to think about when we're writing our integration tests. Rule number one, obviously: keep test times low. This especially becomes an issue with tests that leverage multiple clusters; just provisioning the clusters themselves takes a bit of time, and if we're actually sending traffic to and from each cluster, that adds time as well. So there are a few...
F
Scroll down, yeah. Rule number two: don't be flaky. It should be obvious, but I'm not sure if we'd agreed, in the TOC, on how to deal with flaky tests in our process. If you do see a flaky test, though: if you see something, say something, and we should raise an issue. Obviously, if it's not your flake, you probably don't have to deal with it yourself, but we should at least raise awareness and try to get it fixed.

F
Rule number three: use feature labels. When you do add a new test, try to assess the features that you're testing, and if the feature labels don't exist, add them, so that we can actually track what you're testing. Rule number four, the one that bit us, is: use all clusters. The test framework has an abstraction of cluster, and the set of clusters is now flexible.
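The feature-label rule can be pictured as a registry check: a test declares the features it covers against a known set, and unknown labels are an error that prompts you to add them. The sketch below is a toy model of that idea only; `knownFeatures` and `checkFeatures` are illustrative names I made up, not the Istio test framework's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// knownFeatures models a checked-in feature registry; in a real framework the
// allowed labels would live in a shared file rather than in code.
var knownFeatures = map[string]bool{
	"observability.telemetry": true,
	"traffic.routing":         true,
}

// checkFeatures validates a test's labels, accepting dotted sub-features of a
// registered prefix (e.g. "observability.telemetry.stats").
func checkFeatures(labels ...string) error {
	for _, l := range labels {
		ok := false
		for f := range knownFeatures {
			if l == f || strings.HasPrefix(l, f+".") {
				ok = true
				break
			}
		}
		if !ok {
			return fmt.Errorf("unknown feature label %q: add it to the registry", l)
		}
	}
	return nil
}

func main() {
	// A label under a registered prefix passes; an unregistered one errors.
	fmt.Println(checkFeatures("observability.telemetry.stats"))
	fmt.Println(checkFeatures("security.mtls"))
}
```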
F
Like
you
don't
know
from
the
test
itself,
how
many
clusters
you're
going
to
have
it's
kind
of
passed
in
as
a
as
a
bootstrap
argument
to
the
to
the
test
framework,
so
your
test
should
just
effectively
take
whatever
clusters
you're
given
and
just
run
your
test
kind
of
scale
up
your
test
to
cover
all
those
clusters.
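The "use all clusters" rule means the test body enumerates every source-to-destination combination over whatever cluster set it receives, rather than hard-coding a count. A minimal self-contained sketch of that pattern follows; `Cluster` and `allPairs` are stand-ins I invented for illustration, not the framework's types.

```go
package main

import "fmt"

// Cluster stands in for the framework's cluster abstraction. A test never
// knows the cluster count up front: the set is supplied when the suite is
// bootstrapped, so the test must scale to whatever it is given.
type Cluster struct{ Name string }

// allPairs enumerates every source->destination combination, including
// same-cluster traffic, which is the coverage a multi-cluster test needs.
func allPairs(clusters []Cluster) [][2]Cluster {
	var pairs [][2]Cluster
	for _, src := range clusters {
		for _, dst := range clusters {
			pairs = append(pairs, [2]Cluster{src, dst})
		}
	}
	return pairs
}

func main() {
	// Two clusters yield four combinations; three would yield nine, with no
	// change to the test body.
	for _, p := range allPairs([]Cluster{{"primary"}, {"remote"}}) {
		fmt.Printf("send traffic %s -> %s and verify\n", p[0].Name, p[1].Name)
	}
}
```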
F
Now
the
nice
thing
is
that
you
don't
have
to
worry
about,
like
all
the
details
of
what's
going
on
behind
the
scenes.
Like
are
those
clusters
vms
or
are
they?
Are
we
using
a
multi-primary
or
primary
remote
configuration
for
multi-cluster?
We
don't
care
your
test
just
needs
to
deploy
things
to
each
cluster
and
and
verify
that
the
test
works
in
in
all
the
combinations
from
from
every
cluster
to
every
other
cluster
yeah.
So
you
can.
F
You
can
read
the
details
there,
but
that's
the
basic
idea,
and
then
this
last
one
is
is
kind
of
more
of
a
knit.
But
there
were
a
few
cases
where
folks
were
we're
kind
of
using
the
the
built-in
go
things
for
doing
things
like
cleaning
up
and
whatnot.
But
you
know,
check
check
out
the
framework
and
use
framework
features
whenever
possible,
so
like
use
the
framework
test
contacts
instead
of
testing.t,
as
well
as
like
the
cleanup
capabilities.
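The point of registering cleanup through the framework's test context rather than testing.T is that the framework can control teardown itself, for instance ordering cleanups or keeping resources around for debugging when the suite is configured to skip cleanup. The toy model below only shows that registration shape; `testContext` here is an invented stand-in, not the Istio framework's real TestContext type.

```go
package main

import "fmt"

// testContext is a minimal stand-in for a framework-owned test context that
// collects teardown functions instead of handing them to testing.T.
type testContext struct{ cleanups []func() }

// Cleanup registers a teardown function with the context.
func (c *testContext) Cleanup(fn func()) { c.cleanups = append(c.cleanups, fn) }

// done unwinds registered cleanups in LIFO order, mirroring how deferred
// teardown usually runs; a real framework could also skip this entirely.
func (c *testContext) done() {
	for i := len(c.cleanups) - 1; i >= 0; i-- {
		c.cleanups[i]()
	}
}

func main() {
	ctx := &testContext{}
	ctx.Cleanup(func() { fmt.Println("delete namespace") }) // registered first, runs last
	ctx.Cleanup(func() { fmt.Println("remove config") })    // registered last, runs first
	// ... the test body would deploy workloads and verify behavior here ...
	ctx.done()
}
```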
F
So
then
my
internet,
but
yeah
the
other
ones,
are
probably
more
of
a
focal
point
here
anyway.
That's
all
that's
all
I
got
if
anybody
has
any
questions.
C
Are there links to good examples, from here or from some place? If not, it would be great to just add them here.
F
Yeah, I mean, we can definitely add that. Effectively, look at the networking tests (the pilot tests, in other words, under the integration folder). Steven Land has been kind of leading this effort, you know, minimizing the test times and reducing flakiness and all that stuff, so yeah, I can certainly add a link here.
H
This is good work. Are you following these on a dashboard somewhere? I know Mitch has created the dashboard, so if you're planning to make improvements, how are we following that, you know, test times and all of those?
F
There is nothing tracking this right now. For now, this is just an FYI to the team on how to write good tests. As far as I know, we're not tracking things like test time.
B
And, thanks for the reminder about sharing my screen. The flakes dashboard, yeah, the one you're mentioning: everyone who develops code should be paying attention to this, just to see if you have tests, or have written tests, that have turned out to be flaky. We're trying to hunt them down and reduce the number of flakes, and the hours wasted on flakes.
H
Yeah, thanks, Doug, for bringing it up. We're planning to go over a little deep dive on this dashboard; we didn't get the time in the working group leads meeting this week, so we are planning to do it the following week. Anybody who was planning to join, please do so. I think we can do a couple of improvements on the dashboard in terms of alerts and stuff, but I think this is a very good start from Mitch. A really good one.
B
Okay, the next thing on the agenda that I wanted to go over, just because it's, I guess, feature freeze day and I haven't talked to most of the people in this working group in a while, is the status of the features that we said we were going to try and track for 1.9. I've taken a stab at some of them in terms of green, red, and yellow, but I wanted to get feedback from the working group on some of these items.
B
I think with the tracing API there are still discussions, and given that today is the feature freeze, I think it's fair to say that's not happening for the 1.9 release. Out-of-mesh metadata: Peter, do you feel like this is right being green?
E
Yeah, so, sorry, I haven't finished the integration test. Yes, it has been pending for a while, but after 1.9, like, after today, I'm going to revive that integration test and make it work again, and after that I think it's fine to mark it green. I will make sure that when we branch, we have that integration test.
A
...like a feature that we need to go through the whole feature release phases, but I don't know whether alpha or beta makes sense for this. Anyway, if we feel like alpha is the right word, then yeah, I think it's fine.
B
Okay. And for the documentation, have we started to work on that? Should I mark that as yellow? Yeah, we can mark it, yes. Okay. Kuat, how do we feel about the Wasm beta promotion?
B
Okay, I'll mark it as yellow. Peter, you're on the hot seat again. Yeah, this is yellow as well, but I...
B
Any of them? All right, I'm going to go ahead and mark that as red, because I think that we're not going to get there. I have not done anything extra for multi-cluster scenarios. I think we talked about changing some of the metrics to be default and adding a bunch more tests, and I don't think that we're there yet, so I'm marking that as not done. Daniel and...
B
More digging there, Quentin. I...

B
Changes there... I don't think that that would be a significant code change. If so, maybe I could try and look at that today; I don't know if we've already made the cut or not, but I could probably turn that around in an hour or two.
B
Let's see what the day looks like. Kuat and Daniel, I know that there are several docs now floating around for extension distribution APIs. Do we feel like we'll have one ready to go by the 1.9 release?
C
Okay, right. So I think, even with almost a month more, I would still call it red. There is the alternative, which is not listed here: the work that Peter has in flight for extension distribution.
C
Neither have I. Does anyone from networking know how far along BTS is this time? I did not receive any new design doc or anything like that.
D
Yeah, I haven't, and I haven't attended the networking meetings. Oh well, I did attend one after the holidays, but we didn't discuss this, so I would say no updates, unless John knows more.
B
Okay. And then I know Caroline has been doing a bunch of work on the dashboards, so I put that as yellow, just because hopefully they make it into 1.9, but I'm not sure how strict we're going to be on whether or not they can be cherry-picked back. But there are a bunch of pending PRs related to Grafana dashboards that we should track.
B
Okay, thanks, everyone. I just wanted to make sure we had that. I know that there's an email going out with that status to everyone later this week, so I wanted to make sure we had it fully covered.
B
I believe at some point I may have asked someone to do some. So if I put your name, like, Mandarin, sorry to put you on the spot: I picked some tests that maybe you might be interested in helping to automate, so feel free to assign them to someone else, or let me know if that's not okay.
H
Sorry, I was just saying that, Doug, thanks for doing that. Can you also add the group name, so that I know which groups have done the job, and can nag the others which haven't?
B
Okay. Continuing on with some of the working group leads meeting agenda: I wanted everyone to take a look at the features on our official feature status page and see if we believe that we're missing features, or commenting about the wrong set of features, especially as it relates to observability and extension.
B
So this is the feature status page. As you can see, it's probably not totally inclusive, and we might disagree with some of the features as they're listed, and with their status.
B
For instance, logging: I don't see anything related to logging. And I don't know if you want to mention Wasm, for extensions, somewhere in this set.
B
And I'll follow up with you offline then. Yep, okay. Oh, and then the last thing that we were asked to do, and again, this can be done sort of offline, but I wanted to make sure everyone was aware of it: there's a draft of the year-long roadmap, in case we feel like, as a group, there are features missing that we think should be addressed this year.
B
So I wanted everyone to be aware, and let me actually share the roadmap screen with everyone, so you can see: there are only three extension features on the roadmap right now, and we might want to think about some additional observability features, etc. So please take a look at that.
H
So, Doug, the way this 2021 roadmap came along was: I picked up the work below from 2020, and all the working group leads have helped mark the ones which are done. The ones which were not done during 2020 were taken over to 2021, and then I compared that with what was presented in 1.9.
H
That is also now added to the end of this sheet. I was talking to Louis and Sven, just a preliminary discussion on this yesterday: we should also add some of the stability work, because all the TOC members mentioned that this year we want the product to be stable. So if we can add some of the features which can help with the stability, that would be great too.
B
Yeah, and so I know this could be hard to sort of digest right now for everyone, so please take a look over the next two weeks, add comments, jot down notes, etc., and we can come back and maybe take a closer look at the end of the month. Does that sound fair? Is that the right timeline, or do you need input before then?
B
When do we want the feedback to be sort of finalized for this roadmap? What is the...
H
Yeah, that's a good question. I would actually give the question back to you: when do you think you are comfortable giving the feedback to me? Because in the working group leads meeting we discussed that I will connect with each individual group's leads in an offline discussion, once you've discussed it in your working group meeting. So when do you think I can connect? Because for some of them, John has gone through this list offline.
B
So I don't have anything else on the agenda. Are there other things that people have run into, now that everyone's sort of back and looking at 1.9, that they want to discuss, or raise, or highlight?
C
So
so
one
one
thing,
and
then
this
is
for
kind
of
next
year's
agenda
right.
So
one
thing
I
would
like
to
put
there
is:
is
telemetry
for
for
middle
proxy
and
and
sort
of
we
like
we
produce
some
telemetry
with
with
middle
proxy,
which
just
becomes
a
gateway,
but
but
I
think
I
think
that
there
is
like
in
some
modes
of
deployment
we
we
could
do
a
lot
better
with
the
with
the
middle
proxy.
So
that's
that's
one
of
the
things
that
I
would
like
us
to
consider
for
for
2021.
B
Okay. On a related note, should we put anything about eBPF-generated telemetry, or anything along those lines?
H
I have one item, if there is nothing else. Yeah, so there was some discussion going on about updating the docs pages whose contents are stale. We may get some contractors, but I can't promise. What I have been asked is whether we can generate, from each working group for their area, a list of the pages which need updates; that would be great.
H
So
that
way
we
can
look
for
contractors
and
the
time
effort,
and
how
long
will
it
take?
If
you
can
generate
that
information
from
your
group,
that
will
be
great.
H
Yeah, and if there are pages which are not in the testing sheet... I know Jacob has done a great job this time by going through all the pages, but if there are pages you feel are not in the testing sheet and still need updates, let me know that as well. Because somebody also reached out that one of the architecture pages was still mentioning Pilot and other areas, Galley, and it should be replaced with istiod. I do not recall which page that was, but those will be helpful.