From YouTube: Policies and Telemetry WG Meeting 2020-03-11
C
Yeah, so I put this on the agenda right after the last one, so things have kind of changed since, but I thought I would leave it here to make sure everyone was aware. Basically, I was hoping that we could remove some of the features of Mixer to clean up some tech debt without investing too much effort into it. But in this working group we decided to just invest that effort, so we made Mixer support the new Istio SDS, and that's already been done.
C
Hopefully the change coming from Mixer got reviewed; I don't remember. If not, we can retroactively review. This is on master, of course. So basically what that means is just that Mixer will function exactly as before, except we now no longer need to support all the file-mount code that we had, since now it's using SDS. So it's basically as before. I was hoping we could just make it only support plaintext, but I didn't think I could convince anyone to do that.
C
So in the issue I drafted what I think we should put there. If anyone disagrees, has any other ideas, or sees metrics we're not capturing, they should say so. It shouldn't be anything too groundbreaking, just kind of moving panels around. So that makes sense.
J
Okay, maybe once you are ready we can do that. Also, it may not be connected with this thing, but I think there were some discussions on overlap between the environments group and the user experience group. We can take it offline, and I think it's a good idea to discuss with them.
D
...a service-centric API, where you can define how you want to extract this information from the paths, for example. So we want to be able to do the kind of thing the OpenAPI spec does, and the details are actually in an issue and also a doc. But we want to create a service-level API where we can hang these telemetry-specific things off, and then there are some other things which can also follow later. But right now we are going to focus on the telemetry-specific stuff.
H
So I think having a service-specific API, like we were saying, a service-centric API, makes sense. But I'm still not sure, when we lower this to Envoy configuration on the server side, where will this live? We don't have a spot there for it. Also, there is a user-facing API, which is missing, and there is an Envoy API, which might also be missing, right?
D
So we already have two mechanisms for conveying this information to the stats filter, and as part of this we will choose one. Which is also the reason why we should be able to prototype all this first: we'll take you and say, okay, here is your EnvoyFilter config, and that's it.
F
Yeah, all right. So yesterday I put up a PR to put an integration test around a regression that we had in telemetry v2. The gist of it is that it's reaching out to some external URL to actually hit, like, black-hole and pass-through capabilities. And so there was a discussion of, okay, should we use the sidecar scoping API to actually address this, so that we're not reaching out to an external URL and whatnot. And I just wanted to know what the resolution is. As far as, one: should telemetry integration tests be in their own kind of telemetry package? Because there was concern about adding telemetry testing to pilot.
F
So that's one, and then the other is: should we actually try to use other APIs within Istio as we're doing integration testing around telemetry? Because I'm assuming in the future we're going to continue having more of these kinds of use cases, and maybe that's presumptuous of me, but what are people's thoughts, just so I can incorporate this back into this integration test.
D
It depends on the fact that we're not letting everything pass through, and if you had added a service entry, it would not have hit the black hole. So that's the relying-on-implementation part, right? So it looks like we have to rely on implementation either way, and then the question is which implementation we would rather rely on. Or, which is closer to the subject matter, right? Is what we are already relying on closer to the subject matter, or is sidecar scoping closer to the subject matter?
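For context, the service-entry mechanism being discussed looks roughly like the sketch below: with a registry-only outbound policy, traffic to a host covered by an entry like this is routed normally, while traffic to unknown hosts goes to Envoy's black-hole cluster. The host, port, and resource name here are illustrative, not taken from the meeting.

```yaml
# Hypothetical ServiceEntry: registers an external host in the mesh registry
# so outbound calls to it are routed normally instead of being black-holed.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-httpbin   # illustrative name
spec:
  hosts:
  - httpbin.org            # illustrative external host
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
```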
C
The PR was really not about that. I don't care if you guys use networking features in your test; that's up to you guys, really. My gripe was purely from a test-and-release perspective: every test, especially ones that do full deployments of Istio, adds a lot of time and potential flakiness.
C
We already have a test doing exactly this same thing, except for the fact that we're not checking the metrics, so I would much prefer that we collapse these tests. Because the concern I have, and that I've been trying to prevent in other cases, is that every single time someone wants to test one little feature, they do a full test suite: a full install of Istio, you know, all these new pods.
C
And then we just do one request to test that one little feature, and so as we test more and more things, our test time just explodes. So we need to be testing more things in, like, the same setup, essentially. My other concern was just the external call: I don't think our tests should rely on external websites, as much as possible. Yeah.
H
Yeah, let's take these two things separately. So for me, if we have the concept of working groups which are in charge of smaller domains, by definition they should write more tests which test small things and little things as much as possible, rather than one big test that tests the entire Istio functionality, right? So I hear you, because I deal with it as well: there are so many tests, and now they're more flaky and they take time. But that's how it should be.
C
This is the integration test, right? If you want to do a unit test, absolutely, I think that's fine. But this is the integration test, and I don't think we need two different integration tests, because at the end of the day we're shipping one feature to users: you can send traffic, it will get black-holed, and you'll get metrics. It's one feature, and I don't see why we need to test it in two separate ways. I think that's part of the reason why things like this break: because it's so isolated. We're in networking...
C
...we know we want to test this new networking feature, but we never check the metrics, because that's the policies group; and then you guys don't have the test because you may not know about it, and, you know, vice versa. It causes all sorts of issues, and my main concern is not those issues, though I do think they're valid. My main concern is the test time and instability. So...
H
I hear you. So I think that, in the test hierarchy, what you're missing here is what I call functional tests. I mean, the policies and telemetry group still should run unit tests, and they should write some tests which rely on more real things but don't rely on all the networking, and then you need integration tests that can verify the end-to-end functionality. I think what we are asking right now is: is this functional test even required or not? I guess so.
D
So I think that if we can just run it somewhere and get an xDS dump of that, right, whatever the final xDS or whatever it is, then we don't care about pilot. But, like, upkeeping those tests is still an issue. The reason I say that is because we don't have an explicit way to denote a black hole, and I think Kuat has raised this several times before: there is no data-plane definition of a black hole.
D
That is why we are necessarily trying to test an implementation detail of what Pilot does, right? Like, for example, Traffic Director may choose to implement this in a completely different way; actually, I don't know what they do. But the 502 direct response may not be the way other things actually do it.
A
But that can be done; I think that's what someone else is saying. That can be done with a unit test, right? Test that pilot generates that config, separately test that config in an Envoy and see that the metrics behave, right? So there are two separate, isolated tests that are small, and then a third test, the integration test, that just joins them: let's see the whole thing working, right? Yeah.
H
Yeah, so I feel like, basically, I am now more on board with John here that basically there is no way around it. We need to have at least fewer integration tests which verify more and more of the functionality together, so that we do not have many tests that take a large amount of time. And without explicit contracts we can try to write unit tests, but they will catch fewer things, because when the API contracts break, you only catch them in integration tests, yeah.
A
So I mean, that's sort of the approach we took on the end-to-end tests that have now been killed. At some point it was the same sort of decision, where we said: spin up a cluster, do everything you need to do in it, and then look for the metrics. And yeah, I would be supportive of that. Plus, I want to kill the mixer integration test path; like, this test shouldn't live inside mixer anymore. So I support that as well. Okay.
D
Well, but we always need some networking, right? And pilot tests exercise lots of different kinds of networking, so the question is: do we move each and every one of our tests into those tests? Because it's the same thing, right, we always need the networking. And the tests that don't rely on the networking, for stats, for example, belong to istio-proxy, and those we already have.
D
No, yeah, that's not a consistent message at all. So either we say that, okay, the networking tests set up the framework and what we want to do, and then any other test, not just telemetry tests, can go in there and say: yeah, this is almost what I want; here I add this one extra rule and I test my thing. Otherwise it's going to be on this ad-hoc basis, right, one test...
A
Yeah, I mean, I am very much in favor. Just do it like we did the dashboards, right? And maybe this is a longer-term approach, but we need to have a unified control-plane integration testing framework with things attached to it, and we don't break it down by pilot or telemetry or whatever. And then we set up one mesh and we do things consistently across all the tests, so that we know we have a set of features that are configured and tested in the same way for everything.
A
So the tests now... I mean, this was a much bigger problem when tests weren't stable, but I believe we're now at a state where we have stable testing, right? Or we're much better than we used to be, at least. Yeah, it's fairly good at this point, and so I'm less worried about everything breaking and us not knowing why than I used to be, especially as the number of components has been reduced; the number of moving parts seems to have been minimized.
C
Yeah, so after thinking about it a bit more, I totally get it, and as I said, I think it's a very short-term thing. My biggest concern is that we have a bunch of tests spinning up a full install. If it's just, you know, spinning up a client and doing the curl request, I don't really mind that much; it's pretty quick. But doing the full install and teardown is pretty heavyweight, and how...
C
So how it works is: there's a bunch of test suites, which are based on the top-level folders, like security, pilot, mixer, or whatever. Each one of those runs in parallel on a PR, and the overhead for just running anything at all is eight minutes or so, to build all the docker images, set up the kind cluster, all that sort of stuff. All right, then, for each folder...
C
...adding a test is only ten more seconds, but each Istio install is another one to two minutes, so three of the Istio installs, that adds up. Okay, yeah, so specifically in this case: at the time it was written, this allow-any versus registry-only was an install-time setting, so we had to do a full Istio install. That's no longer true, so in the very short term, for this specific PR, we could just make it...
C
...you know, a single test that takes ten seconds, because we don't have to do a full install. And at that point I wouldn't be too concerned with duplicating it in mixer, as long as it's sharing the common logic and not reinventing everything. But I do agree that some longer-term decisions should be made, as we've discussed. Okay.
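The install-time setting being referred to maps to the mesh-config outbound traffic policy. A minimal sketch, assuming the standard MeshConfig field names, of the toggle a test could flip at runtime instead of doing a full reinstall:

```yaml
# Mesh-config fragment: REGISTRY_ONLY black-holes traffic to hosts not in
# the registry; ALLOW_ANY passes it through to the original destination.
meshConfig:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY   # flip to ALLOW_ANY for pass-through behavior
```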
D
So I think that if we can just adjust the registry-only setting, right, which does not require a full install, then the test does not have to move from its existing location, which is consistent with discoverability. And then, once we have a plan for what we are going to do with the overall tests, they can move there, and that way we satisfy both requirements for now.
D
So right now what I'm saying is that we only keep it in the mixer folder, where it runs with the other stuff, right? And then it doesn't need to do the full reinstall, because now we can change what you just described, that registry-only setting; we just change it from allow-any, right?
C
I mean, it's not just the exact same test, right? If we copy the code, one of them is going to be worse than the other, or probably they're both worse than the other one in some different ways, or else we need to have some common library, at which point we're investing a lot of effort into this for, like, no reason. I mean, the pilot test is an exact subset of the telemetry one, so yeah.
D
So until then, we can just say telemetry, and have separate assertions that test for the networking part explicitly first and then test for telemetry. I think that's probably a good thing for us to do anyway, right? If the underlying networking is not working, then failing just the telemetry assertions probably doesn't make sense. But...
C
The naming concern, which I don't think is a major deal, is that, you know, we have these folders that are called pilot and mixer and so on, and so it's weird to have a networking one and the other one. But that's just because we made the wrong decision on how to split things up; I don't think that needs to block us from doing the right thing just because we named folders incorrectly, right? Right.
H
Okay, so hang on, I want to summarize this, since we have some time. So it looks like the things that we are going to do are: keep the test under mixer; have two separate assertions, where it asserts both for network failures and then for telemetry failures; and then remove the tests from pilot. Does that sound good to everyone? Yeah.
A
So what we have is "what went well"; I think this was mine. And Jacob, I acknowledge your point that it puts more specificity in, but if you look back at the roadmap for 1.5, we actually hit almost all of the P0 items, and I think, given the craziness with this happening over the holidays and then everything else, that's actually kind of commendable.
A
So thanks to everyone for all the hard work there. There's something someone else put in: I think that the path towards v2 telemetry and the Envoy extensions with Wasm is really exciting to a large part of the external community, and it represents sort of several release cycles' worth of thought and effort. And so the fact that that's well received is, I think, something else that we should celebrate.
A
We had some long-standing issues and feature requests and things that we actually managed to address. They were minor, but they had been around for several of these cycles; I know Jay had pinged me about several of them, like, "hey, are we ever gonna do this?" So I think it was nice to finally get some of that done. And we actually do a pretty decent job, and I think that means me but also others, of answering questions, either through direct messaging or on the channel, so I think that's a good thing as well.
A
So I personally was not happy with my testing this cycle, and, you know, the next line about scrambling: all the bugs that Niraj and Jacob and others found with black hole and pass-through telemetry in v2, we just discovered those too late. I think we need to do better there; that could have gone better.
A
I think, J, you added this, and I think some of it's because we didn't tie the roadmap to issues, maybe, and to docs. So we need to do better about making it clear which proposals are tied to which issues and which items in the roadmap, and that's maybe something we need to look at for 1.6. J, did you want to talk more about this?
K
I mean, there's not really... it's just that sometimes I go back and I'm like, okay, what's coming and what's actually going to be in this release, and then I'm trying to track down what it's really going to be, and where the proposals are, and just kind of consolidating the info. Usually I can track it down; I just feel like maybe we can continue to make it more obvious. But I mean, it's not really bad; it's just a mild suggestion, yeah.
A
No, I think making it clear, especially, I mean, for the Kiali team and others, how things are changing, and providing enough runway to deal with that, is something we can improve. And then, I think we've talked about this sort of thing in different forums, but we made this decision a couple cycles ago to have a protocol-specific status field, and I think everyone, looking back, may decide that that wasn't the best possible choice. Is there anything more you want...
K
...to say there? I mean, on this one I was part of the proposal review too, so this isn't really about blame; this is more of just a comment. Actually having to go and figure out how to work this new field into Kiali for support, you end up having to sometimes do, you know, these big OR queries in Prometheus, right? Like...
K
...if the protocol is this, or if the protocol is that, now I have to use this code, and then I use that code. That's hard to do and make consumable. I mean, it would have been nicer in retrospect if that field we had added was tied to the protocol, right? So if the request protocol was HTTP, the response code would have the HTTP status in it, and if it was a gRPC protocol, it would have had the gRPC status in it.
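A minimal sketch of the kind of OR query being described, assuming the standard Istio v2 metric `istio_requests_total` and its `response_code` / `grpc_response_status` labels; the exact label set in any given install may differ:

```promql
# The error rate has to be assembled per protocol, because the status
# lives in a different label for HTTP versus gRPC.
sum(rate(istio_requests_total{request_protocol="http", response_code=~"5.."}[5m]))
or
sum(rate(istio_requests_total{request_protocol="grpc", grpc_response_status!="0"}[5m]))
```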
A
I think we need to focus more on getting... well, you know, we did a good job in the early days of mixer of setting up end-to-end tests, and I think we sort of rested on our laurels there and then didn't focus so much on testing across all the things we expect. So that's why I put that down earlier. Documentation efforts: someone plus-oned that; does anyone want to say anything about that?
D
Yeah, I think that one and the next one are probably, like, the same, but yeah. So, for example, writing documentation concurrently while feature development is going on, or even before, would be really good. Otherwise, and again, it's when we are testing things that there is the crunch to write documentation, and if you have to choose between fixing something or testing, and documentation, you tend to choose fixing and testing, and it just keeps putting more pressure on documentation towards the end.
H
John, I think the thing is this: if you have sequences of PRs doing foundational work that don't require docs, you get those in; but finally, for the outward-facing functionality, or when you think it's going to be feature complete, the reviewer can say: we're not merging this in until the corresponding doc PR is there, okay?
J
Sorry, by the way, in 1.6 release planning, while everyone was filling in the template, we did have, for the features, a question asking whether a doc is required. For those features it's an easy check: if there is a "yes" on it and the PR does not attach a document, then we should not pass it.
D
So we used to require, like, a release-note section in the PRs, but compliance with that was pretty bad. Basically, people were either filling in details too low-level to be useful for release notes, or not filling in anything at all. So then we abandoned that, and we said, okay, it's more that someone else needs to do that work. But what we are now suggesting here...
D
...is that not every PR needs it, but the PRs that tie the feature together need the doc. So release notes plus doc, I think that makes sense too. But if it's, like, one PR, enforcing it is easier and we can hold it up; we can't do that for ten PRs which are at different levels of implementation, and things like that.
H
Maybe this is already happening, but for some of the key functionality that affects a current user, meaning it's not a new feature but, like, a migration from v1 to v2: if we can include the early-access customers, like Carl, earlier, we would find a lot of issues, and that will be less traumatic for us than fixing them at the end. I think the release and testing group, or there's another working group which handles this; I don't know the name, right?
C
We got Carl to try out some new things in the testing release, and, yeah. So he stresses some things, but it requires making things easy to tell him about and easy to test. So hopefully, with the multiple control planes, that will be a little easier, because you can more easily throw in something that may be absolutely horrible and blow everything up without, you know, breaking your whole cluster.
C
Yeah, the only person that I've really seen that we have actively worked with on trying new features is Carl, so there's really only one. But it would be great to get... I don't know if the community does, like, a more formal early-access program, other than just publishing the, you know, alphas and betas and hoping people try them out. I mean, that's really the first step, right: making them available and potentially documenting how to use them and canary them safely. And then I don't know what, beyond that, we could do.
A
For certain features, the Kiali team might help us here, right? And they have in the past, validating that we haven't broken the displays in Kiali. Yeah, I don't know; I don't have any other good ideas right now.
D
Yeah, I mean, you know, you're talking about, like, a really formal early-access program, whereas the thing with Carl is more that he is part of the community, right? So that's slightly different. A formal program would mean that someone who is not as familiar or not as involved would also be able to pick it up, exactly.
J
The concern I have is that it's very hard to find people to test, right? It would be better if we identified some customers who are desperately looking for a feature which we are planning to release; those could be the best candidates to test those key features. Otherwise, like, honestly, I was new to 1.5, and arranging for different testing days just to cover P0 was very hard, because we don't get enough coverage; and then, specifically for the key features, I'm not sure if we can get that kind of...
H
Agreed, Shweta. That's why the way you do it is you formalize it, so it's like a carrot-and-stick approach: you incentivize customers to join the working group, and they feel like they're part of the community, where they will have access to new features, right? If you do it just as community releases, it's very difficult, yeah.
H
Yeah, I mean, it's pretty clear, right? Minor releases with regressions hurt vendors like us and the community alike. People don't want to try the zero-dot-zero releases of Istio anymore; they wait for point one, point two, point three, the patch versions, right. That's not good. And I don't know what else there is to add. Okay.
D
Yes, I mean, for anything that's major, yes. The only thing is that, for anyone who's actively involved, I think we most definitely need to have this bar, which is: if you want to do anything that's more than minor, then make sure that it's either formally documented, or at least documented within the PR if not in a prior design document.
D
The only thing that I'm worried about is getting new committers, right; like, someone does something and they may see this as an impediment. But I don't think new committers are our problem today. So, okay, I'm okay with saying that, yes, we are going to be strict: let's have a document that's approved or fairly far along.
C
I have to drop off every time, so I don't want to take too much time on this, but just a note: in the environments group, we are kind of starting to head down the path where, at installation time, we're not configuring Istio configs. Like, we got rid of MeshPolicy, for example, and now we're just recommending people apply the mesh policy just like they would any other CR. Telemetry, of course, has all these EnvoyFilters now.
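A sketch of what "apply it like any other CR" looks like for the mesh-policy example: in later Istio releases the install-time MeshPolicy was replaced by a user-applied PeerAuthentication resource. Resource names follow the public security API; treat this as illustrative, not something agreed in the meeting.

```yaml
# Mesh-wide strict mTLS applied by the user in the root namespace,
# rather than configured at install time.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```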