From YouTube: Technical Oversight Committee 2020/12/11
Description
Istio's Technical Oversight Committee for December 11th, 2020.
Topics:
- Top-level CODEOWNERS approvers
- 1.9 Roadmap for Test & Release Working Group
- Extension Providers (in context of Telemetry API)
- Updates on Security Working Group's 1.9 Roadmap
B
Just as a forum, right: no PR should have no reason, right? So, like any other PR, I don't think I would approve a PR that didn't have some reason. It just has to provide some rationale. I don't think it needs to be very much, but I don't think we should just pro forma approve it without that.
B
So we have informal policy, I would say, but we don't have formal policy, and it might be unnecessarily expensive to try and document it overly, but I'm certainly open to it.
C
B
Right, it's quite constrained actually: project admins have that superpower, and that is TOC members, right, Clint? Yeah.
B
So we have a policy for that; this is why I'm not overly invested. It just needs to have a rationale that we can decide on, and if it really is a better bus number for Asia, because we have to fix things and there are people around, it's probably fine. But we need some rationale.
A
Yeah, I think you are next for the Test and Release roadmap. You have the link now.
F
E
Good, yeah. So for 1.9, we'd like to focus on mitigating the Docker Hub image rate limiting that was recently added; it's causing some users issues. We'd like to further automate the release branch cutting. We started that in 1.8, so there's an automated process for branch cutting rather than spending a day or two; it's now a couple of PRs, and we'd like to work on improving that. Better highlight the health of Istio releases: this is more work on the definition of done.
E
Part of what's come out of here is a dashboard that Greg started in 1.8 to allow for review of the current health of a release: whether it's on track, where issues are, that sort of thing. Add validation of release notes: right now there are some areas where release notes fall through the cracks.
E
One example is a release note that doesn't have a category matching the template, and it's not clearly identified. So: going through and adding validation, and making it so that we get notified if release notes aren't caught, as well as trying to pick up some of the stuff that the istio.io linter will pick up when that goes through code review. Regression tests for CVEs.
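The category check described above can be sketched in a few lines. This is an illustrative sketch only: the field names (`kind`, `releaseNotes`) and the category set are assumptions for the example, not Istio's actual release-note schema.

```python
# Illustrative sketch of release-note category validation. Field names and
# the allowed category set are assumptions, not the real Istio schema.
ALLOWED_KINDS = {"feature", "improvement", "bug-fix", "security-fix"}

def validate_note(note: dict) -> list:
    """Return a list of validation errors for one parsed release note."""
    errors = []
    kind = note.get("kind")
    if kind not in ALLOWED_KINDS:
        # A note whose category doesn't match the template would otherwise
        # fall through the cracks, as described in the meeting.
        errors.append(f"unknown category {kind!r}")
    if not note.get("releaseNotes"):
        errors.append("missing releaseNotes text")
    return errors
```

A check like this could run in CI so unrecognized categories fail the build instead of silently dropping out of the generated notes.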
E
Currently we get regression tests from Envoy, but we don't add a lot of our own, and we would like to change that. Support for multi-document tests, which will improve our ability to do istio.io testing, and convert integration tests to use multi-cluster. Real quick, on that: what do multi-document tests mean? Like, tests that span multiple...?
E
Yeah, yep. So one example of this is on istio.io: a tutorial on how to use microservices in Kubernetes that builds on top of itself across multiple documents. Right now the testing infrastructure only allows you to specify one document, and we'd like to be able to build on top successively for things like that. And then, convert integration tests to use multi-cluster: the goal is to have all integration tests support multi-cluster in Istio, and we'll work on that. This is a great list.
D
C
G
H
Yeah, this really just didn't get discussed when we were looking at the test and release roadmap; it was only brought up in the last week, so I wouldn't expect to see it here. But the engi.istio.io site does need better support. We have discussed it in the test and release working group in the past; it's just something that needs a refresh.
E
Yeah, I agree. One thing, going along with the stability thing for istio.io: it might be worth it for us to spend a little bit of time and see if we can expand on what John had identified, some outliers for flaky tests. But yeah, that's not in here at the moment.
B
So, Mitch, some of the work that's being done to ensure proper declaration of the API surface would include linting tools, I believe, right?
B
Right. So, like the recent work that Nate did to declare the annotations: having a manifest of the annotations is a lintable artifact, right?
B
H
B
Like the proto version stuff, right, so that we don't generate backward-incompatible protobufs, delta-based, right. So I think that's right. While we have said some of that work is part of UX, I think the developer toolchain side of that is part of test and release, right?
H
I think that makes sense to me. You know, in my discussions with Nate (and Nate, you can weigh in here), we really hadn't discussed anything around linting. Everything had been around user-facing concerns, to my knowledge, and that's what UX has been considering. So if there is to be a linting component, yeah, I think test and release would be a great place for that.
B
E
A
So one thing I would say: I saw that Eric actually owns a p0. I think he's on vacation today; I'm not sure if he would have bandwidth for that, because he has a lot of work around upgrade. I just want to give you guys a heads up, so that might be a p1. We'll check with him on that.
E
See, we had this conversation in TOC a couple of weeks ago (a couple of months ago now, actually), where the conclusion was kind of that we didn't have a good way to measure what testing coverage is.
H
So test and release did invest quite a bit in measuring feature coverage back in, like, the 1.7 time frame, but we by and large did not, as a project, adhere to our own standards that we had set around feature coverage, which definitely limited the value of the investment there.
C
So my request, really, is a metric that can be surfaced to TOC, so that we have visibility into the current state and into changes of that state over time. I'm sort of trying to leave it up to you in terms of how that looks. But without visibility at TOC, I don't think we will make progress, right? If it's not being shown to TOC each week ("here's how we're doing"), or even just once ("what is our current state"), we're not going to make progress there. So: other folks on TOC?
B
No, I think we need it, right. I mean, I think we need a holistic understanding of functional coverage and where we have gaps.
B
H
Yeah, I mean, the dashboard was built and is in a mostly ready state; the tests simply aren't being labeled. As a project, we are not writing test plans the way that we agreed to back in 1.6, and so there's very little data in the dashboard today.
B
H
That's correct. That was also one of those things where we said, yes, we want to do it, and we did not assign who is responsible for doing it, and it did not get done.
E
Yes. I think initially, with the definition of done, we were looking at lines-of-code coverage and that kind of thing, standard metrics, and the conclusion we came to back then was that it was just very hard to measure. The conclusion was that lines-of-code coverage isn't effective, and it's very hard to measure whether something is fully tested without any verifiable metric like that; you just kind of have to assume that tests were written against APIs and that sort of thing. I'd be happy to look into whether we can come up with a better metric there.
C
I think it's not a sufficient metric; that doesn't mean it's not a useful metric, right? So if we know that we run all the tests and only 20% of our code is covered, I would like to know that, because that's horrible. Whereas if it's at eighty or ninety percent, that doesn't tell us we're actually doing a really good job; it just tells us that, okay, we don't have a huge hole there.
E
When I checked a couple of months ago, the number that I got back was 50% of lines covered in Istio, and...
C
But it's too hard; we can't cover the things we care about the most, which is, yeah, integration testing. Okay! Is that something we could look at fixing somewhere on the roadmap?
I
H
That would be useful. So would that be different from feature coverage, then? Or maybe we can take the details into a different conversation, so we're not holding this up.
H
C
B
And I would like to know what features are being tested or not tested, right. So we want both things, and we have struggled to get both. I think we struggled to get the code coverage for infrastructural reasons: it was just hard to do, and I think people have tried.
B
I think we're struggling to get the feature coverage for two reasons: one, the lack of a feedback loop, and two, procedural adherence to the goal. Yep.
H
C
Yeah, I agree, and I think there's a broader point there, which is, in general: if someone wants pressure from the TOC, or we want to push some sort of horizontal effort, and we don't have visibility into how that effort is going, it will fail. Whatever we do, if we want some horizontal effort, whether it's feature coverage or unit test coverage or whatever it is, even not test-related: unless you have visibility into how it's going, it's going to fail.
C
H
B
Yeah, so I think it's certainly reasonable, if you think one of those two things is more likely to succeed because you're further along in its implementation, to prioritize them appropriately. I think we would take a win in either dimension. Then, yes.
A
E
Yeah, the challenge that we had is that we couldn't come to an agreement, back when we talked last time, about what is worth measuring in Istio as far as testing. If we can come to an agreement on what's worth measuring, then we can come up with the metric and how to identify that. But we've got to come up with that agreement first.
F
A
Okay, that's great. So, just to capture the action item: I think the major action item is that we need an agreement on what's worth measuring and what should be put on the dashboard.
J
Yeah. So, in the process of defining a new API for telemetry, there came sort of a blocker: how do we define references to the backends the telemetry API would reference? The resolution within the working group was to follow the example set in the external authorization API changes: to model extension providers and put that into mesh config.
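The pattern under discussion (providers declared once in mesh config, then referenced elsewhere by name) can be sketched roughly like this. The shape follows the extension-provider idea from the external-authorization work, but the field names and the provider name here are illustrative, not the settled API:

```yaml
# Illustrative sketch only: a provider declared in mesh config, which
# policies and the proposed telemetry API could then reference by name.
meshConfig:
  extensionProviders:
  - name: my-ext-authz                 # hypothetical provider name
    envoyExtAuthzGrpc:
      service: ext-authz.foo.svc.cluster.local
      port: 9000
```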
J
So what I'm looking for here from the TOC is guidance on how we want to represent things like extension providers in Istio, and whether or not we should be moving and migrating all the existing examples and ways different providers are configured into one coherent policy, or what our plan is moving forward. And so I talked a little bit with the Environments working group and tried to move some of that debate into this RFC, which is really just trying to figure out what the requirements are and what we feel like...
J
...we should be doing. And I'm really looking for input from the rest of the community here, because this...
A
Is
this.
J
Well, I'm happy to have it go offline, but it's been offline and online, back and forth, in different working groups. I don't know the best way to resolve that.
K
It's not something that has any upgrade implication; it's only for a very narrow use case. I don't think we ever approved it as the official way to define all the extensions, or that we have a plan to move everything to it. It was just introduced at the last moment, you know, for a very narrow use case, so we don't put the stuff in the beta API.
K
Now
what
you
are
trying
to
do
has
massive
implications
for
upgrade
and
stability,
because
tracing
has
been
configured
in
a
particular
way
for
since
0.1
or
whatever
version
before
1.0
and
any
change
you
are
doing
has
massive
application
user,
because
users
who
set
tracing
zipkin
or
whatever
to
the
20
different
install
methods,
will
suddenly
migrate
to
a
new
way.
Austin.
C
K
I don't know where the requirements are coming from, and I'm not trying to create new requirements. I'm just saying the context for the pushback, and what Environments is trying to achieve, is to minimize the pain on users and to not make arbitrary changes that, you know, don't have a clear value.
K
B
K
So in Environments we have major discussions about who owns mesh config. Is it the mesh operator, like external to Istio or some managed service, or is it the mesh admin? Proxy config, for example, is clearly usually under the control of the user or the mesh admin, and we are discussing splitting proxy config out into a separate CRD, or, you know, config maps, so we clearly separate the roles, leaving mesh config under the control of the operator. Now, tracing: adding a new tracer is probably something that a mesh operator shouldn't care about and shouldn't be bothered with.
K
It is also a regression, because in the past the user had ways to control proxy tracing by using proxy config as an annotation, and it's moving something that should be, or was, under the control of the user to the operator. So it's the wrong way around.
J
L
I agree with what you're trying to do, Doug. You know, I've worked on Eclipse projects in the past, and generally the way they handle things that are meant to be extendable, or to have different features plugged into them, is to have a defined API for how to do that and how users will configure it. I think when we talk about upgrade and those kinds of things, it goes back to the conversation we had last week, or the week before, about in-place upgrades, right?
L
If we're changing something, we should be able to write code that migrates it, so that the user doesn't have to worry about it; for a period of time you support both as you move on. So I don't think we should be treating that kind of stuff as "this might be hard to do, so we shouldn't make a decision at all." And then, with respect to this stuff Costin was talking about with the tracing...
C
We need to figure out how to separate that out. I don't think calling one "proxy config" and saying that the mesh admin owns proxy config is the right option, but we need to discuss that, yeah.
L
C
Hold on. So I think there's a problem that we need to tackle, which is: mesh config is a mess. It needs to be cleaned up, right? I would like to not block progress from other working groups on Environments figuring that out. I think that's a problem we have a lot, which is, we say, "well, this is a huge mess right now, we don't know the solution, so please don't make any changes." That's not actually good for the project.
C
We
need
to
have
a
path
that
we
can
actually
continue
to
make
improvements,
while
we
figure
out
how
to
how
to
clean
this
problem
up.
I
think
in
this
case
right
we
should
actually
try
to
figure
out
the
right
model
for
defining
extensions
to
the
system
in
terms
of
like
services
that
are
plugged
in,
and
we
should
also
figure
out
the
right
way
to
do
telemetry
both
of
those
seem
like
they
are
owned
by
the
right,
the
the
telemetry
and
extensions
working
group.
C
So I think we do need to keep iterating on this doc, specifically, Doug. I don't think we're going to solve it in TOC, but I think we need to figure out the right group of people that can iterate on this, and have, you know, like a sub working group set up.
G
B
So, no progress, then. Mandar, the appropriate question for the TOC, I think, is: is the provider pattern a good one for extensions, regardless of where it actually lives? Relevant to Sven's question about mesh config's mess: do we think the provider pattern is the right pattern to be using for this type of thing?
B
So, in the sense that if you want to have an extension, you have a schema for it (you don't necessarily have to have a separate API for it), and you name it, and then the way that you reference it within the API model is by name, and name only. So we have a way to include or reference potentially complex behavior or implementation, and the way that we do it within our API model is by name.
B
K
B
Well, are they, actually? Not every provider is a service, right? Some providers are implementations: they're code inside istiod, they're Wasm modules, or they're even potentially code that somebody writes that, you know, becomes a dynamic extension inside istiod or one of those other runtime contexts.
C
In the interest of time, I wonder if we can not go too deep into this here. I do think we need to discuss whether we put this somewhere outside mesh config, because of the concerns about mesh config needing to be split. How about I volunteer to help Doug and Mandar, and then whoever else wants to get involved, and let's figure out a path. Anyone else from TOC want to opt in? I'm just trying to volunteer to help move this forward, and...
A
C
B
Well, I would like that to be a little... like, I want that requirement specifically to be clear, yeah. Because the fact that we have "mesh config", but what we meant was "the thing that was automatically actuated", is a disaster from a naming perspective, right? Because "mesh" creates the perception that that was for the admin to use as a name, and I'd almost rather preserve the name and the expectation, and then change the behavior, right. We're going to be careful with that. Okay.
G
And one thing we should all remember is that the Mixer configuration model had absolutely nothing in mesh config. Everything was in the API, and everything was configurable to the nth degree, right? Is that something you're advocating? A question, I guess.
C
Yeah, I'm just saying: Mandar, why don't you pull me in. Mandar, Doug, pull me in; I can represent the TOC here, unless anyone else wants to represent themselves. And let's have whatever meetings we have to have. All right, send us a meeting; I can clear space in my calendar.
B
G
Also, one last point: the three personas are already codified somewhere, right? The control plane admin, the mesh admin, and the third thing; I'm not sure I have heard that third one codified. So that is a separate thing, but it's a big input into this discussion. That's why I'm mentioning it here.
C
Yeah, I think we have not done a good enough job there, and especially we haven't codified how those map to our API model, which doesn't reflect that ownership very well today. So yeah, we do need to work on that, and again, I'm...
L
...set of things going forward, yes. Like, some of the requirements there seem like they're specific to, as someone mentioned, things that have an endpoint; other things might just be internal code, those kinds of things. Because I could see this applying to things like, you know, certificate authority providers, and other areas where Istio itself is offloading that functionality to some third party. So I just want to ask if you could clarify that.
K
B
Laura, there's an admin-authored artifact, right, that wants to reference, like, an external CA that we have some integration with via code, right? That would follow that pattern, right. The pattern in authz is: there's a reference to an external implementation declared in the API, but the implementation itself is provided by something in the runtime, or an extension to the runtime.
L
And so the reason I bring this up is, like I said, some of the requirements seem like they're particular to a certain type of plug-in. But I think (and I'll add my comments to the document here) for things like you were just talking about, that would also imply some kind of API contract for that area of functionality, so that somebody providing a plug-in is either implementing a provider API, or working with a backend, or both.
L
So I think that needs to be sort of spelled out as part of this: that, you know, as things are deemed to be areas that folks can put their own functionality into, there would be some sort of API contracts around those areas. And then, ideally, the functionality that's already being provided in Istio would just implement that API, as a simple test, like...
L
B
Right. The only thing I don't want to imply right now is that there's a well-defined, in-runtime API contract for the behavior, all right? Let's say somebody built a custom build of istiod, right, and they added some code and they wanted to trigger it, but there was no well-defined API inside istiod for doing the customization that they had; they just wanted to make it configurable whether the feature that that thing represented was enabled or not. Like the Consul integration that we had, you know, maybe done right, would be a good example.
B
K
And we should keep it scoped to external services, and maybe we can revisit later if we need to refactor this to use this model or not. But I think it's better to not blow up the scope, unless Sven wants to broaden it, of course.
B
Yeah, yeah, I think it's... let me be clear about the pattern in the notes.
D
L
Sorry, I wasn't implying that. I was more saying, like: as we identify areas where there would be plug-ins, the additional work that would need to be done would be coordinated off "here's the provider API, here's what an example looks like", because istiod was doing it already, and that gives people, you know, an example and a good jumping-off point. And, like I said, my particular interest would be, you know, certificate and key management, in addition to the metrics.
K
L
C
Let's take this...
A
F
A
So let's take offline what time we meet for the actual meeting. So, Oliver, Limin, are you guys on? Maybe we can review, yeah.
N
Yeah, we made some modifications to this plan, taking the feedback from the TOC, and this is just to very quickly run through those new changes. The doc is going to be mainly focused on improvements of existing features; as for new features, we do have very few, but they are very important, so we can't remove them. The first one:
N
Improvements on the existing certificate provisioning flow. This includes two things: making the SDS configuration into a first-class API, which means not using environment variables for the sidecars but potentially using something like mesh config, and resolving the SDS-related startup issues. I know there are some issues; some are under Envoy.
N
This is some effort that will take time, so I set it as p0; I will try to resolve as much as possible, and if not, we'll continue it next release as well. The second one: update istiod to align with the Kubernetes v1 CSR API in the DNS cert provisioning. Currently we are using the older API, but it's going to be deprecated around the end of Q1, I think.
N
We need to make the move to adopt their v1 API, so that's also a p0. Yeah. Limin, do you want to...?
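For reference, the certificates.k8s.io/v1 CSR shape being migrated to looks roughly like this; the main user-visible change from the older API is that `signerName` is required. The request value below is a placeholder, and the signer name is illustrative:

```yaml
# Sketch of a certificates.k8s.io/v1 CertificateSigningRequest; v1 makes
# signerName mandatory, which is the main migration point mentioned above.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr
spec:
  request: <base64-encoded PKCS#10 CSR>    # placeholder
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```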
O
Yeah, yeah. So for improvement of security policy we have two p0's. The first one is the one that Yangmin has already worked on for some time: supporting third-party authorization engines through the authorization policy CUSTOM action. I think the design is already settled, and the implementation is at least in progress, if not completed. The second one is making migration from alpha security policy to beta security policy easier, with better documentation or a conversion tool.
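The CUSTOM-action policy mentioned here delegates matching requests to a third-party authorization engine declared as a provider. A sketch, where the provider name and paths are illustrative and the provider is assumed to be declared in mesh config:

```yaml
# Sketch: an AuthorizationPolicy with action CUSTOM delegating matching
# requests to an external engine. The provider name must match one
# declared in mesh config; names and paths here are illustrative.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
  namespace: foo
spec:
  action: CUSTOM
  provider:
    name: my-ext-authz
  rules:
  - to:
    - operation:
        paths: ["/admin/*"]
```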
O
We have seen quite a few customer issues where they are still stuck on alpha security policy in Istio 1.8. In 1.6 we actually deprecated all the alpha security policies, but we don't want to leave a lot of users behind just because they are...
O
B
O
Yeah, I think the current data we got is from the Google customers. I don't know about the other vendors; does anyone have any input? Should we prioritize this, or do you think this is not very important?
B
A
We haven't heard a lot of complaints. We actually have a lot of users using authentication policies; most of the time we were just sending them docs, and people seem to be happy. There's definitely desire from us to promote that beyond beta, if it's stable enough. In fact, the number one feature our customers use is mutual TLS, so we have a lot of teams actually adopting Istio for that reason, and I think it would be really cool if we can graduate out of beta.
O
Yeah, right now it is beta; we were actually thinking of graduating from beta to GA. I actually previously had an item saying graduate all the features to GA; it's just that we don't know if we have enough time to work on that. But from your experience, you think customers do not have problems migrating from alpha to beta, right? So, we started with the beta policy from 1.6; actually, from 1.6 all the alpha policies are deprecated.
A
Yeah, I think we no longer support 1.5, so we only support 1.6 and 1.7 now (1.8 just made it to IBM Cloud), so we're probably going to deprecate 1.6 really soon, but...
A
O
Okay, so, Louis, do you think we should keep it p0 or p1?
B
O
Yeah, okay, yeah, sounds good, so let's keep it p0. The other things are actually left over from the previous release. One is to refactor the authentication filter and migrate it to upstream; I think Lisa is driving that effort. And also to improve the troubleshooting tool, istioctl: we previously had an istioctl auth tool, and because we switched to beta policy the tool stopped working. So we want to also recover that tool and make it better.
N
Yeah, I'm picking it up. The next one is supporting adding custom roots to the workloads, to support CA migration and federation. This is a p1; I think we already...
N
Yes. When this is done, rotation will be very easy, right, but this is a prerequisite for that. Yeah, it's a joint effort with the networking working group; the design is kind of worked out with them on the enhanced transport security, yeah.
D
O
N
That's important feedback, so it's approved, yeah. Happy? Okay.