From YouTube: Technical Oversight Committee 2021/12/06
Description
Istio's Technical Oversight Committee for December 6th, 2021.
Topics:
- Networking WG Roadmap
- Environments WG Roadmap
C
Yeah, I guess I just want to say, for the 1.13 release manager: John and myself have been working with Elizabeth on her first PR. We do expect it to be merged pretty soon, so I think she'll be qualified to be a release manager.
C
No,
no,
I
was
just
it's.
It's
probably
good
to
ask
other
volunteer
for
the
113
release
man,
because
she's
new,
I
think,
she'll
be
qualified,
but
it's
good
to
have
experience
manager
work
with
her
on
the
release.
C
Yeah,
I
don't
know
you
guys
said:
google
had
somebody
112,
but
there
were
enough
people
on
one
child
release
manager.
D
You know, not yet. There's supposed to be a schedule, actually, so people could sign up several releases in advance, so we didn't.
F
Or do you want me to present, or do you want to go and present?
F
Take it away, Steve. The roadmap is relatively small. I didn't really get a chance to go triage and just, like, list critical bugs here; I'm not really sure if they were appropriate to put in here. But the main things I want to get done: we're still waiting for authz from the gRPC side, and it's a pretty critical feature to be able to actually call proxyless gRPC alpha. DNS proxying:
F
We added a few tests in the last release, and the last release also gave the couple of features we had for allowing cross-cluster headless services and StatefulSets for multi-cluster time to marinate. We had, like, three or four different users reach out asking whether or not that was possible, and, yeah, they just turned on the feature flag and they seem to have had success with it. So I think it should be ready for beta. The WorkloadGroup CRD doesn't really seem to have any new things that we really want to put into it.
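The "feature flag" mentioned here is a pilot environment variable; a minimal sketch of turning it on, assuming the flag name is `ENABLE_MULTICLUSTER_HEADLESS` (a recollection, so check the pilot feature flags for your Istio version):

```yaml
# IstioOperator overlay enabling cross-cluster headless-service support.
# ENABLE_MULTICLUSTER_HEADLESS is an assumed flag name; verify against
# your release before using.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        - name: ENABLE_MULTICLUSTER_HEADLESS
          value: "true"
```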
F
One thing that's kind of been in the backlog that I haven't really prioritized: AKS, or sorry, EKS, and AWS tend to use hostnames for their external load balancers, and there are a ton of multi-network users who have reached out and are using some pretty weird hacks to support this. So I want to add first-class support to Istio. There are several options in the doc for how to do that, and I just want to get any basic version of it working, because it would be better than the current state.
F
One
of
the
biggest
things
for
the
release
will
be
actually
implementing.
H-Bone
dynastio,
you
chan
said
that
it
is
a
work
in
progress
with
an
envoy
and
should
be
ready
for
a
sexually
programmed
control
plane
relatively
soon
last
release
we
merged
code
to
raise.
That
is
the
charge
to
get
that
into
like
experimental.
Yes,
yes,.
F
We have code that handles using EndpointSlices instead of Endpoints for Istio's main service discovery, our endpoint discovery. So this will just switch Istio to automatically read those instead of Endpoints if using Kubernetes 1.21 or greater. We actually already have the code to do this, but we wanted to give it time to be tested, and it was merged pretty late in the cycle for 1.12.
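While it bakes, the behavior can be pinned either way with a pilot flag; a sketch, assuming the flag is `PILOT_USE_ENDPOINT_SLICE` (an assumption; by default istiod would choose based on the Kubernetes version):

```yaml
# IstioOperator overlay forcing istiod onto EndpointSlice-based
# discovery. PILOT_USE_ENDPOINT_SLICE is a recalled flag name and may
# not match your release.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        - name: PILOT_USE_ENDPOINT_SLICE
          value: "true"
```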
F
You know, this came out of conversations with Costin. I mean, it could be argued that it's ready now: right now we have mTLS support and a few different load-balancing policies, that's all. So it's... it's a bit.
I
So, first of all, we don't have to wait, because it's ready; we just need to test it. And it's very hard... I mean, you have mTLS, you have encryption, but you don't have any authorization, so you don't get too much security. It's a bit weird.
I
You know, it was supposed to be the previous release, but it just slipped a bit, and for 1.13 we have plenty of time to just use it, and use our own API instead of forcing users to authenticate themselves. I mean, it's so close that it's... it's kind of.
I
According to gRPC, I mean, the traffic side is ready, so we just need to verify that it works for us as well. Okay.
F
Yeah, I've had a handful of people reach out to me on Slack asking about it, and several have tried out the demo and are just asking when we will add more support for things like authz. Got it, awesome.
F
There's a slight issue where some of the changes that we made to support the latest gRPC version break older gRPC clients; their xDS implementations seem to have kind of switched over slightly in one area. So there's a chance we need to conditionally send different config based on the version, but I'm trying to work with the gRPC team to figure out if that's actually the case.
E
Awesome, good work, Steven. I'm glad that folks are using it and giving feedback.
C
I think the feature-promotion items you had on your list, Stephen, make a lot of sense. I do have a question on the hostname-based load balancer for multi-network; I think that's also a very important gap we have. I was just wondering, as far as your bandwidth goes, with the many things on the list: what's your take on actually landing the hostname-based load balancer in 1.13?
C
I see. So which option did you land on? Because I actually looked at the doc this morning, and I didn't recall you specifying which option you landed on.
F
The
the
version
before
that
I
have
like
partially
implemented
is
like
resolving
in
the
control
plane,
which
isn't
great,
but
it's
you
know
essentially
what
users
are
doing
today
with
their
own
custom
tools.
But
I
would
like
to
experiment
with
trying
to
just
like
switch
clusters
over
to
strict
dns
and
then,
if
that
doesn't
work,
try
doing
something
with
aggregate
cluster.
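The strict-DNS idea amounts to handing the proxy an Envoy cluster that re-resolves the load balancer's hostname itself, instead of istiod resolving it. A sketch of what that cluster could look like; the cluster name and ELB hostname are illustrative:

```yaml
# Illustrative Envoy cluster for a cross-network gateway sitting behind
# an AWS ELB hostname. With STRICT_DNS, Envoy periodically re-resolves
# the DNS name and updates its endpoints.
name: outbound|15443||cross-network-gateway
type: STRICT_DNS
connect_timeout: 10s
lb_policy: ROUND_ROBIN
load_assignment:
  cluster_name: outbound|15443||cross-network-gateway
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: a1b2c3.elb.us-west-2.amazonaws.com
            port_value: 15443
```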
F
There was some debate about it last time, but I don't know... I guess there are only two items on the agenda for the TOC, so maybe we do have time to talk about it. It sounded like that would require us actually deserializing the EDS responses and then re-serializing them, which didn't sound super optimal, but, yeah, it is possible.
A
Are there other things that we're sort of intentionally not doing? Do you have a list of, you know...
F
For this release: get delta CDS pretty much working, and then make sure the other xDS types are implemented the same way. It's pretty much the item from 1.12; there was just no work on it. Yeah, so delta CDS should actually be an improvement over the traditional path, and right now the other types, I think, are slightly worse than, like, state-of-the-world xDS.
I
Less CPU, more, you know, proxies per xDS server per CPU, less memory, less network traffic.
I
A bit, it does. I mean, you get fewer large objects, since there's a large allocation every time there is a push, so there is a win there, yeah.
G
What we've actually seen, in terms of which metric changes, is the amount of network traffic; that is a far bigger indicator. I mean, that metric changes far more than CPU on the control plane, or even on Envoy.
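For context, delta xDS could be opted into mesh-wide through proxy metadata; a sketch, assuming the gating flag is `ISTIO_DELTA_XDS` (experimental at the time of this meeting, and the name may vary by release):

```yaml
# IstioOperator overlay asking proxies to request delta (incremental)
# xDS instead of state-of-the-world pushes. Flag name is an assumption.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_DELTA_XDS: "true"
```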
I
And also, for stability, it's a very good thing to keep the current code as stable as possible and keep any kind of experimental stuff in the h1 path. It may slow down h1 a bit, but people will benefit from the stability of the core existing, deprecated, legacy code. Yeah, okay.
D
All
right
seems
like
we
don't
have
any
other
questions.
Are
there
other
things
that
you
were
curious
about
sven
that
were
not
on
the
room.
A
I didn't know what to be curious about; that's why I was asking what the things are that we are explicitly not doing. Like, this looks like a good list, but are there things that, if I saw them, I'd be like: oh no, that's a hard part?
I
There is one, which is also coupled with h1: support for StatefulSets with multi-network, which is simply impossible today.
G
I don't think so. I mean, as they, you know, make changes, we keep up with them, but there's nothing burning that's going on.
G
Yeah,
I
mean
I'm
sure,
as
we
get
feedback
and
usage
we'll
improve
on
it,
but
the
the
core
that's
out
there,
basically
just
waiting
for
people
to
try
it
out
and
tell
us
what's
what's
wrong
with
it.
I
guess.
G
I
think
it's
kind
of
blocked
by
the
api
itself
so
if
and
when
they
go
to
beta,
which
might
happen
in
1.13
time
frame.
Actually
so
maybe
we
should
track
it
here.
I
think
it
would
be
good
for
us
to
go
beta
at
the
same
time.
If
it
doesn't
work
out,
then
you
know
one
release
afterwards,
but
you
know
we
can't
go
beta
until
they
do
so.
G
Yeah, I don't know if that's... I mean, it may work short term, but I don't know if that's the right path for the long term, because it means that, in order to use egress in a sane way, you need to not have a Service defined, which means you lose a bunch of other things, like telemetry, et cetera.
G
Or, no, I mean, with UDP I don't think we have a way to get the original source of the traffic, like we do for TCP.
D
We have to decide, right? Like, we could do UDP routing at ingress, without mTLS to the back end. That would be a pretty incremental feature, right?
I
That we know how to do; I mean, it's not a problem. The problem is how to get the original source when you capture on the sidecar. Is that really the thing we don't have a solution for? Or we don't know how to do it with eBPF, or a different CNI where you create a second veth pair, and then you can get it from the second one. But, yeah, okay.
I
One
thing
that
is
actually
possible
is
to
to
have
white
box
white
box
mode
where
you
have
a
dedicated
udp
and
you
send
to
localhost
like
we
do
it
with
tcp
whitebox,
and
then
we
can
do
that
and
while
we
implement
the
encryption
with
mask,
but
that's
probably
what
3d
this
is
in
the
future.
E
For telcos and all, it has always come up, I mean, and then they want it everywhere.
D
Right
right,
I
mean
mask,
is
our
strategic
solution
for
this
with
h-bone
yep
and
we
can
do
udp
over
h-bone,
but
it
would
take
some
work
to
get
that
into
ongoing.
C
Yeah, but I want to add one more thing. A couple of weeks ago, do you guys recall, the HPE team was presenting to the Networking working group on the SPIRE agent integration, and throughout that discussion we kind of concluded that there's only one change we needed in Istio, which is to enable somebody else, which could be the SPIRE agent, as the SDS provider.
D
And I think it's a work in progress, right? Jimmy is working with the HPE folks. Is that right?
I
Whenever SPIRE is ready to land. It's not... nobody on our side is driving it; we're just helping them.
C
Okay, but the Istio configuration change: we're interested to see if, you know, we can help out on that. Who should I follow up with on that?
I
No, we reached a conclusion, well, at least I hope we reached it, that it's completely automatic. I mean, if you have the CNI plugin that creates the UDS socket, the agent will detect it and say: hey, I have this SDS socket created by the CNI, let's use it, don't do anything else. So the user will not have to configure anything, globally or anywhere: if they install SDS support in SPIRE, they use it; otherwise they don't.
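The automatic detection described here roughly amounts to the workload mounting the SPIRE-provided socket where the Istio agent looks for it. A sketch of the pod side, with all volume names and paths as assumptions (the SPIFFE CSI driver is one way to surface the UDS):

```yaml
# Illustrative pod fragment: mount the SPIRE agent's UDS into the
# sidecar so the Istio agent can detect it and use it as the SDS
# source instead of istiod's built-in CA.
spec:
  containers:
  - name: istio-proxy
    volumeMounts:
    - name: workload-socket
      mountPath: /run/secrets/workload-spiffe-uds
      readOnly: true
  volumes:
  - name: workload-socket
    csi:
      driver: csi.spiffe.io
```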
B
All right, okay, cool, let's just start on feature promotions. So, first thing: revision tags. We actually had the same exact item on our 1.12 roadmap, and we were going to kind of have a better story for Helm support, and the proposal kind of didn't make it anywhere. So we're going to try again for 1.13 to get this officially promoted to alpha. For Helm, we're looking at beta for 1.13; we just moved to, like, the official Helm repos and stuff.
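For reference, the revision-tags workflow being promoted looks roughly like this; the exact istioctl syntax differs between releases, so treat the commands as a sketch:

```shell
# Point a stable tag at a concrete control-plane revision, and label
# namespaces with the tag instead of the raw revision name.
istioctl tag set prod-stable --revision 1-12-1
kubectl label namespace default istio.io/rev=prod-stable

# Upgrading later means repointing the tag, not relabeling namespaces.
istioctl tag set prod-stable --revision 1-13-0 --overwrite
```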
B
So
hopefully
things
will
stabilize
a
little
bit
more
and
we're
looking
at
beta
there
for
multi-cluster
steven
has
been
working
a
lot
on
making
it
stable.
There
are
a
few
outstanding
items
with
respect
to
like
off-boarding
clusters
from
a
mesh
that
kind
of
need
better
testing
and
maybe
more
attention
on
the
ux
there.
So
that's
kind
of
the
remaining
work.
B
Mcs
nathan's
been
working
on
getting
that
to
alpha.
So
that's
the
target
for
113.
A
And
if
I
remember
multi-cluster,
that
was
just
for
primary
primary
right,
I
believe.
F
Right, right. One thing I did also want to add: the stable-promotion docs kind of list that we want to have solid performance numbers, and with multi-cluster one thing we don't have is control-plane performance numbers as you scale up the number of clusters, because, you know, it's going to be...
B
Okay, yeah, moving on to other things. So we have distroless, which John's been working on. I think the main items here are, like, making sure our existing tests, like the docs tests, work with distroless, but, yeah, I know a lot of work has already been done here, like on the test and release side too.
B
So
for
113
we
should
be
able
to
have
more
fields
in
the
proxy
config
api
because
we
only
lifted
like
two
really
uncontroversial
fields
for
112.
B
and
then
potentially
making
it
so
that
more
fields
are
like
live
reloadable
and
proxy
config.
So,
instead
of
relying
on
the
field
being
set
at
injection
time,
you
can
change
it
in
the
cr
and
then
through
proxy
config
discovery
service,
it'll
automatically
change
in
the
proxy,
so
we
have
mesh
config
cleanup.
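The CR being described landed as a dedicated resource; a sketch, assuming the 1.13-era `ProxyConfig` CRD under `networking.istio.io/v1beta1` (field names recalled from memory and may differ slightly in your release):

```yaml
# Per-workload proxy settings applied through a CR rather than being
# fixed at injection time.
apiVersion: networking.istio.io/v1beta1
kind: ProxyConfig
metadata:
  name: ratings-proxy-settings
  namespace: default
spec:
  selector:
    matchLabels:
      app: ratings
  concurrency: 2
  image:
    imageType: distroless
```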
A
So, for the mesh config stuff, I think it'd be really useful if we could publish some sort of longer-term roadmap on kind of what the pieces are that are going to move out, you know, and which working groups own them, and make sure that those end up on those working groups' roadmaps. Like, I think we have it kind of implicitly in a bunch of places, but it'd be good to actually write that up as a doc.
A
Yeah,
but
I
think
I
think
it'd
be
good
to
just
make
sure
you
know
we're
sort
of
publish
publicizing
that
at
the
toc
level.
So
people
know
when
to
expect
those
things
to
be
happening.
B
Okay, yeah: that's everything we had planned for 1.13. Are there any other questions or clarifications here?
J
So... "prefer": I think we're getting caught up on the word here. Whatever we recommend is one thing, but we had data that surprised us when we saw how much the in-cluster operator was used, and, yeah, I believe that was from two releases ago.
G
Backing out, we're just saying that it's not recommended, and I don't think our docs ever said it. Although I spoke to a number of people, and when I asked them why they're using it, they said the docs recommended it; and when I asked them to point to where it was recommended, there's, like, confusion on what the recommendation is.
I
But
strategy
right
now
is
to
not
deprecate
anything
that
we
launched.
Everything
that
we
launched
will
support
it
forever.
I
mean
we,
we
pay
for
our
mistakes
and
forever
we
pay,
but
helm
experience
is
so
much
better,
especially
with
with
the
remote
repository.
Basically,
you
don't
have
to
download
anything.
You
just
run
two
commands
and
heaviest
you're
running,
and
I
think
people
will
naturally
migrate
to
this
because
it's
you
know
simply
better
and
easier.
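The "two commands" flow being referenced is roughly the documented Helm install (repo URL and chart names as published in the Istio Helm docs; verify against your version):

```shell
# Add the official Istio Helm repo; nothing needs to be downloaded.
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

# In practice the base chart (CRDs) goes first, then istiod itself.
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
```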
A
There was a deprecation of Helm, though.
G
Yeah, once we have the new Gateway stuff as well, I think the install method will be less important, hopefully, because if you're just installing one deployment, Istiod, how you install it really doesn't matter. Like, people are perfectly fine installing other operators, and installing Istiod is just as hard as installing the Istio operator. So, I mean, it is important, but it hopefully is less important, especially compared to where we were, you know, a year and a half ago, when we had, like, 14 deployments. Yeah.
I
And one more thing I wanted to point out on this: the reason install was so important was because of all the options, image config, proxy config, because you had to reinstall to change anything. Now that we're moving proxy config to a CRD, and part of mesh config, it's more or less automatic, you know.
K
I totally see why we want users to move to Helm from the operator model. I just hope that we can encourage them to move with the carrot rather than the stick.
G
It's
already
been
that
way,
though,
we've
had
so
many
critical
bugs
in
the
operator
that
haven't
been
fixed
ever
and
like.
I
don't
think
that
it's
we
should
invest
effort
into
fixing
these
when
it's
not
the
path
forward.
Obviously
we
shouldn't
break
it
intentionally,
but
at
the
same
time,
like
there's
a
lot
of
work
that
would
be
needed
to
get
it
to
a
stable
quality.
I
And
the
best
way
to
to
not
destabilize,
it
is
to
not
touch
it.
Unless
there
is
a
cv,
it's
you
know
the
way
it
works,
it's
stable
it
work.
People
are
happy
with
it
more
or
less
the
more
you
touch
it.
The
more
you
break
it.
E
I will have to say, I mean, it's a bizarre place where we are, right? So istioctl is the preferred way, and that has been stable. Like, the other most stable thing is the in-cluster operator, which is beta, but it has bugs we don't fix and we don't want to support. And Helm has been lying around as experimental alpha for more than a year, right?
I
That was a mistake, but it was... again, yeah.
D
I guess the question for you is: are you asking, suggesting, that we try to do more?
G
I think one thing that is not very clear in this discussion is that the new Helm actually has a different API than what istioctl and the operator understand. So, basically, when you install, you have a much smaller set of features, because it takes into account that a lot of things are moving to proxy config, are moving to CRDs. It's a simplified gateway, a route; so it's a simpler API.
J
So
so
I
think
one
one
thing
which
which
looks
like
we
should
test
the
waters
on
right
is
that
even
though
operator
has
been
marked
as
beta,
we
are
clearly
not
paying
the
kind
of
attention
that
we
should
pay
to
a
beta
api,
at
which
point,
if
we
send
out
like
clear
notification
that
now
we
are
going
to
going
to
deprecate
it.
J
If
we
hear
a
lot
from
the
customer
saying
that
like
don't
do
it,
then
that's
a
good
signal.
However,
if
we
hear
from
the
customer
okay
deprecate
it
here
is
the
reason
why
and
in
favor
of
this,
and
if
the
community
is
okay
with
it,
then
and-
and
I
think
if
we
don't
do
anything,
it
looks
like
that's
where
we
are,
because
we're
not
really
fully
supporting
it
anyway.
J
D
Like, I think the point here, Amanda, is: no matter what we do, we have to hand-hold the user, right? It can't be "go and redo things by hand," right? They need some conversion path, right? They may be willing to move to Helm, but they're not willing to move to Helm without help, and that help had better be clear, right? Like, yep: "istioctl migrate-to," right? We can't deprecate without doing that.
G
Yeah,
I
don't
think
that
we
should
deprecate
it
because,
first
of
all,
deprecation,
I
think,
is
this
like
dirty
word,
that
scares
people
and
if
we
say
like
oh,
it's
deprecated,
but
we're
not
going
to
remove
it
for
two
years.
That
nuance
will
not
be
understood.
It's
almost
certain.
I
feel,
like
we've
done
so
much
churn
in
this
area
that
we've
kind
of
used
up
our
churn
budget
for
the
next,
maybe
two
years,
such
that
any
change.
Even
a
good
change
is
almost
not
not
good
right.
G
I would rather have... you know, if you're using the operator, fine, keep using the operator. It probably shouldn't be a new user that's deciding for the first time; if a new user picked the operator, then we did something wrong. But I don't think there's any desire to, you know, push people away from the operator who are already on it and cause more churn.
I
And
at
the
same
time,
we
don't
necessarily
want
to
improve
it
too
much.
I
mean
to
change
the
experience
to
make
improvements,
because
that
will
actually
create
danger
for
people
who
are
actually
using
it
currently,
and
so
we
will
have
to
make
changes
so
keep
it
stable
fix,
p0s
seems
like
the
safest
thing.
K
I think what I'd like to see is some investment on the testing side, so that, like, some of the recent regressions that we've seen are checked for, and we're sure that they're not happening again in future releases. That shouldn't involve any risk to the operator's stability for users.
G
The documentation, I think, is pretty good in 1.12. It basically says this feature is not good, don't use it if you're not already using it. Let me see... it says: "use of the operator for new installations is discouraged; the operator will continue to be supported; new features will not be prioritized."
I
One thing I mentioned a few times, and I'm not sure if it's landed: just like with releases, we need some people to, you know, do this work, if we believe it's important. It doesn't happen by magic, and we need, you know, the TOC or someone to find resources and assign them; it's not free. And if, for example, Neeraj believes the operator is important and customers are using it, then it is a priority to find some engineers to work on it.
D
Yeah,
that
is,
the
role
of
the
working
group,
leads
meaning
right
to
triage
and
force
trade
around
yeah
staffing
for
right,
so
yeah
I
mean
if
we
see
p0
bugs
in
the
operator
right
and
it's
not
deprecated
and
it's
a
beta
api,
then
we
have
to
fix
them
and
we
can
go
and
arm
wrestle
about
that
in
the
working
group
leads
meeting.
D
They can already use Helm, to a point, with basically the same configuration. So the question would be: do we provide tooling to help users migrate to Helm, or is it even worth it, right? Because there will be a codification of the other API options anyway, in proxy config and mesh config, and so this will all just wash out in the end.