Service APIs Bi-Weekly Meeting (EMEA Friendly Time) for 20201029
A: All right, welcome to the Service APIs meeting for October 29. We're recording, and thanks to everyone for showing up. We're getting awfully close to a v1alpha1 launch and, as you can see, a lot of our agenda today is very related to that. But let's get started with our v1alpha1 milestone.
A: Most of these are now well prioritized. Yesterday, thanks to, I think it was Danian's idea, we went through and actually prioritized any bugs that didn't already have a PR that could solve them, and anything that we've labeled as critical-urgent is something that we have said has to get in before the v1alpha1 cut.
A: Anything else is nice to have, but not required, and not a blocker on a v1alpha1 release. So I won't go through all of these bugs again, since they're pretty familiar and we covered most of them yesterday, but there are a couple of new ones I created in response to that pre-alpha review doc. I'd gone through it just to make sure we caught all the comments in there, and there were two that I think are significant.
A: This is probably an area that James is most familiar with, and I know this is not the best time for him, but there's a problem in the listener documentation right now, where ListenerStatus has a port to uniquely identify the listener or listeners it is referring to, and in the pre-alpha feedback we got this great point that that is potentially not enough. So I've added some potential solutions here. This does not seem like one that is particularly difficult to solve, but it could result in an API addition.
A: So, just throwing that out there. I'll wait to see what James thinks of this one as well, but while we're here, does anyone have any, you know, any preference on a way we proceed?
A: No, no. Sorry, what I mean is just an additional field, that's it. The confusion comes from the idea that Gateway listeners have an associated ListenerStatus, but the only thing to tie the two together is port, and we've been introducing ways where you could potentially specify more than one listener on a port. Right now the guidance is to just shove all the status into that one ListenerStatus thing for that port, and the feedback is, well...
A: Maybe we should be a little bit more specific in the matching than just that, and have more of a one-to-one relationship between ListenerStatus and listener in the case that we have multiple listeners on a single port.
A: Yeah, so I think there's, you know... you either add a name field to listeners to uniquely identify them, or you add at least protocol to ListenerStatus, so you can uniquely identify a listener by protocol and port. So I don't think this is complicated; we just need to get some consensus here, and James seems to be most familiar with these areas, so I'm interested in what he says as well.
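The two options above can be sketched roughly like this. This is a hypothetical illustration only; the dict keys and values are made up for the example and are not the final API shape.

```python
# Hypothetical sketch of the two ways to tie a ListenerStatus back to its
# Gateway listener when several listeners share a port. Field names here
# are illustrative, not the final API.

def status_key_by_name(listener):
    # Option 1: add a unique `name` field to each listener.
    return listener["name"]

def status_key_by_port_protocol(listener):
    # Option 2: add `protocol` to ListenerStatus, so the (port, protocol)
    # pair uniquely identifies a listener.
    return (listener["port"], listener["protocol"])

listeners = [
    {"name": "web", "port": 443, "protocol": "HTTPS"},
    {"name": "tls-passthrough", "port": 443, "protocol": "TLS"},
]

# Port alone collides; either proposed key distinguishes the two listeners.
assert len({l["port"] for l in listeners}) == 1
assert len({status_key_by_name(l) for l in listeners}) == 2
assert len({status_key_by_port_protocol(l) for l in listeners}) == 2
```

Either key restores a one-to-one mapping between listener and status entry; the open question in the discussion is only which field to add.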
A: The other one, I've already filed a PR for; it was kind of an umbrella issue I created just to, you know, capture all the tiny little bits of feedback we've gotten. I've got a PR that should address all of these at this point, and we can get into that in issue triage, PR triage, but there were just a bunch of small little things here that I wanted to make sure we covered. And that's it for new ones; I think that means this list is finally complete.
A: I hate to say complete, but this does feel like a list of everything we want to have done in time for v1alpha1, and I would say it's also likely that it's just the union of anything that's critical-urgent here or anything that already has a PR in for it, and the rest is nice, but not necessarily required.
A: So I think we're in a reasonably good place. I know I've said that before, but we really are making some great momentum, so thank you to everyone for the contributions here. Yesterday we also decided that it was worth launching a second release candidate, this time on Friday, and I think this is going to be the first time where we actually do a changelog.
A: We wanted to establish some kind of a firm deadline for a v1alpha1 release. As we've kind of covered here, we're getting awfully close to that release timeline; the number of issues we absolutely have to have in before then is pretty tiny, and, you know, we could keep on improving and perfecting this API, but at some point you just need to cut an alpha. With that in mind, I wanted to actually commit to a date where we would launch.
A
We
would
commit
to
having
a
v1
alpha,
1
release
cut
by
no
later
than
x,
and
so
just
so,
everyone
can
be
looking
at
the
same
dates
as
I
am.
I
just
have
this
really
simple
link
here.
This
is
the
month
of
november,
starting
on
sunday
for
us,
so
surely
it
will
happen
sometime
in
november,
I
yesterday
we
had
discussed.
You
know:
20
23rd.
A
I
hate
to
release
anything
during
the
week
of
thanksgiving
and
I
also
hate
to
release
anything
on
a
friday,
so
that
leads
me
to
think
one
of
these
dates
would
be
better
for
our
final
cut.
I
would
personally
be
good
with
18th
or
19th,
but
I'm
interested
in
feedback
from
the
community
on
this
one.
Does
that
feel
too
aggressive,
knowing
what
we
know
now
about
what
is
what
we
have
left.
B: I think if we boil it down, it's the two issues that are really important, and the others can be, you know, iterative additions. So yeah, I would be down for the 18th; even the 11th would be... I don't think it's too aggressive at this point, right, and it's always better to release early and iterate. Especially since, I think, we do have a bar of quality, even alpha quality, that the community, the Kubernetes community, expects.
A: I agree with that, and that's why I made a slight change to even what I had initially. I had called this a, you know, a release date that we would commit to, and I've changed that to a release deadline, in the sense that we will release no later than this date. But I really, really would like to hit sometime in this week; I think that's actually realistic. But I do want to have something that is...
A: I would argue no, but we have also said, and this is where we've gotten a little less clear, maybe I should run back through the milestone and actually prioritize all the other ones, all the issues we have PRs for, as critical-urgent. Because what we've said is we want every issue that has a PR to get closed, and every issue that is critical-urgent to get closed, but that's kind of a rolling deadline.
B: Yeah, I think I like that. Let's do that labeling so that, you know, we stay disciplined, because scope creep is very normal. Yeah.
A: So, yeah, I think that sounds good to me. Okay, great. With that said, I think it's reasonable to aim for the earlier of these dates, so I'm going to say November 18.
A
Okay,
I
I'm
I'm
fine
if,
if
anyone
wants
to
push
later,
but
unless
I
hear
from
anyone,
I'm
gonna
say
november
18.
A
All
right
cool
well
with
that
we've
got
lots
of
pr's
and
issues
to
get
through,
I'm
not
going
to
filter
just
on
pr's
for
my
for
v1
alpha
1
milestone
this
time,
because
we
did
that
yesterday
and
we
missed
some
relatively
significant
pr's.
Let
me
go
through.
I
know
harry
had
raised
some
issues
that
we
should
cover.
First.
A: I think yesterday there was some relative consensus around maybe BackendPolicy as a source here, but Harry, you've also added a suggestion that we could just not have structured annotations and set this on another resource instead. Yeah, maybe give us... what are you thinking here, Harry?
B
Yeah,
so
I
think
what
I'm
like
yeah
structured
annotations,
can
be
a
little
painful,
although
my
inclination
is
the
structure
presented
here
is
not
too
complicated,
and
I
think
this
like
it,
it's
not
ideal,
but
I
think
it's
not
too
bad,
given
that
it's
a
stopgap
solution
and
second,
if
that,
if
it
is,
you
know
it
is
a
problem,
and
we
don't
want
to
do
that.
B
What
if
we
don't
have
a
structured
solution-
and
you
just
say,
annotation
key
and
the
value
is
http
https
whatever,
and
it
applies
in
it
that
takes
in
effect
whenever
a
gateway
is
talking
to
that
service
right,
no
matter
which
port
it
is
now.
It
obviously
does
have
limitations
right.
You
cannot
have
like
between
the
gateway
and
service.
B
You
cannot
have
traffic
flowing
on
different
protocols,
so
that
is
a
limitation,
although
it
does
not
show
up
in
most
use
cases,
the
in
most
use
cases,
the
gateway
and
the
backend
usually
talk
on.
You
know
a
single
board,
and
so
so
maybe
it's
not
a
too
big
of
a
limitation.
So
that's
what
I'm
thinking
right
like.
Can
we
simplify
and
keep
it
that
way.
A: Yeah, I get that, and obviously we are all familiar with, you know, Ingress, where there are annotations everywhere. With that said, this feels like maybe it's reasonable as an exception, because what we're primarily trying to do is provide...
A
You
know
some
kind
of
way
to
express
this
for
earlier
kubernetes
versions.
But
what
we're
talking
about
is
something
that
will
almost
be
deprecated
on
arrival,
in
the
sense
that
this
is
something
like
you
should
use
for:
kubernetes,
116
117
and
after
that,
it's
useless
kind
of
thing,
so
we're
we're
just
trying
to
provide
a
way
to
like
it
would
be
one
thing
if
a
field
didn't
already
exist
to
solve
this
problem,
but
we
there
is
a
field
and
it's
coming
so
I'd
hate
to
add
another
field.
That
feels
like
a
more
permanent
thing.
A
If
it's
just
going
to
overlap
and
confuse
between
the
app
protocol
field
on
service-
and
you
know
we
like-
if
we
added
an
equivalent
thing
here-
it
would
be
less
clear
what
should
be
chosen,
whereas
if
we
just
have
an
annotation
as
kind
of
a
temporary
stop
gap
until
more
users
on
are
on
kubernetes
118.
A: I would prefer to say no at this point, but that's a great point. Part of the issue here is we really don't understand, at least I don't understand, the potential non-Service backend use cases yet and what those might look like. You know, I've often thought of a storage bucket as an example of a non-Service backend, and that makes sense, but that is something where appProtocol doesn't seem to apply.
A
I
don't
know
like
it's
hard
for
me
to
think
of
a
common
use
case
for
a
custom
backend.
That
is
not
that
requires
app
protocol,
not
to
say
it
doesn't
exist.
I
just
don't
because
we
don't
have
a
concrete
use
case.
Yet
it's
hard.
C: Yeah, and since it's not standard on our side, like, we say we target Service, it would either have to be a standard, well-known resource that kind of everyone agrees on, in which case we can talk about an appProtocol on that thing, or it's something that is entirely custom, outside of this, in which case you can add a protocol there anyway.
B
Yeah
I
mean
if
you,
if
you're
thinking
of
just
solving
the
service
protocol
problem
with
an
annotation
and
backend
policy,
I
think
that's.
Okay,
the
the
only
other
concern
that
I
would
have
is
then
back-end
policy
resource
itself
will
only
have
a
selector
and
nothing
else
meaningful
inside
it
and
right,
but.
A: Yeah, maybe to start we call this annotation specifically "service app protocol," so it's very clear that this is tied to a Service and intended for that use case only. appProtocol wasn't...
A
Even
before
18
in
custom,
there
was
no
standard
annotation
that
I'm
aware
of
there
were
a
lot
of
custom
ones.
B: Yeah, so in that case we can keep this annotation. We don't need the structure if we have the BackendPolicy; we put this annotation on the BackendPolicy, and that's the guidance that we have for two versions of Kubernetes. But then essentially it's a deprecated annotation, sort of. Yeah. Any objections to that?
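A minimal sketch of the fallback behavior being agreed on here. The annotation key below is hypothetical (the group had not settled on a name); the idea is that a controller prefers the native appProtocol field where the cluster provides it (Kubernetes 1.18+) and falls back to the annotation on older versions.

```python
# Hypothetical stopgap: prefer the native `appProtocol` field, fall back
# to an annotation on older clusters. The annotation key is illustrative
# only, not an agreed-upon name.
FALLBACK_ANNOTATION = "service.example.com/app-protocol"

def resolve_app_protocol(service_port, annotations):
    # The native field wins when present, so the annotation is effectively
    # deprecated on arrival for clusters that have appProtocol.
    if service_port.get("appProtocol"):
        return service_port["appProtocol"]
    return annotations.get(FALLBACK_ANNOTATION)

# Older cluster: only the annotation is set.
assert resolve_app_protocol({"port": 443},
                            {FALLBACK_ANNOTATION: "https"}) == "https"
# Newer cluster: the field takes precedence over the annotation.
assert resolve_app_protocol({"port": 443, "appProtocol": "h2"},
                            {FALLBACK_ANNOTATION: "https"}) == "h2"
```

Because the field always takes precedence, guidance for the annotation can be dropped once users are past the Kubernetes versions that lack appProtocol.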
A: Cool, sounds good, thanks. All right, so that was the upstream protocol one. The default for hostname match: this is one of my PRs that fixes a couple of different issues.
A
This
has
this
has
been
approved
and
lgtm,
but
I
need
to
needed
to
rebase
it.
So
just
if
anyone
wants
to
take
another
look
at
this,
I
think
it's
really
straightforward
now,
so
I'm
not
going
to
spend
much
time
on
this
call.
I
know
we
have
plenty
of
other
pr's
to
get
to
the
next
one
clarify
what
happens
when
a
named
address
is
not
supported.
D: I guess any suggestion should do. Typically there's, like, an RFC that we reference; I mean, RFC 791 is pretty old, but it does talk about the requirements for an IPv4 address.
D: The textual representation, I believe, yes.
A: Yeah, so I spent some time on this. I actually read through these comments and wanted to dig through how we're defining IP addresses in Kubernetes upstream and, surprisingly, I could not find any references to RFCs or really much of any detail. It was almost assumed, you know: this is either an IPv4 or IPv6 address, and that was about as specific as I could find upstream. There was not a lot of additional detail around what an IP address was, or a great definition.
C: So this one seems pretty straightforward: we just need to find the right reference, because inet_aton accepts, like, "3" as an IP address, which is, I guess, okay; we just have to have a test case for it, because some controllers might find that strange. And then the other one is the zero-elision rules for IPv6, which I think are pretty standard but again need a...
B
Reference
I
see
in
that
case,
I
think
we
are
mentioning
rfc
5952.
Maybe
if
you
open
that
def
section
rob
that
would
open
up
there,
but
even
in
there
like,
I
think
we
can
just
mention
that's
for
ipv6
and
then
we
have
for
ipv4.
We
can
have
this
rfc.
I.
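As an aside, Python's ipaddress module happens to emit the RFC 5952 canonical text form (lowercase hex, leading zeros dropped, longest zero run elided), which makes the zero-elision rules under discussion easy to demonstrate:

```python
import ipaddress

# RFC 5952 canonical form: lowercase hex, leading zeros dropped, and the
# longest run of zero groups compressed to "::".
addr = ipaddress.ip_address("2001:0DB8:0000:0000:0000:0000:0000:0001")
assert addr.compressed == "2001:db8::1"

# Only one "::" is allowed; with two equal-length zero runs, the first
# run is the one that gets compressed.
addr = ipaddress.ip_address("2001:db8:0:0:1:0:0:1")
assert addr.compressed == "2001:db8::1:0:0:1"
```

A conformance test along these lines would pin down exactly which textual forms an implementation must accept and produce.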
A: Yeah, and I'm starting to wonder if this is kind of scope creep too, in the sense that this was Godoc that was already in there. Can we go back to the issue that this is trying to address, really quick?
D: All right, I'll delete that first sentence. The rest of the changes that the PR introduces: does everything else look reasonable?
A
Or
not,
yeah
that
it
made
sense
to
me.
I
I
need
to
I
looked
through
it
once
and
I
didn't
have
any
feedback
to
add,
which
meant
you
know
other
than
what
was
already
said
about
what
was
apparently
a
little
out
of
scope
here.
C: Yeah, this is a discussion that we had with some other folks as well. A lot of the clouds have a notion of reserving a name, and having the address be a resource is how this ended up here. Oh yeah, support is implementation-specific, but we're putting it in the API. Interesting. I think it should be extended, right?
A: Generally, wouldn't these have kind of globally unique names?
A: Okay, so this one I've already covered; this is the same PR that is approved and just needs another LGTM after the last rebase and the cleanup from pre-alpha review. Yeah, let's cover this one real quickly: this is a PR I got in last night.
A
There's
there's
a
lot
of
relatively
tiny
changes
here
I
was
hoping
it
wouldn't
be.
None
of
them
would
be
too
controversial
and
I
just
seemed
easier
than
a
pr
for
each
one.
The
one
of
tim's
comments
in
pr
pre-alpha
feedback
was
that
we
had
an
unbounded
list
and
you
can
never
have
an
unbounded
list
in
kubernetes,
so
I
just
added
an
upper
limit
of
100
to
the
number
of
gateways
that
could
be
stored
in
route
gateway
status.
A
Then
I
yeah.
Let
me
just
look
at
the
diff,
because
there's
not
that
much
in
here
there's
a
little
typo
fix
here,
resource
to
kind
to
match.
Our
latest
thing
there's
also
some
feedback
that
undefined
had
scary
connotations
depending
on
languages
that
you'd
worked
with
and
so
unspecified
seemed
to
be
a
safer
way
to
describe
this.
A
We
had
listed
a
filter
as
core,
but
then
it
could
be
used
in
a
forward
two
and
then
it
would
not
be
core
in
that
sense.
So,
oh
I
got
this
wrong.
Let
me
fix
this
myself.
This
should
be
hd
route
forward,
two
that
I
want
here,
but
yes,
basically
defining
the
two
different
levels
of
conformance
depending
on
where
it's
actually
used.
A
So
there's
a
there's
a
couple
things
here:
one
of
the
large
ones
is
there's
a
question
around
why
we
had
such
a
low
maximum
for
weight
and
the
the
answer
was
well.
It
seems
like
most,
everyone
can
get
enough
precision
out
of
you
know
on
a
scale
of
one
to
ten
thousand,
but
even
still,
there's
there's
no
reason
that
we
can't
have
a
higher
weight
and
we
looked
at
alternatives
and
they
often
had
higher
weights,
like
sometimes
the
max
of
in
32
as
an
example.
A
So
with
that
this
increases
that
to
1
billion
harry
correctly
pointed
out
that
not
every
implementation
is
going
to
support
arbitrary
weights
and
that's
a
great
point.
I
tried
to
clean
this
up
based
on
some
feedback
and
the
rest
of
this
thread.
So
basically
the
the
key
sentence
is
for
non-zero
values.
There
may
be
some
epsilon
from
the
exact
proportion
defined
here,
depending
on
the
precision
on
implementation
supports,
so
this
is
computed
by
weight
divided.
A
So
you
you
have
a
proportion
and
then
that
will
be
mapped
to
whatever
precision
that
implementation
can
actually
support.
Does
does
that
feel
like
a
reasonable
compromise
here
I
don't
know
harry
does.
Does
that
help
with
your
use
case,
or
would
you
rather
just
have
a
lower
weight
still.
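The epsilon wording above can be illustrated with a rough sketch (the per-mille granularity here is made up for the example): an implementation with limited precision rounds the requested proportion, so the realized split differs from the exact one by at most its precision step.

```python
# Requested split: weight / sum(weights). An implementation with limited
# precision (here: 1/1000 granularity) rounds that proportion, so the
# realized split can differ from the exact one by a small epsilon.
def realized_proportions(weights, granularity=1000):
    total = sum(weights)
    return [round(w / total * granularity) / granularity for w in weights]

weights = [1, 2]  # exact proportions: 1/3 and 2/3
realized = realized_proportions(weights)
assert realized == [0.333, 0.667]

# The epsilon between requested and realized proportion stays below the
# implementation's precision step.
exact = [w / sum(weights) for w in weights]
assert all(abs(e - r) < 1 / 1000 for e, r in zip(exact, realized))
```

This is why a very large maximum weight mostly buys dynamic range rather than realized precision: an implementation still maps the proportion onto whatever granularity it supports.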
B
I
think
yeah
I
mean
it's
more
about
like
for
non-zero
value.
There
might
be
some
exact
proportion,
okay,
yeah.
I
think
that's
that's!
That
could
be
a
little
confusing,
but
the
number
of
users
who
would
run
into
this
would
be
not
too
high,
like
it's
more
like.
Are
you
trying
to
do
like
a
weight
between
two
back
ends
of
one
and
then
one
billion
right
so
like
like?
A
Yeah,
this
really
does
feel
like
quite
an
edge
case-
I'm
not
particularly
tied
to
this
large
of
a
number.
I
I
do
admit
that
10
000
is
rather
arbitrary
and
potentially
too
low
and
just
a
quick
survey
here.
We
know
what
kong
supports.
Anyone
else
know
what
their
implementations
max
out
at
here,
if
or
the
the
precision
that
they
can
provide.
B
And
the
other
question
is,
you
know,
do
you
have
that
level
of
traffic
yeah
large
providers
like
google
could
have
that
but
like
if
you're
really
spilting
traffic
between,
like
you
know,
a
billion
requests,
it
gets
harder
to
test
it
so,
and
I
also
don't
know
why
kong
has
that
arbitrary
limit.
I
need
to
go
back
and
check.
C
But
isn't
it
won't
you
just
scale
it
anyways
into
your
range.
C: ...that we shouldn't go so high; maybe make it fit into an int32, especially when adding up multiple of them, because I know that some people had an interesting experience where they used the max value and then it ended up overflowing an int when they had multiple weights. So that's just one thing to consider if you're going to have a super high value.
C: That's just dynamic range, and some people may be encoding stuff in there. The reference we have is for scheduling priorities, where the number can be gigantic and they just allowed space.
C
I
think
scheduling
priority
is
like
a
n64
which
is
pretty
humongous.
Although
rob
we
should
just
double
check.
I
don't
think
a
billion
is
going
to
go,
for
it
is
64.
but
like
if
you
had
100
like
a
split
among
100
back
ends,
and
then
you
like
add
them
all
up
like,
for
example,
in
32,
seems
like
a
pretty
good
number,
and
then
we
divide
it
by
like
a
thousand
and
then,
if
you
add
them
all
up,
you
won't
your
math
won't
overflow.
That
might
be
reasonable.
A
Yeah
I'm
trying
to
find
where
we've,
where
we've
said
the
number
of
back
ends
that
we
are
the
number
of
forward
twos
that
we
support.
But
I
it
is
enough
that
if
everyone
specified
the
max
value
here,
I
think
we'd
overflow,
an
n32.
Oh.
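The overflow concern is easy to check with a quick sketch (the 100-backend figure and the divide-by-a-thousand cap follow the numbers floated above and are illustrative):

```python
# Summed per-backend weights must stay within the arithmetic range the
# implementation uses. With a max weight of one billion, three backends
# at the max already overflow a signed 32-bit total.
INT32_MAX = 2**31 - 1          # 2147483647
MAX_WEIGHT = 1_000_000_000

assert 2 * MAX_WEIGHT < INT32_MAX
assert 3 * MAX_WEIGHT > INT32_MAX

# The mitigation discussed: cap the weight so that a plausible number of
# backends (say, 100) cannot overflow int32 when summed.
capped = INT32_MAX // 1000     # roughly 2.1 million per weight
assert 100 * capped <= INT32_MAX
```

So a cap around int32-max divided by the expected backend count keeps the summed math safe while still leaving far more precision than a 10,000 maximum.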
C: I don't know; I wouldn't lose too much sleep on this, given most users won't try to peg it at the max. There are weird use cases, though, for example that thing I was alluding to earlier where people were trying to peg it at the max: they did that to get a specific behavior out of the system, and then they didn't realize that if you actually set it at the max, it would overflow.
A
Okay,
well
yeah
I'll
I'll-
think
about
this
a
little
bit
more
and
try
to
understand
what
a
reasonable
middle
ground
is
here.
This
does
feel
too
high,
and
this
does
feel
potentially
a
bit
too
low
and
I'll
come
back
and
update
this
pr
yeah.
We
we
are
over
time
here.
So
let
me
just
real
quickly
highlight
this
pr,
because
it's
worth
some
broader
community
feedback,
yeah
istio
and
is
already
using
the
gateway
resource
and
already
using
a
gw
short
name.
A
The
argument
had
been
well
okay,
if
we
at
least
use
different
short
names,
we
can
co-exist
relatively.
Happily.
Another
argument
is
well.
We
can
just
you
know.
We
can
never
be
completely
unique
here.
A: Yeah, my argument has been that we shouldn't intentionally overlap here more than we already are, so using a different short name, even if it's not quite as nice, might allow the two to coexist for a bit longer. But yeah, we just don't have enough time to discuss this on this call. I do want to at least raise attention to it, so if you have strong feelings about what we should use here, definitely weigh in...
A: ...on this PR. Yes, okay, cool. Well, we're over time here. I'll plan on filing a PR to update the changelog on Friday, and hopefully we can get a release candidate out then; we're getting awfully close to a release. So yeah, thanks, everyone, for the help. Have a great rest of your week, and we'll talk to you next time.