From YouTube: GitHub Quick Reviews
A: Hello everyone, welcome to a regularly scheduled API review where, once again, we have things marked as blocking (I'm at least partly to blame), and we have some other 6.0 issues, but we're not really expecting to get past the red today.
A: Let's jump right in, shall we? All right: general-purpose non-cryptographic hashing API for .NET, issue 24328. This is something we already approved.
A: So in the NonCryptographicHashAlgorithm base class we have this property, HashLengthInBytes, and then we have all the span-writing methods returning the number of bytes that they wrote, which is all well and good. We also have the template method pattern, so we have GetCurrentHashCore. Since that was returning an int that's supposed to match the property, I found myself checking it and throwing if it didn't match, which I decided was stupid, and I just made the method void.
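A sketch of the template-method shape being described. The class name matches the review; the member names and signatures here are approximations of the proposal, not an authoritative copy of what shipped.

```csharp
using System;

public abstract class NonCryptographicHashAlgorithm
{
    protected NonCryptographicHashAlgorithm(int hashLengthInBytes)
        => HashLengthInBytes = hashLengthInBytes;

    // Fixed output size, exposed as a property.
    public int HashLengthInBytes { get; }

    public abstract void Append(ReadOnlySpan<byte> source);

    // Span-writing methods report how many bytes they wrote.
    public int GetCurrentHash(Span<byte> destination)
    {
        if (destination.Length < HashLengthInBytes)
            throw new ArgumentException("Destination is too small.", nameof(destination));

        // The template method is void: the base class no longer has to
        // check a returned int against HashLengthInBytes and throw.
        GetCurrentHashCore(destination.Slice(0, HashLengthInBytes));
        return HashLengthInBytes;
    }

    protected abstract void GetCurrentHashCore(Span<byte> destination);
}
```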
A: Approved, cool. All right, and then the second piece. So we approved four algorithms: Crc32, Crc64, XxHash32 and XxHash64. The xxHash family is actually seeded hash algorithms, and instead of limiting our implementation to the zero seed, I propose that we simply default to the zero seed and allow specifying an optional seed; because we've seen usability concerns with default parameters in constructors.
A: We've simply seen that default parameters confuse novice C# developers with IntelliSense.
D: Don't you think, then, it's kind of backwards that the one-stop-shop statics are the ones that still have it, but the more complicated case, when you construct the objects, they don't have it?
D: Well, yeah, it depends on how you see this, but my expectation would have been that basically people who want to do hashing just use the static ones, right? So the simple case would be... I'm not sure "simple" is the best word here, because none of this seems very simple, but I assume it's like: you just take the type and you call .Hash on it.
A: That's what we have, right: this is the Hash method that takes the input and produces the output, and then these next three are just the spanified versions of that: the span input,
A: the Try, and the span-to-span. Certainly the span-to-span should be able to tolerate a default parameter. But if you think that the friendly array-to-array wants to be an overload just for friendliness purposes... really, the overload for the constructor was me trying to channel you and your usability email. So whatever you think the right answer is, I'll believe it.
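A hypothetical sketch of the two shapes being weighed for the seeded static hash. XxHash32 is the real type name from the review; these exact members and the placeholder body are illustrative only.

```csharp
using System;

public static class XxHash32Sketch
{
    // Friendly array-to-array entry point as a plain overload, so the
    // simplest call shows up cleanly for novice users in IntelliSense.
    public static byte[] Hash(byte[] source) => Hash(source, seed: 0);

    public static byte[] Hash(byte[] source, int seed)
    {
        byte[] destination = new byte[sizeof(uint)];
        Hash(source, destination, seed);
        return destination;
    }

    // The span-to-span worker can tolerate an optional seed parameter.
    public static int Hash(ReadOnlySpan<byte> source, Span<byte> destination, int seed = 0)
    {
        // ...compute the 4-byte xxHash32 of source with the given seed...
        return sizeof(uint);
    }
}
```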
D: Yeah, I think Christoph's suggestion was basically: for the most simple case, just make sure you have a method or parameter list that takes fewer than three arguments, and then for the more complicated cases you could say you only have one overload that takes optionals. Because that's what people are looking for when they get started; they get scared when something pops up that takes 12 arguments.
B: Okay. So this is a new API, so it'll be exciting to get a wider audience to take a look. Here in the issue proposal I've summarized the key points from a more extensive design doc that I've linked at the top, and in that design doc there's a lot more detail, like the goals, the motivations and some of the plans in terms of how we expect some of these implementations to work, as well as proofs of concept and those kinds of things.
B: But just to start, I will give a bit more of a brief overview of the background and motivation, then go over the APIs as currently proposed, and then I expect we'll get into a lot more discussion about specifics on this new API.
B: So the background is that we want to provide a set of abstractions for defining rate-limiting primitives, or resource-limiting primitives. This is to ensure that folks who want to do rate limits or concurrency limits will be able to speak the same language and use the same abstractions as exchange types, and we will also provide some default implementations that solve some of these concerns.
B: The two main types of limits that we want to enforce, or provide abstractions for, are rate limits and concurrency limits. A rate limit is like: I want to make sure my APIs are only called with, say, five requests per second, and any further processing beyond that we are just going to throttle. A concurrency limit is more like: I'm only allowed to process five requests at a time.
B: I may process more than five in a second if they end up being short requests, or I may process fewer if all of these requests take a long time. And we wanted to provide a single abstraction that covers both of these cases, because we find that in a lot of cases where this is being used, either in libraries or in other components, they want to take in a limiter, or an abstraction for a limiter, and query whether they can proceed with the operation.

B: The abstractions cover three types of rate limiters (fixed window, sliding window and token bucket), and we also want to ship a concurrency limiter. We keep track of the resources acquired via a ResourceLease struct, which allows us to track ownership of the acquired resources. For these limiters we also want to have two separate types of APIs; one is essentially doing a fast check.
B: I think we should probably take a look at the APIs a little bit. I'll just go over them briefly and then open up the floor, and we can definitely go into more details about these goals; I can open up the design doc as needed to elaborate on any of those points. But I think I want to go over the proposed APIs, at least the abstraction, really quickly.
B: Okay, so in the proposed API we start by creating a new namespace, and it'll be a separate package. Currently we're thinking System.Threading.ResourceLimits, but that is up for discussion. The key point is that we want to have this abstract class called ResourceLimiter, and on it we have three separate members: first, EstimatedCount, which provides an estimated count of the resources available.
B: This is mostly used for diagnostics, though in certain cases it could be used to actually make decisions; we expect it to be mostly a diagnostics thing. Then the Acquire method is the fast, synchronous attempt to acquire resources. You give it a requested count to indicate how many resources you want to obtain, and you get a ResourceLease back, which indicates whether the acquisition was successful or not; when it's successful, it also encapsulates the resources that the lease owns.
B: That is the struct you interact with for releasing the resources back when you're done. And then there's AcquireAsync, an async version of Acquire that takes a requested count as well as a cancellation token. When resources are available it'll return immediately, and when they are not, it will wait until those resources are available before returning.
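A hypothetical sketch of the abstraction as described so far in the meeting; the names and signatures are approximations of the proposal under review, not the shipped API.

```csharp
using System.Threading;
using System.Threading.Tasks;

public abstract class ResourceLimiter
{
    // Estimated count of resources currently available; mostly a
    // diagnostics aid, per the discussion.
    public abstract int EstimatedCount { get; }

    // Fast, synchronous attempt: the returned lease (the struct
    // described in the meeting) says whether acquisition succeeded.
    public abstract ResourceLease Acquire(int requestedCount);

    // Async version: completes immediately if resources are available,
    // otherwise waits until they are (or the token is canceled).
    public abstract ValueTask<ResourceLease> AcquireAsync(
        int requestedCount,
        CancellationToken cancellationToken = default);
}
```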
B: Yeah, in terms of the ResourceLease: it is a struct, a read-only struct. It has an IsAcquired to indicate whether the acquisition was successful or not, a Count that represents how many resources were obtained and are on the lease, and it has a State.
B: This is the more controversial one. The intention here is to represent additional metadata, such as retry-after values and error codes, that certain limiters can return, and there are discussions on whether it should be an object with different interfaces, or a property bag. So this one I think we need to discuss a little bit more.
B: There are also two private fields that facilitate the release semantics: a ResourceLimiter reference, and an Action that indicates what happens on disposal.
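A hypothetical sketch of the lease shape just described. The public member names come from the meeting; the constructor and exact field layout are illustrative.

```csharp
using System;

public readonly struct ResourceLease : IDisposable
{
    // Private plumbing for the release semantics described above: a
    // reference back to the owning limiter, and the on-dispose action.
    private readonly object? _limiter;
    private readonly Action<ResourceLease>? _onDisposed;

    public ResourceLease(bool isAcquired, int count, object? state,
                         object? limiter, Action<ResourceLease>? onDisposed)
    {
        IsAcquired = isAcquired;
        Count = count;
        State = state;
        _limiter = limiter;
        _onDisposed = onDisposed;
    }

    public bool IsAcquired { get; }  // did the acquisition succeed?
    public int Count { get; }        // how many resources were obtained
    public object? State { get; }    // limiter-specific metadata (the contested member)

    // Disposing the lease releases the resources back to the limiter.
    public void Dispose() => _onDisposed?.Invoke(this);
}
```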
B: So that's a very brief tour of the entire API, and I want to start opening the discussion up to everyone before we go further into detail on any of these. So, yeah.
F: I have some behavioral questions. So, Acquire: if I say acquire five, might that return immediately with a count of three, or will it only ever return five or zero? I'm not clear on the semantics.
B: Right. So currently, in the proposal and the implementations, it is all or nothing; there is no partial acquisition. But under alternative designs we have thought about partial acquisitions, as well as partial releases and what those could look like. It's just that we don't currently see any use cases for them, so we haven't added them yet.
B: Partial releases would entail additional functions on the struct, essentially a Release method where you pass in an amount, and what that does is decrement the Count on the resource lease. So you can release resources back to the limiter as you finish processing.
B: Yeah, so that would potentially make Count a little bit more difficult, and it might entail using something on State, but yeah.
F: This ties in with my previous comment about what the purpose of Count is at all. If Acquire and WaitAsync only ever give you back zero (which is IsAcquired equals false) or the actual amount you asked for, what is the situation where Count is important?
B: It is used by the OnDisposed delegate when the lease is disposed.
F: So it's an implementation detail for the provider of the ResourceLimiter and ResourceLease; it's not for user consumption, right?
B: The reason we decided to have an explicit Count that's public is we think it could be useful for users to know how many resources were obtained, to keep track of it. But I do see your point: if we expect this to be all or nothing, then Count is potentially redundant.
F: Yeah, there's a small concern about redundancy, and a larger concern about the immutability and copyability of ResourceLease, and what that means for Count when it's disposed. If a copy is made of it, what does the copy show up as after you dispose? If we did have partial release, what does that mean for copies of Count? What does that imply for the underlying implementation, and so on.
A: No, like: either it's a struct wrapping a class that is pooled, or whatever, or it's a struct that holds an id that is just a key into a dictionary that tracks the state. Basically, the state needs to be held elsewhere, and this struct just manages calling into that state. That's basically the pattern for disposable structs.
B: Would it be better to make the struct not read-only, but make most of the fields other than Count read-only?
G: And don't we have that attribute that we didn't do yet?
B: So that was covered in, I think, one of the other alternative designs, where we essentially make ResourceLease a class. And yes, it makes a lot of things simpler: you don't have to have a State, because you can just subclass it, and we can keep track of disposal; it's a lot easier. But the allocation on every successful acquisition, whether via Acquire or WaitAsync, was deemed to be too much for it to be a useful API.
A: That's basically because we've always had very loose ownership, and if you tie this in with some other state, did it get both put in a using scope and tied in with somebody else's using scope, or what? That's the general stance we have. There are probably exceptions, but that's the start.
C: There's a good rationale, which is that when you rely on only disposing once, you get into use-after-free issues. Now, these use-after-free issues may be manageable; maybe we don't care about them. I suspect we care about them to some extent, but they may not rise to, like, MSRC level. You're certainly going to put a burden on customers, though; some customers are definitely going to hit this.
D: Well, you kind of can, right? Basically all you need is some sort of tag in the state object that you also burn into the struct, and then before the struct is allowed to use it, it just checks that it's the same tag. And as soon as you return it to the pool, you would have some sort of interlocked operation that bumps that tag.
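An illustrative sketch of the tag (generation) trick just described: a pooled state object carries a version that is burned into each lease struct, and returning the state to the pool bumps the version so stale copies are rejected. All names here are hypothetical.

```csharp
using System.Threading;

internal sealed class PooledLeaseState
{
    private long _version;

    public long CurrentVersion => Interlocked.Read(ref _version);

    // Called when the state is returned to the pool; invalidates every
    // lease struct still holding the old tag.
    public long Invalidate() => Interlocked.Increment(ref _version);
}

public readonly struct TaggedLease
{
    private readonly PooledLeaseState? _state;
    private readonly long _tag; // version captured at acquisition time

    internal TaggedLease(PooledLeaseState state)
    {
        _state = state;
        _tag = state.CurrentVersion;
    }

    // A stale copy (its tag no longer matches) must refuse to operate.
    public bool IsValid => _state is not null && _state.CurrentVersion == _tag;
}
```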
A: A reasonable thing that I was thinking just now (we'll see if anyone else thinks it's reasonable): State doesn't have to be public on this thing. If ResourceLease is just a struct wrapping some sort of state, whether that's an incrementing long or whatever, and it has to defer back to the implementation to provide answers, then the state is something the ResourceLimiter type has, if it wants to expose state back to somebody.
A: You have, like, "hey, give me the state for this lease"; they pass in the lease and get back a state, and now it can be strongly typed. You get rid of object, you get rid of the casting, you get rid of the foofiness. The lifetime management is now a question tied to each individual limiter, and you could only know what the state was if you knew what you were talking to. So let's just make it a static method on the thing, or an instance method.
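A hypothetical sketch of that "ask the limiter for the lease's state" idea: the state accessor lives on the concrete limiter type, so it can be strongly typed with no object-and-cast dance. All names here are illustrative.

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the lease struct, keyed by an internal id.
public readonly struct ResourceLease
{
    internal long Id { get; init; }
}

public sealed record RetryMetadata(TimeSpan RetryAfter, int ErrorCode);

public sealed class AzureStyleLimiter
{
    private readonly Dictionary<long, RetryMetadata> _metadata = new();

    // Strongly typed accessor: only callers that know this concrete
    // limiter type can ask it for its rich metadata.
    public RetryMetadata? GetMetadata(ResourceLease lease)
        => _metadata.TryGetValue(lease.Id, out var m) ? m : null;
}
```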
D: Or whatever. But how do you make it non-public? Because the idea is that these APIs live in the BCL, right, and then you have a high-level component, say Kestrel, that exposes, say, the static helper. How would that access any additional data if this thing is non-public?
B: Looking at the meeting chat, that's what Steph and David have been thinking as well, so I can take a look at that pattern and update the API afterwards; I'm not that familiar with it beyond what has just been discussed.

G: I think that only solves the double-dispose thing, right? I think we're saying the design is the same, or maybe we need two state objects: one that's used to round-trip state from the call to the constructor through to Dispose, and then one that's actually for the user.
F: If I understood correctly, I think Jeremy's suggestion was that the state is basically an opaque object that's purely used by the implementer of the derived instances, and then, yes, the implementer provides some other API that uses that state to produce the thing the developer would actually consume.
B: Right, yeah. We kept that mostly to be as flexible as the implementer of a resource limiter wants, but those implementers will need to... so one of the proposals, where we keep State as a nullable object, is that the implementer of the resource limiter will also ship a set of interfaces, for example an "IRetryAfterSupported" or something like that.
B: That interface will have a getter for, say, retry-after, and it might return an int or something like that. Then, as a consumer of the resource limiter implementation, you will need to know about those interfaces and try to cast these state objects to them before accessing the value.
B: Yeah, so the alternative to this interface approach is to have a property bag for the state instead; I think the current proposal is an IReadOnlyDictionary of, say, string to object, with a set of well-known keys, essentially header-like values, for things like retry-after and so on.
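The two shapes being debated can be sketched side by side. Everything here is hypothetical naming, just to make the trade-off concrete.

```csharp
using System;
using System.Collections.Generic;

// Option 1: interfaces the consumer tries to cast the state object to.
public interface IRetryAfterMetadata
{
    TimeSpan RetryAfter { get; }
}

// Option 2: a property bag with well-known keys.
public static class WellKnownLeaseKeys
{
    public const string RetryAfter = "RETRY_AFTER";
    public const string ErrorCode = "ERROR_CODE";
}

public static class LeaseStateReader
{
    // Consumption of option 2: key lookup plus a type check, since the
    // values are untyped objects.
    public static TimeSpan? GetRetryAfter(object? state)
        => state is IReadOnlyDictionary<string, object> bag
           && bag.TryGetValue(WellKnownLeaseKeys.RetryAfter, out var value)
           && value is TimeSpan retryAfter
               ? retryAfter
               : null;
}
```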
C: It feels like we're going super, super general here, trying to enable every possible thing under the sun. It seems to me that a rate limiter, at the end of the day, isn't that complicated: there's a small set of cases you want to handle and a small set of information you want to provide. Let's just define an abstraction; there are like three ways you want to do it, right? You said fixed window, sliding window... I don't know what they are, but there are like three implementations of the abstraction.
G: And there are a bunch of strange reasons that it may fail, and they want that to be exposed to users.
G: No, no, no. So there's a producer and a consumer. One developer is coding against the actual abstraction, and the infra engineer is writing config to say: if this fails because you went over some limit, the error is this. That goes into a config file somewhere, the config file is read by the rate limiter implementation, and it will surface those errors whenever things happen, and the config values change on the fly depending on who knows what.
G: No, it doesn't have to be; it depends, right? Like, John is going to write the ASP.NET Core middleware that uses the abstraction, and then the Azure team will write a different implementation that surfaces more information when things fail. There are two different parties: the person authoring the rate limiter is different from the person actually consuming it in code, which is ASP.NET, and maybe there's information they want to surface somehow.
D: So the thing I still don't understand: you said key-value pairs would be preferable because you could have a semi-standardized set for those. But isn't the idea that, if I am Azure and I have some sort of state I want to expose, a single type would also work, right? It's just unfortunate that, if you have a type, you need to put the type somewhere, and now the problem becomes: how do I consume the Azure thing without depending on it?
G: It really helps to be decoupled. The reason is that the person consuming the API, ASP.NET, does not have any idea what the Azure implementation actually is. So we can't surface it by default to the user, but the user can, I guess, downcast, or get the key.
D: Yeah, so is the idea then that there are some conventions a framework like ASP.NET could look for? Basically, there's a certain key that says retry-after or whatever, and then you say, okay, the value has to be a TimeSpan or something, and you extract that key and do something useful with it. Because the problem with these kinds of designs is that they sound great, but then you really have to be very careful how you design them, right?
B: Right. So in the case where you want to expose more state back to the user, we're kind of okay with allocations. It's more that, for the default implementations that we ship, we don't want to have to allocate state.
B: Right; even for successful acquisitions we don't want to prevent limiters from being able to return state, although, yes, state is usually most useful after a failed one.
A: Since I'm on the screen, I get to doodle. If we think you're always going to need to know what the rich type is in order to get information, it seems like we should just put it on whatever the rate limiter type is. So if the middleware has the on-failed-acquire hook and it passes the resource lease back into the limiter, then they can call GetMetadata, which is the limiter's own API that returns whatever rich type.
D: So the only problem now is: imagine that my rate limiter is really, say, an AzureRateLimiter. Now imagine you're ASP.NET, which doesn't want to transitively depend on some proprietary Azure API. So your model works well if you think of the limiters as leaf nodes in the system: somebody defines them, and then the consumer, the ultimate consumer, the customer...
D: It feels like the worst of all worlds. I would say it would help if we could actually take some of these Azure concepts and design this in concrete terms, rather than trying to solve it in the abstract. Because the thing that concerns me here, and David said this in one of those other meetings, is that this basically turns into OWIN, where everything is a dictionary to a func to a dictionary of a func.
D: We have seen this in other places, like VS, for example, where we have MEF components with arbitrary metadata, and very often there's only one combination of stuff that works. As soon as you deviate, it all collapses, and then you get weird runtime errors because nothing composes anymore, and you go hunting for who messed up. This was inevitable.
D: Somebody puts in a timestamp, not a TimeSpan, and all the casts fail, and bad things happen, right? And that's kind of the concern I have when somebody says key-value pair, or, you know, object.
D: I think it works well if you have, effectively, some sort of spec for what the keys and the values are.
D: Headers work really well because you effectively have an official "here is what you can put there" kind of authority, and then there's "oh, this particular API author added these things," but there's a whole industry around that. So it's not completely arbitrary; I mean, it is arbitrary, but with very strong expectations of what arbitrary means. Unless you establish that here, it's going to be a mess: you'll have somebody call it retry-after and the other guy call it...
B: Right, but would naming it as a set of header values be too limiting? I think that's just a trade-off between how prescriptive we are and how flexible we can be, right?
D: I think if you go down, let's say, the key-value-pair route, what I would encourage you to do is define a framework type that is what we call a strongly typed string, basically, where you have a quasi-enum but the members are strings, kind of like how HTTP headers work. Then you can say: here we put a bunch of things we know people want to use, retry-after and whatever else, but you can also always construct your own string.
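A hypothetical sketch of that "strongly typed string" quasi-enum: a value type wrapping a string, with a curated catalog of well-known members, but still open to user-defined values. The type and member names here are illustrative.

```csharp
using System;

public readonly struct MetadataName : IEquatable<MetadataName>
{
    public string Name { get; }

    // Open for extension: anyone can construct their own name.
    public MetadataName(string name) => Name = name;

    // Curated, standardized members, like well-known HTTP header names.
    public static MetadataName RetryAfter { get; } = new("RETRY_AFTER");
    public static MetadataName ErrorCode { get; } = new("ERROR_CODE");

    public bool Equals(MetadataName other)
        => string.Equals(Name, other.Name, StringComparison.Ordinal);
    public override bool Equals(object? obj) => obj is MetadataName m && Equals(m);
    public override int GetHashCode() => StringComparer.Ordinal.GetHashCode(Name);
    public override string ToString() => Name;
}
```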
D: So it's not like an enum that is literally closed; it's open for everybody to extend. But because you have this base type in the BCL, you make a very strong statement: here are already 20 strings that we standardized, so if yours is one of those 20, pick one of those 20; don't come up with your own naming for it.
D: They're just strings, so nothing prevents us from literally adding everything to it, because it's basically free; you're not extending your API closure. But you make it so that, similar to headers, there's this catalog of well-known names, and you try to keep that the master list.
B: Let me just make sure I understood correctly. It's essentially saying: for the property bag case, we have the keys be these kinds of strings, where it's a more well-curated list.
D: The abstraction, I think, is a bit hard, because it would be useful to see: okay, here are the three parties involved, and here's what they can access, here's how they would...
B: Yeah. So, a brief summary of the things that generally go in there: the reason I call out retry-after and error codes here is that those are pretty common across the different limits that get implemented, but depending on the types of rate limits or concurrency limits that they configure...
B: ...these can be very arbitrary values for specific limiter implementations, which is why I didn't include the full list; many are only used in one or two places, and there's just a huge list. But yeah, I can get some more details on what those values are, to give a flavor of what those kinds of extensions could be.
B: But I agree: if we have a curated list for the common ones and then allow it to be a bit more extensible for arbitrary values, I think that is a good compromise.
D: Yes, sorry, the goal wasn't completeness; the goal was just to sketch how these three parties would exchange information, to get a sense of what flavor of information would be in there, what shape it would have, and how the consumption and production feel, right?
B: Okay, yeah, I can totally get some more samples of what the scenario looks like. Sounds good.
D: Once you're down to that level, you don't want to allocate a dictionary every single time you read, like, 4K, right? Exactly. And I think that's the other thing: my understanding of "resources" was the more coarse-grained access-to-a-service thing, but if you really go down to individual reads and writes, if you want to use this API for that, then it needs to be super, super lightweight.
B: Yeah, hopefully in those cases the limiters you end up choosing don't allocate state.
A: Yeah, if we change the resource lease to just be a long, basically, then the on-disposed call would just get the long and free whatever state it's supposed to free at the same time; that's part of however the limiter handled its release code.
A: In my notes here I called it "lease id"; it's just a long that is incremented. Either start at zero or pick a random number for your start; you've produced that one, the next one gets this one, the next one gets this one, and that basically just lets you key into a constantly maintained dictionary, and you eventually stop allocating state.
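An illustrative sketch of that lease-id idea: the lease is just an incrementing long that keys into a dictionary the limiter maintains, so nothing is allocated per lease. All names here are hypothetical.

```csharp
using System.Collections.Concurrent;
using System.Threading;

public sealed class IdBasedLimiter
{
    private long _nextLeaseId;
    private readonly ConcurrentDictionary<long, int> _liveLeases = new();

    public long Acquire(int count)
    {
        long id = Interlocked.Increment(ref _nextLeaseId);
        _liveLeases[id] = count; // track what this lease holds
        return id;
    }

    public void Release(long leaseId)
    {
        // TryRemove makes a double release harmless: the second call
        // finds nothing to free instead of corrupting the counts.
        if (_liveLeases.TryRemove(leaseId, out int count))
        {
            // return 'count' resources to the pool here
        }
    }
}
```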
A: Personally I recommend a long, because that's extraordinarily unlikely to cycle back around; I think we use a short in ValueTask, which is...
F: You might call the wrong thing, just as if the user had any other kind of bug, but you wouldn't get an access violation; you wouldn't write over your memory; you wouldn't treat one type as a different type.
B: Most of the samples of what goes in the state that I've seen so far are literally information that goes to the user. The only state that really gets passed into the on-dispose is the Count. I would lean towards not having a separate kind of state for on-dispose, because I don't see what else could be needed beyond the Count.
B: Yeah, that's possible, but it's not clear to me that it isn't completely unnecessary. The OnDisposed delegate could take the resource lease itself and we just pass that in, but I think the ResourceLimiter and the Count should be sufficient.
A: As an implementation acceleration, some implementations may not want to actually save the lease id, which means their dispose isn't actually important; then accessing the Count or State or whatever else off the resource lease would work. And if you added another API in the future, everything should be able to be tied back to the lease. So you either get ResourceLimiter plus a long (or whatever your id type is), or ResourceLimiter plus ResourceLease, and now everything else is just quibbling; they get to read whatever properties they want.
A: This being a combination of an int and a maybe-object is weird. There were some comments from the YouTube chat asking about the allocation of state. Depending on how you want to wire things up, maybe there's a get-state delegate that you pass in where, if nobody wants the state, it maybe doesn't even allocate it; but it probably would have to, because it would need to know ahead of time. So never mind on that.
A: I guess an interesting thing is: if most resource limiters are expected to produce state, and they're expected to produce distinct state for every lease, then really the lease should just be a class, right?
G: Right. The thing that made it not be a class was that we want to be able to basically wrap a SemaphoreSlim without allocating.
B: Right. Dispose is the release semantic for the resources, and the reason we wanted that is to indicate ownership of the resources that were acquired, unlike SemaphoreSlim, which you can release without ever acquiring anything.
A: Yeah, no, this is just... while that's an interesting goal, that one wouldn't be producing states; is that going to be...
A: I think it's that the semaphore existed, you call Acquire, it returns a lease that says you acquired it. When you call Dispose, it calls Release, but then it needs to track somewhere that it's already called Dispose for lease number 17, so that if it gets called a second time, it doesn't call Release on the semaphore a second time, because now you're mixing the semaphores. It's "I am a hard API to use; use me correctly" versus the user-friendliness of "I'm disposable and I can be disposed multiple times."
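An illustrative sketch of the problem just described: a lease wrapping SemaphoreSlim must make release idempotent per lease, otherwise disposing a copy twice would over-release the semaphore. All names are hypothetical.

```csharp
using System.Collections.Concurrent;
using System.Threading;

public sealed class SemaphoreLimiter
{
    private readonly SemaphoreSlim _semaphore = new(initialCount: 5);
    private long _nextId;
    private readonly ConcurrentDictionary<long, byte> _outstanding = new();

    public long Acquire()
    {
        _semaphore.Wait();
        long id = Interlocked.Increment(ref _nextId);
        _outstanding[id] = 0; // remember this lease is live
        return id;
    }

    public void Release(long leaseId)
    {
        // Only the first release for a given lease reaches the semaphore;
        // releasing lease 17 a second time does not corrupt the count.
        if (_outstanding.TryRemove(leaseId, out _))
            _semaphore.Release();
    }
}
```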
G: I don't know the usage semantics of this thing; you wouldn't pass it around. I don't want to say never, because people do stuff all the time, but the usage pattern is basically "await using" or "using" around Acquire or AcquireAsync, and then that is the end of it, right?
B: Well, yeah: we want it to be flexible enough that if the developer wants to limit the amount of processing in a particular section of code, they could use this.
G: The thing we're hitting here is that we're trying to design the API to support both high-level and low-level consumers, and the trade-offs being made are different between those two things, which is fine in general. But I guess the question is: the Azure case, I think, is where they would allocate the most, and that would be fine; whereas the case where you're doing socket reads is where we would have to have a different API than this.
G: If that were the case, the thing is it would look very similar, but just have a few hard-to-use aspects, like the struct, for example, versus the class. Maybe they're two different APIs, but that feels, I don't know, like it breaks the abstraction.
D: Yeah, you could literally have, you know, ResourceLease and ValueResourceLease, right? Okay. I think the takeaway is: you should design the API with concrete types and then see how far you get; design the API you want and then see where it actually falls apart. It also seems problematic to optimize this prematurely.
D
G
Yeah,
it
would
also
be
bad.
We
we
actually
have
prior
art
that
this
api
replaces
it
in
asp.net
core
today
right.
So
it's
not
coming
from
nowhere.
We
actually
designed
the
current
api
to
not
allocate
when you are concurrency-limited
today
and
it
doesn't
allocate
unless
you
end
up
queuing,
and
we
want
to
preserve
that
behavior
and
not
just
allocate
because
you're
you're
acquiring
a
lease.
G
B
G
B
Because
in
asp.net
core,
we
just
had
a
more
restricted
set
of
api
services.
Essentially,
so
state
is
essentially
new
to
this
yeah,
which
makes
it
tricky.
J
It
feels
weird
to
me,
but
just
brainstorming
what,
if
we
flip
it
to
make,
make
this
more
of
a
kind
of
like
the
the
db
connection
classes,
where
you
can
do
like
create
lease,
and
then
the
lease
is
what
you
acquire
in
free.
So,
instead
of
caching
stuff
inside
of
the
resource
limiter,
the
user
could
cache
it
and
reuse
the
lease
as
much
as
they want.
D
J
A
Like
I
think
the
current
design
is
supposed
to
allow
for
asp.net
core
has
a
notion
of
you
can
be
a
a
concurrency,
limited
api
and
you
get
to
via
the
configure
method
or
whatever
say
and
use
a
fixed
window.
Five
requests
per
minute limiter, yes.
A
D
Yeah,
it
also
seems
like
the
control
is
going
the
other
way.
Now
right
I
mean
if,
if
your
goal
is
really
to
say
at
any
given
point
in
time,
I
know
I
have
no
more
than
any
number
of
these
resources
or
any
number
of
resources
per
given
time
period.
Then,
if
people
are
responsible
for
pooling
the
resources
themselves,
then
you
can
no
longer
guarantee
that
now
you
have
distributed
your
resources,
which
was
kind
of
the
point
of
not
doing
in
the
first.
J
Place,
I
I
think
we
would
call
it
something
other
than
lease
in
this
case,
but
I
think
the
idea
is
you'd
have
any
number
of
leases
and
when
you
call
acquire
on
it,
that's
when
it
actually
takes
the
lease.
So
it
would
have
the
same
sort
of
limiting
semantics
that
that
you
have
right
now
right.
G
D
Yeah
you're,
basically
saying
the
names
of
things
yeah,
sorry
so
you'll
be
basically
saying
the
acquire
call
on
the
on
the
lease
might
blow
up
and
and
that's
something
you
would
have
to
kind
of
handle
then.
But
that
means
the
resources
can
be
a
fairly
expensive
object,
because
the
object
itself
is
basically
not
representing
that
the
lease
already
happened.
It
just
represents
the
metadata
for
a lease,
yeah
and.
D
K
A
But
yeah
I
mean
there's
three
things
right,
so,
while
we're
doodling
with
that
estimated
count
personally,
I
feel
that
this
is
lacking
in
a
name
because
an
estimated
count
of
what
right
the
total.
E
A
That
have
ever
been
acquired
the
things
that
are
available
to acquire.
The.
What
other
interpretations
that
I
write
down?
The
number
that
are
currently
consumed
like
I
think
that
it
needs
another.
A
Either
a
noun
or
adjective
language
analysis
is
failing
me
here.
It
needs
to
say
what
it
is,
so
you
know
yeah
estimated
availability
if
we're
yeah.
I
agree
if.
G
C
I
feel
like
we're
sort
of
I
don't
know
where
we
are
here.
Are
we
saying
it
feels
like
we're
kind
of
saying?
Okay,
we
have
some
more
things
to
explore
before
we
really
get
to
the
point
where
we're
ready
to
to
approve
this.
So
what
more
do
we
want
to
do
here
versus
going
off
and
bringing
it
back?
Another.
B
C
That
seems
real
you're
right
so
on.
On
that
note,
the
term
resource
limits
is
super
general
right
resource
limits.
When
you
say
resource
limits,
that
could
mean
anything.
It
could
mean
cpu
limits.
It
could
mean
memory
limits,
it
could
mean
bandwidth
limits,
it
could
mean
resource
limits
is
something
that's
that's
applied
to
a
whole
lot
of
things.
C
B
Right,
so
I'm
glad
you
mentioned
it,
because
we
kind
of
did
want
this
to
be
as
general
as
to
essentially
cover
both
rate
limits,
concurrency
limits
and
potentially
like
limiting
based
on
you
know,
amount
of
free
memory.
You
have,
or
some
other
metric
like
ambient
metric
that
is
not
essentially
controlled
by
the
user,
like
you
can
theoretically
write
an
implementation
that
is
based
on
arbitrary, I guess, environment metrics.
A
Well,
but
I
mean,
if
you're
doing,
memory
limiting,
which
is
you
know,
really
the
os's
job,
but
if
you're
trying
to
do
some
soft
memory,
limiting
that's
really
just
a
a
rate
limiter
where
your
count
represents
expected
megabytes
or
terabytes,
or
you
know,
whatever
you
feel
like.
A
And
that's
a
that's
a
user
conceptual
model
on
top
of
the
same
rate
limiter,
because
what
you're
trying
to
limit
is
how
much
concurrent
memory
can
be
allocated
things
like
cpu,
you
can't
say
I
would
like
to
acquire
10
cpu,
because,
like
you,
you
you
can't
and
if
you're
saying
I
would
like
to
acquire
one-tenth
of
the
processing
power
of
this
application.
A
B
Make
sense
it's
more
like
in
those
cases,
let's
say
you're,
basing
it
off
of
cpu
usage,
essentially
the
resource
limiter
will
pretty
much
ignore
the
requested
count
that
you've
given
and
say
oh
like
right
now,
the
cpu
load
is
let's
say
under
50
you're
allowed
to
proceed,
but
if
it
the
cpu
load
is
above
50,
then
acquires
are
going
to
fail
and WaitAsyncs
are
going
to
essentially
wait
and
then
so
that
that
is
a
possible
implementation.
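A sketch of the implementation B is describing, where the limiter ignores the requested count entirely and admits or rejects based on an ambient metric (a caller-supplied load probe stands in for CPU load; everything here is hypothetical, not a proposed API):

```python
class LoadGatedLimiter:
    """Illustrative only: admission depends on an ambient load reading,
    not on the count the caller passes in."""

    def __init__(self, load_probe, threshold=0.5):
        self._load_probe = load_probe   # e.g. returns CPU load in [0, 1]
        self._threshold = threshold

    def try_acquire(self, count=1):
        # `count` is deliberately ignored, as described above: below the
        # threshold you are allowed to proceed, above it acquires fail.
        return self._load_probe() < self._threshold
```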
G
The
rate
limit,
specifically
like
the
the
the
impetus
of
this
overall
api
was,
was
a
combination
of
talking
to
the
team
in
azure,
but
also
because
there
was
a
second
team
in
azure
that
was
trying
to
bandwidth
limit
a
bunch
of
reads
on
a socket
and
the
the
pattern
is
the
exact
same
thing,
so
we
basically
looked
at
a
bunch
of
rate
limiters,
concurrency,
limiters
and
and
and
ones
doing
other
other
kinds
of
of
limiting,
and
we
tried
to
actually
create
a
concept
that
that
encompassed
all
those
things
and
it
it
turns
out.
G
A
Okay,
since
jeff
didn't
start
talking
yet
oh
same
random
timeout.
So
how
are
you
supposed
to
represent
like
a
bandwidth
limit
with
this
structure?
It
feels
like
what
you're
saying
is.
I
would
like
to
acquire
some
peak
bandwidth
availability,
which
is
back
to
just
its
concurrency
on
what
the
theoretical
available
pipe
is
because
you,
like
bandwidth,
is
really
a
measure
of
now,
and
this
operation
could
last
a
lot
longer
than
now.
G
G
So
if,
if
this,
if
this,
if
this socket writes too many bytes, I'm going to throttle it until someone else can grab the other bytes. So like: I have 100 bytes and this socket is hogging that bandwidth, so I'm going to throttle this one socket, because it is eating all the bytes, unless someone else consumes from that overall thing,
so
they're
kind
of
they're
trying
to
basically
not
have
one
socket
starve
the
overall
system
and
to
balance
the
load
across
the
end
sockets,
and
they
were
doing
it
by
injecting
delays
into
reads
and
writes.
A
So
is:
is
the
the
nice
socket
send
supposed
to
just
call
acquire
with
the
byte
count,
because
again,
if
so
you're
back
to
just
a
it's,
a
concurrency
limiter
of
what
you're
totally
available
thing
is.
A
E
D
The
best
I
can
think
of
is
something
in
run
time,
right
system,
dot,
runtime
dot,
you
know,
let's
say
limiting
or
rate
limiting
or
something
that
makes
more
sense
to
me
than
threading
resource
limits.
The
problem
that
jeremy
brought
up
last
time
I
checked
the
resource-
is
really
very
unfortunate
because
the
thing
is,
it
has
such
a
strong
bias in .NET
towards
embedded
resources
because
we
have
a
resources
namespace
and
that's
all
what
this
is
for.
D
So
even
if
you
call
resource
limiting
it's
kind
of
I,
I
still
think
people
will
be
confused
when
you
say
that.
So
I
think
honestly,
like
even
if
you
say
like
there's
other
things
that
aren't
rates,
I
think
going
with
rate limiting
is
probably
the
better
phrase,
because
people
think
of
what
you
do
here
as
a
form
of
rate
limiting
right
and-
and
I
think
so
taking
that
as
the
high
level
concept
and
then
saying
yes,
you
can
maybe
model
slightly
more
than
rate
limiting.
D
I
think
it's
still
better
than
trying
to
come
up
with
the
most
generic
and
most
abstract
term,
because
then
you
probably
have
the
least
desirable
api
right
people
really
like
nouns.
They
they
they
know
what
they
mean
right.
So
if
you
can
make
the
thing
kind
of
specific-
and
I
mean
I'm
not
saying
resource
lease
or
resource
limiter
aren't
but
like
the
problem
really
is
I
see
them
and
I'm, as the .NET guy, kind of confused about what that means, right.
C
Yeah,
I
agree
with
that
and
just
on
the
term
resource,
even
if
I
ignore
the
classic
c-sharp
notion
of
resource
the
term
resource
is
so
overloaded
to
mean
a
bunch
of
different
other
things
that
that
use
it
conveys.
Even
if
I
get
past
the
fact
that
oh
wait,
this
isn't
like
c-sharp
resources,
I
still
wouldn't
quite
know
what
to
associate
it
as
being.
B
Is
definitely
resource
right,
so
kind
of
just
adding
a
little
bit
to
the
naming
discussion
we
also
so
I
agree
and
resource
is
very
abstract
and
and
overloaded
and
yeah.
We
do
see
some
issues
with
that,
but
essentially
we're
trying
to
also
spark
some
conversation
here
on
what
our
possibilities
you
know
there
could
be
so
other
things
we
have
thought
of
previously,
instead
of
resource
limiter,
are
like
throttler,
a
rate
limiter.
It
was
also
kind
of
proposing.
B
In
that
case
we
need
to
essentially
instead
of
differentiate
between
rate
limits
and
concurrency
limits.
We
can
call
it
like
concurrency-based
rate
limits
or
time-based
rate
limits,
or
something
like
that,
so
so
that
that
is
definitely
possible,
throttler
being
another
one
and
yeah.
We
probably
don't
want
to
just
call
it
a
limiter
by
itself,
like
too
short
of
a
name
also
very
kind
of
arbitrary,
as
well
so
yeah
any
thoughts
about
those
other
possibilities.
D
I
mean
if
you
call
the
base
type
rate
limiter
and
then
you
have
you
know
I
mean.
Presumably
people
don't
talk
to
the
to
the
base
class.
Usually
they
call
they
talk
to
something
more
derived
right
so
because
it's
kind
of
specific
to
yeah.
Basically
the
the
actual
thing
you
want
to
do
kind
of
thing
right
and
then.
B
So
it
depends
on
who
you
mean
by
the
user
like
as
a
middleware,
for
example,
we
will
only
work
with
like
essentially
resource
limiters
or
rate
limiters.
We
don't
know
the
specific,
like
implementation
type.
B
G
D
So
I
would
say
I
would
say
pick
a
name
that
that
is
80%
correct
and
ignore
the
20%
where
it's
not.
The
only
thing
where
that
may
be
unfortunate
is
if
you
need
to
introduce
the
concept
of
a
rate
limiter
as
a
as
a
dedicated
abstraction,
with
a
different
api
shape,
because
then
you
don't
want
to
pick
names
where
the
things
are
too
close
to
each
other.
But
assuming
it's
just
a
semantic
thing
was
like:
well,
it
says
rate,
but
it's
really
not
that
then
I
don't
think
it
matters.
D
A
K
G
A
B
Yeah,
so
as
a
little
bit
more
background,
is
we
also
kind
of
looked
at
what
similar
types
are
in
other
languages?
Most
of
what
we've
seen
is
are
literally
called
rate
limiters,
because
they
are
purely
a
time-based
rate
limit
and
the
difference
here
why
we
try
to
kind
of
avoid
the
same.
Naming
is
because
we
want
to
communicate
that
this
is,
in
addition
to
rate
limits,
a
combination
of
that-
and
you
know,
concurrency
based
limits.
So
so
that's
why
we
kind
of
started
looking
for
different
names,
but
yeah.
B
That's
a
little
bit
of
the
background.
Why
we're
proposing
resource
limits.
D
Yeah
I
mean
I
mean
I
would
also
kind
of
ignore
that,
because
I
think
in
practice
like
when
people
look
for
an
api,
they
I
mean,
if
you
think
most
people
want
to
start
with
rate
limiting
and
then
maybe
they
expand
later.
They
will
look
for
rate
limiting
and
then
they
can
learn
about
the
api
and
then
all
need
it
does
more.
D
But
the
problem
is,
if
you
I
wouldn't
necessarily
say
don't
yet
you
don't
use
the
naming
of
your
technology
as
positioning
as
our
stuff
does
more
than
the
competition,
because
that's
that's
not
how
people
think
about
stuff
when
they
start
learning.
Right
and
realistically,
when
you
look
at
any
given
feature.
D
What
you're,
mostly
yeah,
no
pun,
intended
about
what
you're
most
limited
by
is
how
many
people
are
able
or
willing
to
use
your
technology
right,
so
optimizing
for
people
learning
about
getting
started
and
quickly,
understanding
what
it
is
and
how
it
works
is
far
more
important
than
optimizing
for
the
person
that
already
understands
most
of
it
and
now
pushing
the
envelope
right.
Because
for
those
people
naming
doesn't
matter
anymore,
they
know
what
it
is
right
for
them.
It's
not,
then
the
naming
has
lost
all
its
meaning.
D
E
D
B
Yeah,
fair,
I
I
agree
with
with
that
kind
of,
I
guess
analysis.
Just
one
kind
of
caveat
to
that
is
I
don't
want.
I
think
there
is
a
risk
of
essentially
seeing
the
name
as
rate
limits
and
considering,
as
it
can
only
be
rate
limits
that
that
might
be
how
people
essentially
think
about
it
and
it
kind
of
signifies
a
or
suggests
a
smaller
subset
of
capabilities
that
it
can
potentially
have
that.
That's
the
only
thing
I
see
being
a
little
bit
risky
here.
A
A
But
what's
conceptually
the
difference
between
a
concurrency
limiter
and
a
sliding
window
rate
limiter,
where
you
don't
get
to
know
what
the
window
is
like?
A
K
D
A
A
Well,
sorry,
with
the
bandwidth
thing
and
stuff,
you
need
to
have
an
agreement
of
what
you're
doing
the
the
notion
of
that
you
just
have
the
acquire
zero
as
a
or
sorry
acquire.
One
is
an
easy
call
like
there's
a
thing.
I
want
it
like
concurrency
or
five
requests
per
minute
or
the
five
requests
within
the
last
30
seconds,
they're
all
the
same
thing
but
they're
all
again,
it
really
is
rate.
It's
just.
The
notion
of
time
is
what
gets
weird.
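A's point — that a sliding-window rate limit and a concurrency limit are conceptually the same thing, except the "release" happens implicitly as entries age out of the window — can be sketched like this (illustrative names only, with an injected clock for determinism):

```python
from collections import deque

class SlidingWindowLimiter:
    """Illustrative: at most `limit` acquisitions within any trailing
    `window` seconds. Aging out of the window acts as the implicit
    release that a concurrency limiter would do explicitly."""

    def __init__(self, limit, window, clock):
        self._limit = limit
        self._window = window
        self._clock = clock
        self._timestamps = deque()

    def try_acquire(self):
        now = self._clock()
        # Drop entries that fell out of the window: the implicit release.
        while self._timestamps and now - self._timestamps[0] >= self._window:
            self._timestamps.popleft()
        if len(self._timestamps) < self._limit:
            self._timestamps.append(now)
            return True
        return False
```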
A
B
Yeah,
if
I
feel
like
like
I'm
not
like,
I,
I
want
to
clarify
that
I'm
not
against
the
use
of
the
name
rate
limiter.
It's
just.
We
do
need
to
be
aware
of
the
fact
that
essentially
we're
saying
that
concurrency
limiting
is
the
concurrency-based
limits
are
a
type
of
rate
limit
and
if
we
agree
with
that,
then
yeah,
I
think
rate limiter
sure
would
be
a
good
name
for
it.
D
I
mean
I
guess
I
just
don't
see
the
difference.
I
mean
because
I
mean
for
me
like
if
you
look
at
the
abstraction,
if,
if
they
both
go
through
the
same
abstraction,
it
seems
to
me
at
that
point
you're,
just
kind
of
like
semantic
nitpicking
kind
of
thing.
Where
I
I
would
say
once
you
reach
that
point.
Probably
the
the
80%
name
is
probably
the
right
name
right,
because
the
the
absolute
correct
name
then
becomes
some
super
abstract
concept
like
system.acquire
or
not
right,
which
is
no
longer
usually
that
interesting
right.
D
A
B
It,
okay,
maybe
okay,
so
let's
operate
under
the
assumption
that
I'll
rename
this
to
a
rate
limiter.
I
think
there
were
discussions
on
estimated
count
and
I
haven't
heard
anything
about
acquire
wait
async,
but
you
know
we
should
probably
confirm
those
namings
well.
D
B
Right
so,
let's
give
a
couple
examples.
So
two
of
the
proposed
default
implementations-
one
is
a
fixed
window,
rate
limiter.
So
in
that
how
it
works,
is
you
say
I
am
essentially
calculating
my
window
as
let's
say
one
second,
and
I
only
allow
you
know
five
resources
in
each
window.
B
So
in
that
case,
when
you
do
that
it
means
you
can
only
have
five
operations
per
second,
you
can
extend
that
to
like
you
know
five
operations
a
day
or,
like
you
know,
100
operations
per
second
or
or
whatever,
but
that
that
is
essentially
how
you
define
a
fixed
window
rate
limiter
and
essentially,
once
that
window
elapses
your
you
know,
resources
refreshes
to
whatever
the
max
limit
is.
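The fixed-window behavior B just walked through — for example a one-second window allowing five acquisitions, with the budget refreshing to the max limit once the window elapses — in sketch form (hypothetical names, injected clock so the behavior is testable):

```python
import time

class FixedWindowLimiter:
    """Illustrative only: at most `limit` permits per fixed window of
    `window` seconds; the budget resets when a new window begins."""

    def __init__(self, limit, window, clock=time.monotonic):
        self._limit = limit
        self._window = window
        self._clock = clock
        self._window_start = clock()
        self._used = 0

    def try_acquire(self, count=1):
        now = self._clock()
        if now - self._window_start >= self._window:
            # Window elapsed: resources refresh to the max limit.
            self._window_start = now
            self._used = 0
        if self._used + count <= self._limit:
            self._used += count
            return True
        return False
```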
B
D
Yeah,
I
guess
what
I'm
unclear
I
was
like.
I
mean
I
think
we
talked
about
this
last
time
right
when
we
said
like
you
know,
an int
is
good
enough,
because
you
know
4
billion
or
2
billion
of
something
is
probably
enough
to
model
everything
right,
but
I
was
thinking
of
like
what
david
was
talking
about
like
bandwidth
kind
of
limits,
but
you
would
basically
acquire
the
reads
and
writes,
in
which
case
you
kind
of
want
to
pass
in
the
number
of
bytes
yeah
and
at
which
point
like.
D
I
guess,
if
every
single
request
is
just
talking
about
one
buffer
an int
is
probably
enough
practically
speaking,
because
you
can't
make
everything
is
bigger
than
it
anyways.
But,
like
is,
is
that
always
a
model
where
we
can
basically
squeeze
every
possible
resource
into
basically
what amounts to an int?
And
we
can
say
they're
countable.
B
Right
and
the
other
thing
we
thought
talk
about
is,
let's
say
you
know
your
base.
Numbers
are
really
big.
Let's
say
you
know
you
operate
on
gigabytes
of
memory
and
normally
and
and
you're
essentially
passing
in,
like
you
know,
roughly
two
billion
every
single
time.
In
those
cases
you
should
scale
that
count
back
to
something
that's
reasonable.
So
if
you're
always
working
with
gigabytes,
you
probably
shouldn't
pass
in
a count that's, you know, a huge number.
C
B
Just
like
the
request
count
should
represent
one
gigabyte
like
you're
scaling
that
chunk
essentially,
and
that's
why
we
kind
of
went
from
long
to
end,
because
it
made
sense
to
me
that
for
that
you
should
scale
it
to
something
reasonable
and
it
can
be.
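B's suggestion — if your natural quantities are huge (bytes in the gigabytes), scale them down to a coarser unit so the count fits comfortably in an int — amounts to something like this (the unit choice is the caller's; this helper is purely illustrative, not part of any proposed API):

```python
GIB = 1024 ** 3  # caller-chosen scale: one permit represents 1 GiB

def to_permits(num_bytes):
    """Ceiling-divide a byte quantity into whole permits, so that a
    partial unit still costs one permit. Illustrative helper only."""
    return -(-num_bytes // GIB)
```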
A
A
Thing
that
looks
like
it's
a
general
purpose.
Api
is
actually
very
special
cased
and
that
for
nice
socket
it
would
probably
want
a
a
wrapper
over
top
of
this,
which
was
a
bandwidth
limiter,
which
took
a
long
and
then
called
into
a
thing
that
represents
what
its
scale
is
to
call
down
to
acquire
and
like
in
that
yeah.
The
the
problem
with
being
super
general
is
once
we
once
we
say
you
know:
oh,
you
can
be
a
tenth
of
the
cpu
processing
or
whatever,
like
you're,
now
very
special
purpose.
A
Maybe
the
same
concepts
work
like
you're,
just
a
sliding
window
thing
with
a
maximum
concurrency
or
with
a
maximum
thing
of
10
and
like
that
works,
and
you
just
know
what
your
resource
represents.
But
it's
it's
not
it
doesn't
let
you
write
a
socket
that
is
bandwidth
cooperative
against
arbitrary
limiters.
It's
everything
now
becomes
really
tied
to
the
an
application
in
these
cases
really
needs
to
know
both
what
it's.
What
it's
count.
B
Yeah,
and-
and
that
is
the
kind
of
expectation
we
have
essentially
in
terms
of
prior
art-
we
also
have
the
memory
cache
that
has
a
size,
but
that
size
doesn't
just
mean
bytes;
it's
just
whatever
you
define
it
to
be.
So
there
is
this
kind
of
at
least
a
little
bit
of
prior
art
in
saying
the
users
must
understand
what
the
counts
actually
represent.
D
Right,
like
ICloneable
comes
to
mind
where
there
is
no
consumer
of ICloneable,
right,
it's
just.
It
doesn't
make
any
sense.
So
there's
a
there's,
an
argument
here
for,
like
you
know,
if
we
understand
exactly
who
the
consumers
are
like
do
they
know
how
to
scale
the
end
in
all
cases.
But
if
it's
passed
through
it's
fine
but
like
I
mean
I'm
having
a
hard
time
imagining
generic
ASP.NET
middleware.
D
B
Right
so
as
a
couple
of
examples
of
how
we're
envisioning
it
being
used
and-
and
some
I
guess,
details
on
the
proof
of
concepts,
so
in
asp.net
core
the
count
represents
one
request,
and,
and
essentially
when
you
apply
in
the
middle,
where
it's
always
one
right
in
the
samples
I've
been
kind
of
trying
with
channels
it's
just
essentially
one
message
or
whatever
your
t
is
in
pipelines:
it'll
be
nice
because
you're
shuffling
around
essentially
bites.
B
So
so
that's
what
that
int
would
kind
of
represent.
I'm
not
sure
if
these
samples
are
sufficient
for
you
to
kind
of
illustrate
those
uses,
but
that
is
kind
of
at
least
like
three
of
the
ones.
I
think
that
could
be
relevant
here
as
strawmen.
A
Yeah,
I
mean,
I
think
it's
just
interesting,
because
basically,
what
the
the
middleware
for
the
asp.net
would
then
say
is
it
accepts
any
resource
limiter
that
believes
that
one
or
that
the
one
request
is
one
resource
right,
and
so
then
it
the
the
fact
that
it
accepts
a
a
limiter
means.
It
needs
to
say
how
it
intends
on
using
the
limiter,
which
right
feels
sort
of
backwards,
but
is
is
probably
fine,
but
that
goes
with
the
same
thing
with
the
the
cooperative
socket
needs
to
describe.
E
A
The
I'm
I'm
cooperative
with
a
limiter
that
understands
my
sense
of
scale
and
or
my
unit
and
sense
of
scale,
which
I
guess
are
the
same
thing,
and
so
it
like
it's
fine,
it's
it's
just
a
it's
an
interesting
documentation
thing
and
so
with
pipelines.
A
If
it's
measuring
we're
now
back
to
the
int
question,
if
it's
measuring
bytes
but
pipelines
support
pushing
a
sequence
at
a
time
which
I
don't
remember,
if
they
do
or
not,
then
that
that
means
the
int
now
has
to
become
long
or
pipelines
can't
express.
What's
going
on
and
and
then
now
we're
back
to
like
well,
do
we
really
want
it
to
be
long.
A
D
A
That's
why
I
said
it
makes
sense
for
the
weight
one,
because
that
one
you
can
do
sort
of
piecemeal
and
like
oh,
I
need
to
queue
now
and
and
whatnot,
but
the
like,
I
don't
know,
maybe
it
makes
sense.
I
it
just
feels
a
little
weird
to
me.
If,
if
it
fails
in
the
middle,
how
do
you
communicate
that
right.
G
B
I
do
not
have
it
in
the
api
proposal
issue,
but
if
you
open
up
the
design
dock
it'll,
there's
a
blurb
about
it
there.
So
we
should.
B
Yeah,
so
I
don't
have
the
sample
showing
what
we
talked
about
offline
on
friday.
Yet
that's.
J
B
Down
down,
this
is
just
a
whole ton
of
use
cases
and
implementation
things.
Okay,
actually.
G
B
G
B
That
was
not
my
intention.
I
think
I
put
it
under
a
details
thing
that
you
need
to
expand.
Let
me
just
open
up.
B
Channel
that
it
yeah
sorry,
let
me
find
it's
something
probably
deleting
it's.
H
B
Sample
is
gone,
though,
why
do
I
see
it?
It.
B
Know
I
see
okay,
yes,
sorry
about,
I
realized
yeah
when
I
linked
it
it's
to
a
specific
SHA,
so
I
didn't
have
the
latest
one,
my
bad!
Let
me
just
send
that
via
a
chat,
so
you
have
the
updated
doc.
So
this
doc
I've
been
writing
and
still
in
pr.
So
that's
why
I
fair
enough.
Didn't
I
didn't
realize
that
it
does.
Did
that
shocking,
locking,
I
guess
here's
the
link
to
the
latest.
A
A
J
A
G
G
G
A
B
Oh
one
more
quick
thing
before
we
take
a
look
at
these
aggregated
limiters,
so
another
concern
I
was
thinking
about
with
regards
to
naming
to
rate
limiter
is:
does
that
make
sense
like
with
you
know,
counts
that
you
pass
in
like
what
does
that
mean
for
rate
and
also
a
rate
lease?
Does
that
you
know.
B
Make
sense
because,
like
a
lot
of
these
naming
like
leases
or
you
know,
request
count,
they
made
a
lot
more
sense
when
we
were
thinking
about
resources,
but
when
we're
talking
about
rates
either,
those
names
also
have
to
change
which
which
I
expect
they
have
to
be.
But
is
there
a
same
name
for
those
when
we're
talking
about
rates?
B
Yeah
anyway,
just
kind
of
like
for
thought
but
yeah
about
the
aggregated
limiters.
So
the
motivation
behind
this
is,
for
example,
in
some
cases
we
have
very
high
cardinality
limiters.
So
let's
say
you're
writing
a
middleware,
and
you
want
to
rate
limit
based
on
the
remote
ip,
with
just
the
vanilla
resource
limiters
or
rate
limiters.
B
You
have
to
it's
potentially
keep
track
of
one
limiter
per
ip,
which
is
that's,
that's
a
lot
of
limiters
and
it's
we
are
concerned
about
having
to
allocate
one
per
ip
address
and
that
that
being
hard
to
scale
or
it
doesn't
scale
well,
so
we
came
up
with
this
other
aggregated
limiter,
where
essentially,
in
addition
to
all
of
it,
is
similar
to
all
the
apis
that
are
in
ray
limiters,
but
you
also
need
to
pass
in
a
resource
id
for
all
of
the
api,
so
estimated count, acquire and wait async.
B
They
all
take
in
a
a
key.
Essentially
that
indicates
essentially
a
resource,
so
so
so
that
is
the
motivation
behind
this.
However,
there
are
some
tricky
bits
to
this
as
well
is,
for
example,
resource
leases
will
probably
need
to
be
a
resource.
B
Lease
of
you
know
another
type,
because
we
need
to
store
a
reference.
Potentially,
you
need
to
store
a
reference
to
the
limiter
itself
on
the
resource
lease.
So
that's
another
potential
like
gotcha
and
also
like,
is
this
shape
for
the
api
for
these
kind
of
use
cases
a
reasonable
design.
The
reason
why
we
didn't
propose
this
in
or
didn't
include
this
in
the
proposal.
Api
proposal
issue
is
because
so
far
we
haven't
seen
any
use
cases
of
this
inside
the
bcl.
B
It
seemed
more
of
a
higher
level
concern,
so
we
were
considering
you
know
using
this
or
shipping
this
as
part
of
asp.net
core
is
alongside
the
middleware.
So
this
is
something
we
need
to
consider,
but
it
likely
won't
be
in
what
we
include
in dotnet/runtime.
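The aggregated shape B describes — every API additionally takes a key identifying the resource, so one limiter instance can track many partitions (for example one per client IP) without allocating a whole limiter object per key — might look roughly like this (Python sketch; all names are hypothetical):

```python
class KeyedConcurrencyLimiter:
    """Illustrative only: one limiter instance, with per-key permit
    tracking in a single dictionary rather than one limiter object
    allocated per key."""

    def __init__(self, limit_per_key):
        self._limit = limit_per_key
        self._used = {}  # key (e.g. remote IP) -> permits held

    def try_acquire(self, key, count=1):
        used = self._used.get(key, 0)
        if used + count > self._limit:
            return False
        self._used[key] = used + count
        return True

    def release(self, key, count=1):
        remaining = self._used.get(key, 0) - count
        if remaining <= 0:
            # Drop idle keys so state stays bounded by *active* keys.
            self._used.pop(key, None)
        else:
            self._used[key] = remaining
```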
G
D
C
D
Is
you're
splitting
the
the
count
right
I
mean
you
could
imagine
that
instead
of
having
the
resource
id
as
a
generic
parameter,
you
can
just
think
of
it
as
a
bit
mask
over
there
over
the
count
itself
right.
But
you
basically
take
your
your
ip
address
range.
You,
you
bucketize
it
into
buckets
and
that's
part
of
your
of
your
int
right.
D
But
then
you
know
in
the
only
being
32
bits.
I
can
see
that
not
being
something
you
can
scale,
especially
when
there's
you
know
large
number
of
values
per
per
bucket,
potentially
right,
but
that's
why
I
think
it's
a
bit
weird
that
you
modeling
it
as
a
as
a
different
limiter,
rather
than
modeling
it
as
a
different
thing
you're
requesting.
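D's alternative — fold the key into the count by bucketizing, for example hashing the IP address range into a fixed number of buckets — in sketch form (a stable hash is used so a given address always lands in the same bucket; purely illustrative):

```python
import zlib

def bucket_of(key, buckets=256):
    """Map an arbitrary key (e.g. a remote IP as a string) into one of
    a fixed number of buckets. zlib.crc32 is stable across runs,
    unlike Python's built-in hash() for strings."""
    return zlib.crc32(key.encode()) % buckets
```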
B
Yeah, you can almost kind of see TKey as,
like
you
know,
another
thing
to
another
input
like
I
guess,
resource
limiter,
kind
of
context,
kind
of
thing
where
you
can
pass
in
additional
information
on
the
essentially
per
request
to
the
resource
limiter.
B
But
I
mean
the
original
motivation
for
this
is
to
essentially
allow
rate
limiting
based
on
ip
ip
addresses
or
ip
address
buckets.
B
D
A
B
Exactly
yeah
so
yeah,
as
david
mentioned,
having
these
two
kind
of
parallel
abstractions
makes
it
kind
of
difficult
to
work
with
and
also
like.
We
wouldn't
have
it's
not
reasonable
to
ship
like
default
implementations
for
these
aggregated
limiters
as
well.
Since
you
don't
know
what
t
key
is,
we
could
constrain
it
to
potentially
be
like
IComparable.
I
think
that
allows
potential
usages
in
dictionaries,
but.
A
Yeah,
so
actually
for
regarding
resource
lease.
If
resource
lease
changes
to
be
the
you
know,
using
a
long
to
identify
its
state,
then
that
just
means
it's
up
to
the
caller
to
be
able
to
track
it
back
to
the
key,
so
it
wouldn't
actually
require
changing
the
resource
lease
type.
If
it's
already
based
on
just
an
identifier.
D
B
B
That's
the
one
we
have
identified,
but
I
don't
see
why
we
will
cons
like
what's
the
benefit
of
constraining
it.
So.
D
Well,
one
thing
you
could
do
is
instead
of,
as
I
said,
instead
of
having
a
completely
different
abstraction,
you
just
basically
change
the
call
where
you
basically
have
effectively
a
struct
that
you're
passing
instead
of
the
int
so
effectively,
you
mix
the
key
and
the
thing
into
one
struct,
and
then
you
have
like
that.
Struct
basically
holds
on
to
an
int
and
an
object
and
then
strongly type it to
whatever
you
want
it
to
be.
But
that
means
if
it's
usually
a
value
type.
You
end
up
boxing
if
it's
usually
a
class.
D
A
Well,
if
it
was
a
rate
limiter
of
t
and
the
acquire
took
a
some
struct
of
t,
I
guess
the
problem
is
that,
because
if
that
struct
had
a
count
plus
the
t
state,
then
you're
back
to
what
does
the
simple
one
use
for
t-
and
I
was
gonna
say
it's
obviously
int.
But
then
you
have
the
count
and
the
count
together.
So
so
that
doesn't
help
never
mind.
A
B
Yeah,
it
almost
sounds
like
you,
want
a
request
context
or
like
a
resource
limiter
acquisition
context,
or
something
like
that.
Yeah.
B
D
B
Or
the
concern
is
that
we
don't
want
to
essentially
need
to
allocate
a
limiter
per
like
these
higher
cardinality
cases.
B
B
So
the
reason
like
what
we
don't
like
about
having
these
two
separate
types
that
essentially
don't
have
any
relationships
to
each
other.
Is
that
like
how
do
you
as
a
consumer,
let's
again
using
the
middleware
as
an
example?
How
do
you
use
both
in
a
consistent.
G
G
B
K
G
D
G
D
A
D
A
Right
I
mean
like
that
you
could
simply
model
or
you
could
model
it
that
way,
but
now
you
do
get.
If
you
are
doing
a
this
ip
address
is
limited
to
or
you're
limited
to,
five
requests
per
second
prior
address.
Then
now
you
have
two
to
the
32
different
instances
of
your
acquire,
assuming
you
saw
the
whole
range
of
ip
addresses
and
then
ipv6
also
happens.
So
good
luck
with
that.
G
K
K
Is
right,
but
I
think
that's.
E
K
Like
I
comparable
with
t-
and
you
could
write
your
own
but
yeah-
it's
an
interesting
point.
So
could
the
TKey
thing
be
the
entire
abstraction,
I
think,
is
the
question
right
like
do
you
even
need
the
non
like
keyed
version
resource
limiter
is
that
if
all
the
efficient
implementations.
G
D
The
key
I
mean,
if,
if
you
say
we
don't
want
to
proactively,
create
them
a
manner of lazily
initializing
one
limiter
per
ip
address,
then
yes,
I
can
see
that
working.
If
the
whole
point
is
you
want
to
literally
virtualize
it
and
say
there
is
no
limiter
per
IP
address.
There's
just
one
uber
limiter
that
through
some
sheer
magic
of
bookkeeping,
knows
how
many it has per IP address,
so
it
doesn't
actually
allocate a limiter
per
ip
address.
D
D
I
mean
I
think
the
problem
is
still
like.
If,
in
the
current
design,
you
basically
have
two
different
ways:
resources
are
being
maintained
and
there's
no
relationship
right.
Ideally,
you
want
to
be
like
if
you
want
to
write
generic
code
in
the
middle,
where
you
want
to
be
able
to
treat
one
through
the
other
right.
K
Well,
if
we
take
corey's
lease
idea
smarter,
at
least
the
the
factory
like
almost
all
the
actual
consuming
code
would
be
interacting
with
the lease,
and
it
would
only
be
the
initial
acquire
with
the
resource
id.
That
would
like
be
the
difference
like
how
you
get
the
lease
to
begin
with,
but.
D
A
E
A
An
ip
based,
concurrency
or
ip
based
sliding
window
limiter,
like
you,
had
to
build
an
ip
based
sliding
window
limiter
request
and
now
you're,
not
you're,
not
pluggable
anymore.
You've,
you've
locked
yourself
to
an
implementation,
so
I
I
I
think
that
cory's
inversion
thing.
While
it
solves
some
api
problems,
it
eliminates
ASP.NET's pluggability.
K
A
K
A
A
A
No
but
yeah
so
like,
I
understand
how
it
would
work
in
if
it,
if
you
don't
mix
in
the
the
contextual
ones,
but
once
it
gets
context
you're
back
to
how
does
how
does
the
middleware
know
it
needed
context
and
what
that
context
was,
and
how
does
it
get?
One
of
these
requests
to
ask
the
the
random
ambient
state
of
the
process
like
hey,
does
anything
care
to
bucketize
by
this,
and
then
what
does
that
mean
so
it
it
gets
weird,
at
least
in
my
head.
A
D
D
It's
it's
bad,
because
you're forced into a bifurcation,
yeah
I
mean
if
you
have
an
abstraction,
you
kind
of
want
to
be
able
to
treat
one
as
the
other
and
and
what
that
flow
is
depends
on
your
use
cases,
but
it
does
seem
to
me
that
it
kind
of
pokes
a
hole
at
the
idea
that
everything
is
an
int
that
to
me,
is
kind
of
the
higher
order
bit
here.
It's
kind
of
you
realize
that
an int is not enough
for
modeling
all
you
need.
D
So
you
have
this
thing
on
the
side
and
you now model it as
this
fairly
heavyweight
thing.
You
know
the
question
is,
is
I
mean?
Is
this
sufficient?
I
mean
presumably
generic
you
can
do
whatever
the
you
want,
so,
whatever
state
you
need
to
smuggle
in
to
acquire
leases
you
can
do
now.
But
now
the
question
is:
is
that
is
that
really
the
core,
or
is
the
other
one,
the
core
one
right
right,
but.
H
D
A
But
that
means
that
basically,
you
have
to
yeah
it's
back
to
the
now.
You
have
to
re-implement
things
because
the
t
changed
and
you
can't,
even
if
we
have
generic
math
you're,
not
going
to
be
able
to
generic
which
property
off
of
this
struct
did
you
care
about
to
do
go.
Do
generic
math,
for
it
was
three
too
many,
so
you
need.
B
G
B
Yeah,
I
can
kind
of
point
to
how
we're
using
I
how
we're
implementing
an IP
aggregated
rate
limiter
inside
the
middleware
sample
I've
been
working
on,
but
yeah.
I
was
kind
of
throwing
this
out
there
to
get
people's
kind
of
ideas
of
like
what
another
shade.
Just
should
compare
this
to.
I'm
sorry,
yeah.
K
K
Jeremy
brought
it
up
at
the
very
beginning
and
and
dave
is
going
to
dread the allocations,
but
there
are,
there
are
ways
to
avoid
it.
It's
like
compare
it
to
I
dictionary
of
like
t
key
to
the
normal
resource
limiter
and
how
that
would
be
used,
assuming
that
was
like
registered
as
a
service
or
something
very
close
to
it.
K
I guess maybe not that, but you get what I'm saying, right? Like, maybe it's just a method that takes the TKey resource.
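The shape being suggested here, a plain dictionary from TKey to an ordinary, non-aggregated limiter with get-or-create semantics, can be sketched roughly as follows. This is a hypothetical illustration in Python for brevity, not the .NET API surface under review; all class and method names are made up.

```python
from threading import Lock

class TokenBucketLimiter:
    """A minimal stand-in for a 'normal' (non-aggregated) resource limiter."""
    def __init__(self, permit_limit: int):
        self._permits = permit_limit
        self._lock = Lock()

    def try_acquire(self, count: int = 1) -> bool:
        # Hand out permits while any remain; refill logic omitted for brevity.
        with self._lock:
            if self._permits >= count:
                self._permits -= count
                return True
            return False

class KeyedLimiterMap:
    """The IDictionary-like idea: key -> limiter, created on demand."""
    def __init__(self, permit_limit_per_key: int):
        self._limit = permit_limit_per_key
        self._limiters: dict = {}
        self._lock = Lock()

    def get_limiter(self, key) -> TokenBucketLimiter:
        with self._lock:
            if key not in self._limiters:
                self._limiters[key] = TokenBucketLimiter(self._limit)
            return self._limiters[key]

# Usage: each client IP gets its own independent limiter.
limiter_map = KeyedLimiterMap(permit_limit_per_key=2)
assert limiter_map.get_limiter("10.0.0.1").try_acquire()
assert limiter_map.get_limiter("10.0.0.1").try_acquire()
assert not limiter_map.get_limiter("10.0.0.1").try_acquire()  # exhausted
assert limiter_map.get_limiter("10.0.0.2").try_acquire()      # separate key
```

The trade-off discussed later in the conversation is visible here: every key allocates a whole limiter object, each carrying its own lock and bookkeeping.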
B
Yeah, yeah, there's a couple of composing options for how they could essentially kind of be built on top of each other, so to speak, but, like, none of these feel very, I guess, natural.
B
K
A
Yeah, the composed call. Like, Stephan, at least what I heard when Stephan said it, is that somewhere there's an aggregator, and you're like: aggregator, give me the rate limiter for this context, and now it just gives you back a rate limiter. Yeah, you can do complicated object reuse and stuff for tying that context in, but.
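The aggregator flow being paraphrased here, where the aggregator owns the context-to-partition mapping and hands back a plain limiter, might look roughly like this. Again a hypothetical Python sketch; the `Aggregator`, `SimpleLimiter`, and `get_rate_limiter` names are invented for illustration and are not the proposed .NET shape.

```python
from threading import Lock

class SimpleLimiter:
    """A plain limiter: the thing the aggregator hands back."""
    def __init__(self, permits: int):
        self._permits = permits

    def try_acquire(self) -> bool:
        if self._permits > 0:
            self._permits -= 1
            return True
        return False

class Aggregator:
    """Owns the context -> partition mapping; callers never see the keying."""
    def __init__(self, key_selector, permits_per_partition: int):
        self._key_selector = key_selector          # e.g. context -> client IP
        self._permits = permits_per_partition
        self._partitions: dict = {}
        self._lock = Lock()

    def get_rate_limiter(self, context) -> SimpleLimiter:
        key = self._key_selector(context)
        with self._lock:
            return self._partitions.setdefault(key, SimpleLimiter(self._permits))

# "Aggregator, give me the rate limiter for this context."
agg = Aggregator(key_selector=lambda ctx: ctx["remote_ip"], permits_per_partition=1)
request = {"remote_ip": "10.0.0.1", "path": "/api"}
limiter = agg.get_rate_limiter(request)
assert limiter.try_acquire()
assert not limiter.try_acquire()
```

The design point is that the caller only ever deals in contexts and plain limiters; the partitioning policy, and any object reuse, lives entirely inside the aggregator.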
B
Yeah, I think, like, way back, like a month and a half or two months ago, that is kind of how I used these APIs, but it was kind of brought up that, oh, we don't want to map a TKey to a resource limiter. So this was the result of that discussion, but indeed we might.
A
Need to revisit that. Yeah, I mean, the other thing too is, you know, what would writing the IP-based limiter look like here, and how sad would you be that you have to go rewrite it for all the different algorithm types that you've already discussed for the, uh, context-free one.
B
And I don't think that's too bad, because, you kind of, it shouldn't be too difficult. The main concern was that each limiter might be too heavy. Like, imagine the case where we're essentially saying, you know, resource leases have an ID that tracks back, for keeping track of things; then essentially each resource limiter would almost need, like, a dictionary of IDs to, I don't know, some kind of state.
B
If that were the case, if every limiter had a dictionary, do you want to allocate all of that for, you know, each IP address? Whereas essentially, what this "how to get a resource limiter" allows is: you have one dictionary, but it keeps track of, like, a ton of different, I guess, resource limiter states.
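The memory argument above, one shared dictionary of lightweight per-key states instead of a full dictionary-carrying limiter allocated per IP address, can be sketched like this. A hypothetical Python illustration; the names are invented, and the "state" is deliberately reduced to a single counter to make the weight difference visible.

```python
from threading import Lock

class AggregatedLimiter:
    """One limiter, one dictionary: per-key state is just a counter,
    not a whole limiter object with its own bookkeeping tables."""
    def __init__(self, permits_per_key: int):
        self._permits_per_key = permits_per_key
        self._states: dict = {}   # key -> remaining permits (lightweight state)
        self._lock = Lock()

    def try_acquire(self, key) -> bool:
        with self._lock:
            remaining = self._states.get(key, self._permits_per_key)
            if remaining > 0:
                self._states[key] = remaining - 1
                return True
            return False

limiter = AggregatedLimiter(permits_per_key=1)
assert limiter.try_acquire("10.0.0.1")
assert not limiter.try_acquire("10.0.0.1")   # this IP is out of permits
assert limiter.try_acquire("10.0.0.2")       # other IPs are unaffected
```

Compared with allocating a full limiter per key, each entry here is just a dictionary slot, which is the point being made about not paying a heavyweight allocation per IP address.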
A
A
B
But okay, anyway. So I think the next. When is the next opportunity where we can kind of circle back to some of these discussions?
D
I mean, it's mostly a function of, like, when you think you will be ready to present, because, I mean, we meet basically several times a week. I think we're now down, because we don't have to review as much, but we can meet basically whenever you need to. Our usual slot is Tuesday; we have, I think, one slot on Thursdays.
D
Yeah, this Thursday we do analyzers. That was our last big one, I think, and then it's mostly backlog, I think. For next week I haven't set up anything yet, so, I mean, if you want to, we can just do next Tuesday. That would be the easiest, probably.
B
Well, I'll try to, I'll address most of these comments by Thursday, so if miraculously we have time, I can kind of continue the discussion there. But yeah, we can say Tuesday for now, but maybe even earlier.
D
I mean, we will definitely not have time to talk about that, because we want to kind of use the two hours to go through all the analyzer stuff we have. But, I mean, if you really wanted to, we can set up one for Friday, but at that point we might as well call it Tuesday, right? I mean, it really depends on whether you think you get more out of it or not.
B
I'll try to, I mean, I'll go through some of this, and if I feel I'm ready, I'll try to set something up for Friday; maybe that's a good ad hoc thing. Okay, sounds great. Thank you very much.
H
Thank you very much. This is needs-work, right, Eva?
D
A
All right, awesome, thanks a ton. And all right, so we will be back doing analyzers, as we already mentioned, Thursday, 10 a.m. Redmond time. People who care know how to find us.