From YouTube: 2020-04-16 KEDA Standup
Description
Link to notes: https://hackmd.io/cEi_FerdQvyTvB1i-5U0kg?view
A: Perfect, okay, good morning, good evening, good whatever, everyone. Thanks so much for joining. I see a few faces or names that I don't recognize, which is great. So we've got a few items on the agenda: we've got an upcoming KEDA release coming up, and there's been a lot of progress on some of the v2 work that I know Zbynek's been working on. Just let me know as we go around if there are any specific items you're interested in discussing today as a group, and make sure to keep adding them to the agenda. If you're just here to listen and hang out, that's cool too. So I will start first: I'm Jeff, I work at Microsoft, one of the KEDA maintainers, and yeah, I don't have anything to cover today.
C: [...] a bit confused about that.

A: Yes, we can cover that; we did cover that last week. We wondered if it would make sense for us to shift the time of this meeting, and we talked about a few options. We thought about maybe bumping it up an hour. We are already moving to meeting every other week now. I think there wasn't strong consensus for either option; I think all have downsides, especially once life normalizes.
E: Not much for me. I'm Ahmed, I work for Microsoft as well. I don't have anything else. Oh, yes.

F: Yes, hello, my name is Zbynek, I'm from Red Hat. I'm working on KEDA, on the v2 branch specifically. So just a short update: we've made some progress over there. I've made some refactoring on that, which is great. So from my point of view, the main functionality for the scaled object is pretty much done, so we can take the next main steps.
J: I work on the Microsoft sales team, in Technical Sales, and wanted to check whether KEDA is the right solution if I want to scale out my Kubernetes cluster based on incoming FTP requests. Also, I see on the agenda here, from March 19, there's a line item around HTTP scaling, so I wanted to know whether it is possible to scale out, or scale in, based on incoming HTTP requests. Thanks.
A: Awesome, thanks for joining, Travis. Yeah, feel free to jump in if you have any questions, and I've got an eye on chat too, or folks can raise their hands as we go, because we do have a few folks on the call today. So thanks, Aaron, for joining. With that, I think we've got a good agenda set, so let's jump right into it. The first one is the KEDA 1.4 release.
D: I'd be surprised if I did. It's absolutely alright, it's fine; I wish I was half as smart as Ahmed, but that's a different matter. So the only thing is that, yeah, we're going to start the release today, and hopefully we will get it done by tomorrow. I have candidly not looked at the commits, so I'm not sure what else is being included here.
A: Our last cut was on March 12th, and so we've got a few updates: in the READMEs it looks like a typo fix, some improvements to the AWS authentication, a bug fix for an Azure Monitor leak with some of the scalers, and a few more governance-type things around how we now do DCO, which I ran into personally with my PR. Yeah, it looks like a few bug fixes; I don't see anything big in terms of features or new scalers in this one.
A: So yeah, it looks like a pretty minor release: some improvements to RabbitMQ and some label stuff, so yeah, pretty minor stuff. We'll funnel it up in the changelog, as Tom mentioned, so we'll roll up some of those commits into the changelog and also into the releases on GitHub. As we mentioned, we're starting that today, so pretty soon we should have that wound up for release. And for those who have joined before: we're trying a release schedule of every four weeks.
A: So that would mean we release this week, and then four weeks from now we will do the 1.5 release, with whatever gets in from here to there, which might include a new Redis streams scaler or some functionality there. So, any questions or comments on 1.4 before we move on to v2? Great, alright, so: v2. Zbynek, like you mentioned already, you've made a bit of progress here.
A: Can it scale Argo deployments, and should scaled objects and jobs really be the same thing, or should we split them apart? If I remember from our discussion last week, we are going to split up scaled objects and scaled jobs. And it seems, from your update, you're saying that most of the scaled object changes make it more flexible, so that you could scale, say, a StatefulSet or an Argo deployment or a Deployment. It sounds like that might be what you finished off last week.
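For context, the flexibility being described here eventually surfaced in the v2 API as a scaleTargetRef that can point at any resource exposing the /scale subresource. A minimal sketch, assuming the post-meeting v2 API shape (the exact spec was still in flux at the time of this call; values are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet      # any kind with a /scale subresource, e.g. Deployment or an Argo Rollout
    name: worker
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092   # illustrative endpoint
        consumerGroup: worker-group
        topic: events
        lagThreshold: "100"
```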
A: Maybe this was a bug, but if I look at, say, the scaler for a queue, it would actually be the queue length that it's looking at; it's looking at this number right here, and that's how many jobs it spins up, which to me wasn't what I expected. So I don't know if that's by design today. That would be the only thing: as we move into v2, I would love it if it was a bit more clear which one of these applies; like, what does parallelism do? Maybe it's an advanced feature.
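The parallelism confusion above is about the Kubernetes JobSpec fields that the job-based scaling embeds. In KEDA v2 this split out into a separate ScaledJob resource; a sketch of roughly how the pieces relate (field names from later v2 releases; values are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: queue-worker
spec:
  jobTargetRef:
    parallelism: 1       # pods a single Job runs in parallel
    completions: 1       # successful pods required to finish that Job
    template:
      spec:
        containers:
          - name: worker
            image: example/worker:latest   # illustrative image
        restartPolicy: Never
  maxReplicaCount: 100   # cap on concurrent Jobs KEDA creates
  triggers:
    - type: rabbitmq
      metadata:
        queueName: tasks
        queueLength: "1"   # roughly one Job per queued message
```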
F: That's perfect, I think that's a great next item. The other thing for the week, too, was basically that at the moment there is a very limited way to scale the deployment based on multiple triggers. We can do it with multiple scalers, but the behavior is not ideal, I would say. So there were some discussions about capping the maximum number of replicas, and so on.
F: Yeah, and this was the second issue: basically, if we use multiple scalers, most of the scalers share the metric name, and the metric name is what the HPA uses to scale the deployments. So if we deploy a scaled object that consists of, for example, two triggers, and both triggers are using the same metric name, the scaling won't work correctly. So we need to change this; we probably need to generate some unique metric name, or something like that.
F: Yes, yes, and then there was the other request, for some scalers, to introduce something like starting thresholds. So basically you can define, for example for the Kafka scaler, the lag threshold: the threshold based on which you want to scale. And there was a request to introduce something like a start threshold. So, for instance, you want to scale up for Kafka when there is one message, but scale the rest based on a different amount. Yeah, this is it.
A: Interesting, it makes sense. So it's almost like the hope here is that we'd have some pattern for you to say: don't spin up the first replica until some threshold is hit, and then, after that, it just publishes the HPA target based on the lag threshold. So maybe one is a bad example here.
A: So it's maybe not a KEDA-core-specific item; it would be a pattern that we could introduce that scalers could optionally implement, as they do with the "is active" check. Yes, I see; yeah, that makes sense. I like the idea, so maybe that is something I'll move to the v2 milestone for now, so that when we do these things we keep an eye on it. I'll put a help-wanted label on it, and then I'll...
A: ...keep this open. I'll try to add a little discussion, which is, like, I'm thinking that when we do the v2 change, even though this isn't necessarily coupled to v2 and we could in theory make this backwards compatible, we were kind of like: hey, as part of v2 we're introducing this new pattern called a start threshold, and that way, when folks start contributing scalers, maybe it's something that we reinforce, or maybe we even say all scalers have to support this property. I don't know; I don't know if folks have thoughts on that.
F: One point to this: what you describe is maybe something different from what the person in the issue wants. I think he wants to start the scaling when there is, for example, one message in the queue, but not scale further, for Kafka, until there are 100 messages.

A: Oh, so...
G: With Kafka, yeah, the scenarios that I've worked with, both Kafka as well as Event Hubs, are mostly really real-time streaming scenarios, where the customer is really looking to process messages as soon as they show up. So in such cases they wouldn't want to wait until a batch of 400 messages has accumulated; they would just want real-time processing right out of the gate.
A: I think I understand it now, actually; that helps. So I think the scenario is: sometimes he'll get 10,000 messages, and in that case he wants it to scale based on this number, the 100. Sometimes he might just get 10 messages, and if it still scaled by this one, the HPA would only give him one instance. But he's saying that in that case, if it's greater than this number but less than this one, he wants it to honor the lesser one.
A
So
if
you
only
had
10
messages-
and
you
said
it
to
one-
it
would
burst
really
quickly
for
the
small
batch
and
process
them
very
quickly,
but
it
was
a
massive
batch.
It
went
to
cone
nuts,
so
I
think
yeah.
So
the
scale
would
have
to
lie
to
the
HP
about
the
metric
value
when
it
was
greater
or
equal
to
the
start
threshold,
but
less
than
the
lag
threshold.
So
the
feature
request
is
actually
more
around
like
having
the
scale
or
no.
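A sketch of the start-threshold idea being discussed, using a Kafka trigger. The field name here is an assumption: KEDA later shipped scaler-specific activation settings (for Kafka, activationLagThreshold) that only gate the zero-to-one step, while the HPA keeps targeting the lag threshold once at least one replica exists:

```yaml
triggers:
  - type: kafka
    metadata:
      topic: orders
      consumerGroup: order-processor
      lagThreshold: "100"            # HPA target: roughly 1 replica per 100 messages of lag
      activationLagThreshold: "1"    # scale from zero as soon as one message is waiting
```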
A: Yep, I agree. Like, right now we've been publishing them to our YouTube channel, which is very one-off, and so I love the idea of publishing to a channel that's got a bit more power, to help get the word out more. So yeah, that's awesome! Any thoughts on the next step here? Should we just ping our CNCF friends and say, hey, can you give us some knowledge on how to do this? I'm happy to help with this one. Yep.
I: Okay, to talk about the next topic, so, basically, multiple scalers. The idea is: whenever we have an external metric, it's hard-coded within the scaler; something like the Kafka scaler has lagThreshold as the metric name. So ideally, I was thinking, we could have a way, in the scaled object, to specify what the name of the external metric should be, or, if they don't give one, then generate a metric name on our own. So that was the plan.
A: Let me make sure I'm thinking correctly. So if I look at the scaled object spec today (why is it not showing what I wanted it to show?), you have triggers here, and then there's metadata, which I'm not showing right now. Would this potentially be a peer to metadata? So I would say something like a metricName below the metadata property.
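A sketch of the collision being described and the proposed fix, assuming a hypothetical per-trigger name field as a peer to metadata (KEDA v2 later solved this by generating unique, indexed metric names per trigger; everything below is illustrative):

```yaml
triggers:
  - type: kafka
    name: orders-lag       # hypothetical override, a peer to metadata
    metadata:
      topic: orders
      consumerGroup: app
      lagThreshold: "50"
  - type: kafka
    name: payments-lag     # without distinct names, both triggers would expose
    metadata:              # the same external metric name and confuse the HPA
      topic: payments
      consumerGroup: app
      lagThreshold: "50"
```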
A: If you have an image, if you, like, publish an image of KEDA, you can even just ping me on Slack or something and say, hey, I've got a few tests; once it's up I can just modify my cluster, pull in your custom image, and run a few tests. But I'm pretty confident that if it works for RabbitMQ and Kafka, it will work for all the other scalers, I assume, based on where you'll be writing the code.
A: No problem; great, thank you for doing that. Any other questions on this one? And are you cool if I assign you to this as well? I see you've already been pretty involved in this issue; I don't know if you wanted to. It sounds like you're actually coding it up right now, so I can just assign the issue to you officially.
A: You're on it, that's awesome, really appreciate it. Yeah, feel free; I keep an eye on the Slack channel too. So if you run into any issues and maybe I'm not being as responsive, or someone isn't here, feel free to just flag us on Slack and it will ping my phone. Unfortunately, my GitHub notifications are so noisy now that I almost just can't rely on them, but...
A: Great, all right, let's go with that then. Okay, so, moving on to scalers in general: I know there was specific interest from some folks on the call. I can't remember who wanted to talk about the Redis streams one. Tom, I don't know if that's the one you want to start with, or if there was anything else you wanted to bring up on scalers. Yep.
H: Yes, so, I have one for Redis streams as well. Conceptually this is a little different, because in Redis streams the object itself is like an append-only log. So essentially you have to use something called the pending entries list, fetched using a specific command, and I thought that can form the basis of the criteria for scaling.
H: You know, you determine scaling based on whether they were consuming off the data stream. So that's the idea. Yeah, so, to be honest, it's very similar to Redis lists in terms of implementation, like the same authentication mechanism; it's just the command which we use in order to fetch the lag.
H
It
differs,
and
there
are
a
few
caveats
which
I
mentioned
in
my
second
comment:
if
you
can
scroll
scroll
down
please,
but
that
is
more
with
respect
to,
because
this
is
like
a
squeezed,
because
structures
is
essentially
unlimited,
append
only
log,
but
this
is
moved
from
a
consumer.
You
know
development
standpoint
and
how
kada
is
not
essentially
or
the
scaling
a
or
may
not
be
just
just
empowerment's.
H: It's different and, at the same time, similar, I would say. The difference is that Redis streams sort of takes it upon itself to manage the state of each consumer. So in Kafka there are consumer groups, right; you can have individual consumer instances come and go, and there is no restriction on that. But with Redis streams, what it tells you is that you'll have to have a unique consumer instance within a group, and the Redis stream will store some state on behalf of that consumer instance.
H
So
I
have
an
application,
which
is
mapped
to
open
humor
group,
say,
for
example,
application.
A
and
I
have
two
five
instances
to
scale
out
the
to
share
the
processing
load.
Those
instances
have
to
be
unique
and
Redis
teams
will
store
some
state
on
the
server
side
on
the
west
side
itself.
So
that
has
some
implications
in
terms
of
say,
reprocessing
messages
and
you
know
dealing
with
failure
scenario
so
on
and
so
forth.
One
of
these
haven't
mentioned
in
the
comment
as
well.
If
you
see
the
second
item,
what
if
instance
fails?
So?
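For reference, the Redis streams scaler as it eventually shipped keys off the consumer group's pending entries list (the XPENDING count). A minimal trigger sketch (values are illustrative):

```yaml
triggers:
  - type: redis-streams
    metadata:
      address: redis.default.svc:6379   # illustrative endpoint
      stream: events
      consumerGroup: app-a
      pendingEntriesCount: "10"   # target pending (delivered but unacked) entries per replica
```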
A: It's good, though. And I think even the callout you have here, where the consumer has to ack, so there's the risk that it's going to keep increasing and KEDA is going to keep scaling: that's pretty consistent with how KEDA works in general, with Kafka, for example. So I think this is awesome.
A: I saw some questions on Slack; they were wondering about which SDK to use, but that would be a very good one to have as well. Great; oh, awesome, perfect. Thank you, Tom. Okay, so Israel dropped off already; he had to jump off for another meeting at the half-hour mark. I guess I kind of mentioned this at the outset: we were evaluating if we should move times.
A: I don't know if anyone, because we've got a few folks on this call who weren't on two weeks ago, just wants to throw in whether we should definitely move. I think the easiest option for most of our team would be moving an hour earlier than what we're doing, and still doing it every other week; but I personally can make almost any time work, so I'll just pause here in case anyone wants to chime in on that ongoing discussion this week.
A: I'll do the "time of KEDA" thing on Slack: I'll post a message so I can see if folks are okay with the 9 a.m., and then maybe we can move it. We won't plan on changing the next meeting, because I want to give at least a meeting's notice, but the next meeting might be our last one at 10 a.m. Pacific time; we'll see. Okay, great. Roger, I saw you mentioned you added a few use cases. I don't know...
G: So on mine, actually, as part of the use cases, there was also this small issue that I'm seeing with HPA which I wanted to share. I did not raise it on the KEDA forum yet, but I probably should have, with more details. So the use case that I want to share is about a customer: they are looking to create log streaming and scaling, doing some processing or parsing logic on some logs that they are getting from syslog.
G: So they are getting the logs in the syslog protocol, they are passing them on to Event Hubs, and then using the Event Hubs scaler on KEDA to scale out Azure Functions, which are triggered using Event Hubs; and the functions are deployed on AKS. So that's the scenario. Based on the processing, the outputs from the Azure Functions would either go to an output Event Hub or a dead-letter Event Hub, so that, for the kind of end-to-end scenario, everything finally goes back into Azure Sentinel for further processing of the processed messages.
G: So that's the kind of end-to-end pipeline that they're looking at. What I'm seeing right now, though, is that my HPA is currently stuck on the number of messages in the queue. So even though I do not see any messages actively being processed, my HPA is stuck looking at a value saying that there are more messages in the hub, and that's why it does not automatically scale down my pods. I'm not sure if anyone has seen this; it's something I was hoping to get an idea of what I should try.
A: And if I understand right: there are active containers, there are active replicas that are processing Event Hubs, so they're chugging along doing something; but KEDA, when it's doing its check on how much work is required, is seeing that, oh, it looks like there are no messages here, which is potentially throwing off the scale. Is that right?
G: It's the other way around. The thing is that, even if there are no messages in the Event Hubs currently that have yet to be processed, my HPA is still thinking that there are messages on the hub, and that's why it does not scale down. So another thing that I tested: even with the HPA in this kind of state, if I add more messages to my Event Hub, my pods that are already created process those messages right away, so it's not that it's in a stuck or limbo state.
A: A good question. I know, behind the scenes, how it works for Functions is that there's a storage account, and it's writing checkpoints for each partition; and when you run it in the Azure Functions service, we name the storage account. Actually, it has nothing to do with the name of the function; I think it's just using the consumer group name. And so I am curious: I was wondering at first whether maybe there was a world where it wasn't correctly...
A
Writing
the
check
points
back
into
storage
and
that's
why
the
scaled
scaler
was
thinking
it
was
bad
earlier,
but
then,
when
you
said
it
resumes
at
the
right
spot.
That
was
a
bit
strange.
So
definitely
if
you
had
some
conflicts
between,
especially
if
it
was
pointed
to
the
same
consumer
group
I
think
things
could
get
a
little
bit
out
of
whack.
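For anyone debugging a similar setup, the Event Hubs trigger ties together exactly the pieces mentioned here: the hub connection, the blob storage account holding the per-partition checkpoints, and the consumer group, which must match what the functions use. A sketch (field names from later KEDA releases; values are illustrative):

```yaml
triggers:
  - type: azure-eventhub
    metadata:
      connectionFromEnv: EVENTHUB_CONNECTION        # Event Hub connection string
      storageConnectionFromEnv: STORAGE_CONNECTION  # blob account where checkpoints live
      consumerGroup: $Default                       # must match the consuming functions
      unprocessedEventThreshold: "64"               # lag = latest sequence minus checkpointed sequence
```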
A: Okay, so that brings us to the last one, from Rahul, around cluster auto-scaling. If I understand right, there are kind of two things in here. One is: KEDA today is focusing on horizontal pod autoscaling, which means you have a cluster with, say, five nodes, a bunch of events come in, and we make sure that we're scaling out replicas within those five nodes. And I think the first half of your question was: could we have KEDA driving the scaling of the entire cluster?
A: So maybe you don't just need to go from two replicas to eight replicas, but you also need to go from five nodes to ten nodes. And then I think the second one, related to it, was: what if the event is actually HTTP, and it's not something like Event Hubs or RabbitMQ or Kafka? Is that a fair statement, Rahul; anything else you'd want to add?
J: Or is it the right technology, you know, to do that? So the scenario here is: they process, I don't know, a million FTP files, let's say within a couple of hours, and they wanted to know whether it is possible to first move this to basic cloud infrastructure, which looks like it is possible. But the next question is: how do you scale out when you get this burst of incoming FTP requests? Yeah.
A: Makes sense. So I think I'll take a stab at this and I'll let others jump in. An FTP, right, like File Transfer Protocol, is the one that they're looking at? Yeah, so KEDA today doesn't have any way to scale directly on anything that is push- or request-based, that is, TCP, HTTP, gRPC, FTP, directly. However, there's one pattern that may or may not help. There's a blog post here, which I can paste into chat; it's more around HTTP, but the pattern...
A: ...covers it as well: you actually create a scaled object based on Prometheus, and you scale based on the rate of events coming into Prometheus. So, like I'm showing on my screen now, this is one that's HTTP-triggered: I have nginx that publishes metrics to Prometheus every second, saying, here's how many HTTP requests I'm processing. And then KEDA asks Prometheus: hey, how many requests has this HTTP endpoint gotten in the last five seconds? And then, based on that, it can actually start to scale out.
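The Prometheus-based HTTP pattern being shown on screen looks roughly like this: nginx exports a request counter, and the trigger scales on its rate. A sketch (the query and values are illustrative):

```yaml
triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090
      metricName: http_requests_rate                      # illustrative name
      query: sum(rate(nginx_http_requests_total[2m]))     # requests/sec across all pods
      threshold: "100"                                    # target roughly 100 req/s per replica
```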
A: This is, in general, a super common question. There are a few other options too; I don't know how they work with FTP, I'd have to rely on others on the call, but around these scenarios, around HTTP in general: while KEDA doesn't do it today, it is something where we're interested to see how we could make even the pattern I showed a bit more first-class. The SMI spec work that was mentioned might help, but that's more related to service meshes. Knative serving does do scaling, but for FTP, again, I don't know.
A: Yeah, so those are kind of the generic ones that we tell people. For KEDA directly: if you want it to do it, it's going to need to pull some other metric, rather than monitoring the FTP requests directly. If it's HTTP, Knative serving could do it today; you could also do HTTP with this method. So I think this...
A: ...this kind of pattern that the blog post shows (and you could ignore all the stuff about HTTP), in general, using the Prometheus scaler, if you can get the data and metrics into Prometheus, might be your best bet. I don't know if that makes sense, or if you have any follow-up questions on that. Yeah.
A: Awesome, yeah, more on that too. And in fact, some of the scaler research Ahmed's been doing, with ideas for two weeks from now, has been around these types of patterns: how do we make it work more seamlessly with KEDA, and ideally very simply, because this is probably the number-one question I get around KEDA, the HTTP scaling.
A: Never mind; what I was going to point to is that we do have this open issue, which I would love to get around to some time, around whether there is a way we could integrate more easily with the cluster autoscaler. It has four thumbs up, so four people like it, but yeah, this was a different one. There's some good discussion here from some folks around how this could work. So this is something I think someone would need to do some investigation on.
A
We
don't
have
a
label
for
that,
but
yeah
there
is
there
that
separate
one
that
I
was
talking
about.
There
is
an
open
issue
on.
If
there's
anything
we
could
do
and
I
think
I
discuss
here.
We
can
indirectly
influence
the
cluster
autoscaler
today
by
forcing
too
many
nodes
or
pause
to
be
scheduled,
but
there
has
been
asked
like.
Could
we
make
that
more
direct,
no.
A: Thank you; great, okay. So, thanks everyone a ton for joining for the discussion. As I mentioned, if you need something between now and two weeks from now, GitHub issues are great, and Slack is also where we're chatting about some of this stuff, on the Kubernetes Slack channel that you can find on the KEDA landing page. So with that, we'll go ahead and wrap it up for this week. Thanks again, everyone; we'll chat again in two weeks. Thanks.