From YouTube: 2020-04-02 KEDA Standup
A
Let's go ahead and we can just start with the agenda; we only have these every two weeks now. This first one is actually really cool. Tom, I'll let you kind of share this. Or actually, no, we'll go around really quick before we go to the agenda and do just a quick recap rather than intros, since I think we've all been on the call before, and flag any items you want to cover in this call that maybe aren't already on the agenda.
A
So I'll start. I'm Jeff, for those of you who may be watching the recording or have never been to a stand-up. I don't have any big updates. The only one is: we do have a Microsoft blog scheduled to go out on Monday that we finally got blocked out. I still need to write the content for it; it's going to link to the blog that we did post, which took a little bit longer to get set up.
B
I did the same thing; it's been, with everything, just trying to adapt to the new normal, and there are so many meetings going on. And, just for Tom and Nick, as you can imagine, in Azure we are dealing with, in a good way, a lot of increased usage because of all this, so there's lots and lots of stress at different levels.
C
As part of that effort, we also now have a blog, and my follow-up action from the previous standup was to move mine to Google Docs so it could be reviewed, and now it's basically this. I used the date of... when did I add it? Tuesday, so the 31st of March. Yep, I think that's okay. Do you guys think that we should move it back, into the past? No.
A
I think that's great, I think that's really good, and yeah. I have an extra item too, because when I saw this blog I realized I probably just need to share out (I don't know what the best way to share it is) credentials to something like Buffer, so that if any maintainers want to post or schedule a tweet off of the KEDA account, they can. I'm more than happy to do it if people ping me and say "hey, do it," but I'll also help try to amplify that too. So.
E
Hey guys, it's the first time I'm joining the community call. I'm working at Microsoft as a CSA. We have a couple of POCs going on with a couple of large customers, and I'm having some issues with KEDA, especially for job scaling, so I thought that I would join this call and get your advice on the issues; it is a blocker for us, and so that's the reason for joining.
A
And Daniel, this was Daniel's item, right? Who I don't see on the call right now: the path to becoming a committer or maintainer. I can kind of flag what he was interested in, we can have a short discussion, and then we'll move to v2 shortly. Did anyone else besides Daniel put this one up? I've got the cameras over here, that's why I keep looking over here. Okay, I think that was Daniel. So there was a question from Daniel; you know, he's been pretty involved since the Astronomer days. Oh, he...
A
He added this agenda item. Tom actually did a great job giving some thought to a doc here around governance, and in the doc we talked about project maintainers, how a maintainer must remain active and stay responsive, and that we can add new maintainers by maintainer voting. And I think Daniel's question was, yeah, he's really just willing to help, but I think it's a good question to answer. I just want to get everyone's quick thoughts: if someone is interested in being a maintainer...
A
Well, we don't have to spend too much time on it today. Maybe I can take an action to create a GitHub issue around this for more discussion, but it sounds like we want to try to keep an odd number. I think five is probably great; seven might be a little bit much. But at least folks who are interested can create an issue, and then we can kind of chat about it on an as-needed basis.
A
So at least then we kind of have the answer for someone who's like, "hey, I'm interested in being a maintainer." We could say: look, we kind of want to target five, blah blah blah; if you're interested, create an issue and we can talk about it in a stand-up. So, okay, sweet. I mean, add that as an extra item really quick, Jeff. Whoa, my keys were on the wrong spot. You'd have to create an issue to become a maintainer.
A
My initial thinking was to create an issue that we would kind of close on with this discussion, and then we could act on it, but I could go straight to the PR for the governance one, and at least we could say we want to have it be an odd number. Then maybe I'll go straight for the PR; we might as well. We'll go for the PR, we can have the discussion on the PR and decide from there. So, okay.
A
Sweet, okay: 2.0. Zbyněk, I think this is your time to shine.
D
All right, all right. So I was thinking about possibilities for how we can extend KEDA and the features we can add. As you remember, we were talking with the Knative guys about the duck typing concept. So I was looking at it and I have found another solution. Basically, the motivation behind it is that at the moment we can scale just deployments and jobs, and I was trying to find a way to scale more kinds of resources, different ones, like stateful sets and some custom resources.
D
So the first attempt was duck typing. Basically, because Go code doesn't have generics and all this stuff, the duck typing concept is about this: you have a shape of an object, and if you have another resource, you are trying to check whether that resource fulfills all the requirements of that shape. So let's say we would have some KEDA "scalable" thing, whatever, like a duck type, and it specifies, okay...
D
If your resource is supposed to scale, it should have, for example, in the spec field, the replicas, and it should have other fields. This way we can define the duck type, and then, if anybody has some resource that basically fulfills all the needs, so it has in its spec the replicas and the other necessary fields that are defined in the duck type, this way you can handle these resources basically generically.
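A rough sketch of that idea (the kind and fields below are hypothetical, not an actual KEDA or Knative API): a "scalable" duck type only requires that a resource's spec carries certain fields, such as replicas, so any resource shaped like this would qualify:

```yaml
# Hypothetical custom resource that satisfies a "scalable" duck type:
# the duck type only checks that spec.replicas (and similar fields) exist,
# regardless of what the rest of the object looks like.
apiVersion: example.com/v1alpha1
kind: MyWorker
metadata:
  name: my-worker
spec:
  replicas: 3                  # the field the duck type requires
  image: myorg/worker:latest   # everything else is opaque to the scaler
```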
D
This has some analogies in other methodologies, because I was looking at how scaling is happening in the Kubernetes space. Basically, you can have multiple fields, and that's not great, because you can have the replicas hidden somewhere inside your spec, in another field, so the duck typing thing wasn't...
D
...the best approach. The other thing is that in Kubernetes, if you want to handle resources, you need to specify the RBAC, which is basically the roles and permissions for your operator, for the operator to be able to, let's say, handle these resources. So, for example, if we at the moment want to scale deployments, we need to add the permission group.
D
If we want to scale some resource that we don't know about at the moment (what is the name, the kind, the API version, et cetera), it's a little bit difficult, so we would need to, let's say, do a hack with some annotations, etc. So then I found out that in Kubernetes there is a scale subresource, which is basically a standard way to scale resources.
D
So at the moment, Deployment has implemented this, StatefulSet has this scale subresource, and if you have your own custom resource and implement this scale subresource, which is basically an endpoint handling the scaling, you can scale any resource you want. So I moved in this direction, and even the RBAC, the permissions, etc., are much easier, because you can say in your RBAC file that you want to access the scale subresource on every resource in the cluster; you can put the asterisk over there.
D
So you can have asterisk slash scale; even the permission thing is pretty much solved. So this is the proposal, basically: we can modify KEDA a little bit so that it will be able to scale any resource that has this scale subresource defined, which is the other issue linked, maybe, in the agenda.
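A sketch of the RBAC rule being described here, assuming a standard ClusterRole (the name is illustrative): the asterisk-slash-scale form grants access to the scale subresource of every resource in every API group:

```yaml
# Illustrative ClusterRole excerpt: let the operator read and update
# the /scale subresource of any resource, in any API group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: keda-scale-anything    # name is illustrative
rules:
  - apiGroups: ["*"]
    resources: ["*/scale"]     # the "asterisk slash scale" mentioned above
    verbs: ["get", "update"]
```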
A
703? Mm.
D
Yeah, so basically what we have to do: we will have the ScaledObject as it is right now, and in the scaleTargetRef field we will have these three parameters. The first parameter is basically the name, which at the moment is just a deployment name, but we can put the apiVersion and kind in there. These would be optional, so if they are not specified, it will be treated as just a deployment.
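A hedged sketch of the spec change being proposed (API group and values are illustrative, not a final v2 schema): scaleTargetRef gains optional apiVersion and kind fields, defaulting to a Deployment when omitted:

```yaml
# Illustrative ScaledObject: apiVersion and kind in scaleTargetRef are
# optional; when omitted the target is assumed to be a Deployment.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
spec:
  scaleTargetRef:
    apiVersion: apps/v1        # optional
    kind: StatefulSet          # optional; any kind exposing /scale works
    name: my-statefulset
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders
        queueLength: "20"
```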
D
So it's a minor change in the spec, and it could be doable. There is one concern: handling the secrets, because at the moment we can basically get the secrets from the containers, and that's the issue, since in the ScaledObject spec we have the containerName property. Anyway, so basically we can support handling the secrets as we do it now just for deployments and stateful sets, which are the standard Kubernetes objects, and for a custom resource, if the user wants to specify the secret, he will have to use the TriggerAuthentication custom resource as we have it now. Or we can drop the support for secrets in the deployments for v2 and just use TriggerAuthentication. It's up to our decision; I'm not sure what the preferences are over here. Great.
F
I think that's a good question. I'm not sure how many people use that, as opposed to just specifying TriggerAuthentication. The option to grab a secret from the deployment simplifies the scaled object that you need to deploy, if you can assume that everything matches, like the secrets I'm listening on in the deployment are the same secrets that I need to scale on; at least that's how Azure Functions used to work. So that's why that was added initially. But it's a good point.
A
I think it is beneficial. So if they are going down what we think is the 80 percent happy path, if they can just be like, "hey, I'm already mounting a secret, I'm already doing these things, all I have to do is add this one more CRD for the scaled object," that sounds nice to me. So that's why I kind of like the getting-started...
A
...hello-world, inline-secrets experience. But I don't know how painful it would be if we just said: look, just do ScaledObject plus TriggerAuthentication, and then it's consistent for every type, even if you're in the 20%, not the 80%. So my only thought is: would it be too many concepts for people just getting started if we made them wrestle with TriggerAuthentication from the get-go? Let's...
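To make the trade-off concrete, a hedged sketch of the two styles being weighed (trigger type, names, and fields are illustrative): inline resolution from the target workload's environment versus an explicit TriggerAuthentication object:

```yaml
# Style 1 (illustrative): resolve connection details from the referenced
# container's environment, which is what the containerName property enables.
triggers:
  - type: rabbitmq
    metadata:
      host: RABBITMQ_CONN            # env var name on the target container
      queueName: orders
---
# Style 2 (illustrative): an explicit TriggerAuthentication reference,
# consistent for every target kind, at the cost of one more object.
apiVersion: keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-auth
spec:
  secretTargetRef:
    - parameter: host
      name: rabbitmq-secret          # existing Kubernetes Secret
      key: connectionString
```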
D
Yeah, maybe we can think about another name for that property, because containerName is somehow misleading: people could expect that we are basically scaling that container, and it's not that, you know. From my point of view it should be different; it should be like an environment-source container or something like that, because we are not scaling that container, we are just grabbing the secrets and environment variables from that container.
D
So this is the first, the one proposal. And the second proposal is basically the scaled object for, like, scaling jobs. We can use the same approach: basically use the ScaledObject custom resource and just reference the job directly in the scaleTargetRef. So we would have just one field; instead of typing, you know, deployment and kind or whatever, you just put the job in there. We can do that, or we can...
C
...this issue, because I saw different expectations based on what they're trying to do. That's why I introduced two different names: let's call it scaled deployment, or stateful set or whatever; it's typically a daemon, constantly running, and as the metric goes up, let's say, they want to add more of those instances. While with a job...
C
...they actually expect that if there is a new message, it will start a new job, which will only process that one message. But we have only one concept, the scaled object, so actually we're mixing two scaling approaches into one CRD, yeah. That's why I would prefer to split them, so that it's clear what the scaling does. Does that make sense at all? Yeah.
A
You create a job, and you have this buffer of jobs we could just shove it into, which is more or less what we're doing today. But I think Tom's concern, which is a good one, is that those two behaviors, between scaling a job and scaling a deployment specifically, are very different: a deployment is looking at the count and trying to figure out how many instances are needed, while the job is really about creating queue processors.
D
Sorry, just one more note: if we introduce a separate custom resource for the job, we can put a different spec in the spec field, if there are, you know, needs for different behavior, different options or properties just for jobs. I'm not sure if there are any requests for this, but this could be taken into account.
A
...in metrics and deciding. So maybe we don't call it scaled deployment and scaled job; maybe it's like scaled metrics and scaled messages or something, where one is doing something per event and the other one is doing a metric thing. And maybe then it makes more sense that, okay, you know, the stateful set thing is part of the metric scaler. I don't know what the right names are, but I...
C
What I was thinking was using the same names as the Open Application Model, where a job is a task, basically, and deployments and stateful sets are daemons, which are constantly running. And now that we're talking about this, I'm wondering about maybe the more flexible approach, the one not maintained by us, where they bring their own resource. Why don't we separate that? Maybe because it's fully the same as a deployment today, but not...
A
Either way, ScaledObject has a nice... it's kind of a carryover from v1. It's not necessarily the best noun, but it works. Actually, I'll pause in case anyone else has thoughts. I do kind of like this proposal that we're chewing on right now, which is: we keep scaled objects, which scale deployments.
A
It could scale all this cool stuff, but we introduce a new kind, maybe called scaled task or scaled job or scaled worker, which is specific for doing these job-type workloads. For now, the only one I'm aware of that it would scale would be jobs, but maybe something will pop up in the future that would fit that model as well and could fit into that CRD kind. Well...
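A sketch of the split being floated here (hedged: the kind name and fields are illustrative of the proposal, not a settled API): a job-oriented kind where each queue message maps to a Job run, separate from the metric-driven ScaledObject:

```yaml
# Illustrative ScaledJob-style resource: scaling means "spawn N Jobs",
# not "set replicas on a long-running Deployment".
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledJob
metadata:
  name: order-processor
spec:
  jobTargetRef:                # a regular batch/v1 JobSpec
    parallelism: 1
    completions: 1
    template:
      spec:
        containers:
          - name: processor
            image: myorg/processor:latest
        restartPolicy: Never
  maxReplicaCount: 100         # cap on concurrently running Jobs
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        queueLength: "1"       # aim for roughly one Job per message
```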
D
That was because of me, basically. When I was thinking about this unified approach to scaling resources with the scale subresource, I was talking to some guys in the Knative community and they kind of liked it. They were thinking of using it for the eventing, for some parts of the eventing, to scale some things over there. So they kind of liked it, and that is also why that guy commented on this. Great.
C
One more, if that's okay? (Go for it.) So we've seen a lot of... confusion is a big word, but there's no consistent way we name configuration, certainly when it's contributed by new people. So that's why, and I didn't add the issue yet, but we want to propose to basically go through all the scalers and use the same name for the same things, and also the same approach. Some call it address, some call it connection, while they are just the same thing, I see, and some people want to have a host and a port separated.
C
...asking. So we could use the approach, and actually that's how we added it now, I think: if the full address is specified, use that; if the host and the port are specified separately, use that as a fallback; if neither is there, then we throw an exception, because we basically rely on the information being available on the container, so we can't really enforce it.
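A hedged illustration of that fallback order (the trigger type and field names are generic placeholders rather than a specific KEDA scaler):

```yaml
# Illustrative trigger metadata: prefer a single full address; fall back
# to separate host/port; raise an error if neither is provided.
triggers:
  - type: some-queue
    metadata:
      address: amqp://mq.example.com:5672   # used when present
      # host: mq.example.com                # fallback pair when address
      # port: "5672"                        # is not set
```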
C
So we need to define... I think we need a scaler requirements document, let's say, which lists the names that are available. Then we need to go through all the scalers and align all of them, and the beauty of v2 being a major change is that we can just change them without all the backwards compatibility, I see. That's...
B
So if somebody is using Kafka, or somebody is using whatever, the name for a concept should be natural, should be closer to what it is called in that event source. That's just one thought: instead of trying to standardize it across all the event sources. Because if somebody's trying to scale something that is in Kafka, or, you know, Event Hubs or whatever the destination is, I assume they would already know what these different properties are.
B
So I feel it's fair: there are a couple of things like that, where it would not be scaler-specific or event-source-specific, something like connection; connection is a great example, and for that we can give suggestions and say, hey, this is better. But I think just going overboard with this is probably not needed, and probably even detrimental. So...
C
Let's do this: we'll just have a look at what's around, what's confusing, and then see if it's worth changing things. Great, I think that makes sense, yep. And now that... sorry, one thing pops into my head: the configuration is now still flat. Were we going to make it more structured for v2, or what was the plan there again? Yeah.
A
It was brought up specifically in the context of Azure Monitor; this is what it ended up looking like in Azure Monitor. So I know initially the idea was to do something like: these would be in a sub-list, and these would be in another one, and I think there used to be like three nested levels. This works today. I don't know if we've heard much other noise from other scalers, so I'm at the point where I don't know...
A
...if I have enough reason to say we should go through and make it more structured, but I'm not opposed to it. I just don't know if we have justification for the work. Well, good, yeah, because the only one I've heard it from is Azure Monitor, and I could see it: there is a lot of metadata there. But I think most of our other ones are a lot better, so I think...
A
You're good. All right, so I think we're on track, and we might jump to job scaling for Sowmya in a second, just to make sure he has time within the hour. But this is a good one: move the stand-up to an earlier time. It bumped slightly with daylight savings, I believe, but it is kind of later than most other stand-ups. I don't know who created this item; if anyone has any proposals, I'm interested in discussion on this one. Yeah, it might...
D
...be. Basically, it's for people from Europe, because at the moment it is almost 8 p.m. here. I understand that this thing is always tricky, you know, because of all the time zones and regions, so I was thinking about at least moving it like 60 or 90 minutes earlier. It would help me a lot, for example; personally, I'm not sure about the other guys, but it would...
A
The only one I'm aware of is the serverless workgroup, which right now is primarily focused on Event Grid, or, I'm sorry, CloudEvents. I had to join a bunch of those, and that is an hour before this one. I don't know how many of you end up having to go to that or how much crossover we suspect, but that'd be the only risk if we move it just a half hour or an hour earlier: we'd now be overlapping that stand-up. But I'm okay with it; I'm...
A
...okay if we overlap one-to-one, and maybe we have to adjust things later on if that becomes a conflict. Any concerns with that one? So should we go 60 minutes earlier, like 9:00 a.m. Seattle time, or just 30? Anirudh and Ahmed, I'm up regardless, because I've got a four-year-old, but I don't want to speak for everyone on the call. So...
E
So we are planning to use KEDA for that scenario in Kubernetes, and the issue that we are facing is with the scaling. Even in my testing, I saw that if I push messages into the queue, if I send like 200 messages, I'm seeing that KEDA is creating like 350 different jobs and pods; it's not really the right number. What's happening in the customer scenario is they are running the cluster autoscaler deployed in the Kubernetes cluster.
E
So whenever it sees more jobs and pods, it unnecessarily spins up more VMs, so it's a concern for them in terms of cost too. What they're looking for is to get exactly the right number of jobs and pods created in parallel with the help of KEDA, so that they can manage it either using Virtual Kubelet or using the node autoscaler. And they would also like to see...
E
I was able to reproduce the same issue on my end, and I saw there are a couple of PRs, like the one that is already in open state, so I don't know if it is hitting exactly that, but it's similar; we are seeing that behavior. In my first testing, I set parallelism as 1 and queue length as 1, and I saw that the jobs were created sequentially. Then, to create them in parallel, I tried a queue length of, for example, 100, and then I'm seeing a weird experience.
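For reference, a hedged sketch of the kind of v1 job-scaling configuration being tested (queue type and values are illustrative): in KEDA v1 this hung off ScaledObject via scaleType: job, where queueLength and parallelism together determine how many Jobs get spawned:

```yaml
# Illustrative KEDA v1 ScaledObject in job mode: each polling cycle KEDA
# creates Jobs based on queue length, which is where over-creation
# (e.g. 347 Jobs for 200 messages) can show up.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-job-scaler
spec:
  scaleType: job
  jobTargetRef:
    parallelism: 1
    completions: 1
    template:
      spec:
        containers:
          - name: worker
            image: myorg/worker:latest
        restartPolicy: Never
  triggers:
    - type: azure-queue
      metadata:
        queueName: work-items
        queueLength: "1"       # intended: roughly one Job per message
```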
A
And if I understand right, just to see if I'm hearing this correctly: you're dropping, let's say, a thousand messages in a queue, and what you want to have happen is some jobs spin up, but you want to be able to cap the number of jobs that will spin up, to like 20. And what you're seeing right now is you drop the messages on the queue and it's just going crazy, so you're getting like 350 jobs, which is causing the cluster autoscaler to kick in and a bunch of other things.
E
That is one issue. And from the other customer's requirement perspective, they want to create parallel jobs: if there are like 1000 messages in the queue, they need to spin up 1000 jobs in parallel to process these messages, because each job takes up to 3 hours. That is one of the reasons they went with the cloud-based approach, so they can process it in parallel, and they did it with Azure Batch there; everything is good, but they would like to see if they can leverage Kubernetes.
A
I guess for the second problem, and maybe others on the call got it: the 1000 number in that case, will that always be static? Like, will it always run up to a thousand jobs in parallel? Or are you saying that number might change based on how many items are in the queue, and you really just want it to always spin up N number of jobs in parallel? Yeah.
E
It varies; it's a multi-tenant solution. For some customers, when they create a job there will be a maximum of fifteen thousand messages, and at a minimum there will be 1000 messages for each job-processing scenario. So the number of messages coming into the queue will vary customer to customer: the minimum is 1000 messages and the maximum can go up to 15,000 messages.
B
I'm a bit... so I'm just trying to get clarity here, trying to figure out which problem it is, because these two problems seem at odds with each other. I think at one point you said that we are scaling out too much, and I think on the other side you're saying that we are not scaling out enough, so the messages are not being processed in parallel. So...
E
The first scenario, the main requirement, is that they need to process this in parallel. That means, if there are 1,000 messages, they want to see only 1,000 jobs created. (Okay, so what are you seeing? What are they seeing?) For example, in my testing, when I send like 200 messages, it results in like three hundred and forty-seven jobs that KEDA is spinning up, okay, where we are expecting like 200 jobs to be spun up with the help of the scaler. (Got it.)
A
I don't know if anyone here has experience with it, but from the outset I think this should be possible through flipping some of these knobs, and I think there's also some weird behavior in the open PR where, like, the number you set for queueLength in the metadata actually really impacts the number of jobs that get spun up. So, in short, I believe in the scenario you're describing it should be doing what you're expecting, but if it's not, then maybe there's something wrong, I don't know.
F
Oh actually, no, because I think I saw a few PRs talking about this, but I haven't... I'm not too familiar with the job scaling. I think Steve, I'm not sure of his last name, was pulled in to edit it. I think the best course of action is for someone to take a look at it and see. I can test the job scaling and see.
E
Yeah, sure. No, yeah, I saw like two issues in the open pull request that are very similar to this particular issue we are seeing, and the pull request is still open. So, any idea when it may merge? Because this is very time-constrained for the customers, and we may need to go a different route if it takes time to stabilize the job scaler.
A
I'm pulling up an issue here: is this the one you're talking about? Oh, it's a different pull request; I'm trying to figure out the issues you mentioned. Yeah, you might just have to flag us. It's hard to say, I don't know; I haven't looked at these pull requests, so I can't give a definite answer. As Amin mentioned, he's planning to look at it, but if there are other hiccups here, it could delay things.
A
But if you can at least add that, I'd be really curious: what are you doing when you're reproing? So, you're dropping X number of messages in a queue; this is the scaled object that you're deploying, with every single value that you're using (minus the secrets, obviously). Even if you add that to an existing issue, that would be fine, but if you can do that, it can help us accelerate. In general, the next scheduled release for KEDA is two weeks from this week.
A
Any thoughts? No? I'm cool if people just update the existing package; like, I'm not going to approve a PR if it looks like a breaking change, in which case you'd need to do a new version, but if people just want to do a live update of the helm chart and instantly release it, I'm cool with that, I mean.
A
If they're owners or maintainers, then we should weed them out; if they're just members, I don't think we gave them any permissions. It was mostly when we were private, when we were a private org, that we had to manually add everyone, but I don't think they have permissions to do anything. So we can if we want; I'm fine if we pull them out, I'm fine if we leave them.
A
All right, okay, thanks everyone; we'll talk again in two weeks. If you need me before that, ping me on Slack, on the KEDA channel in the Kubernetes workspace, and I can help out, especially around the scaled job thing, Sowmya: either on Teams or Slack, let me know if you need some help making progress on that.