From YouTube: 2021-06-22 KEDA Standup
A: Great, all right. Good morning, good afternoon, good evening, everyone. I'll go ahead and get started. I think Tom should be joining, I hope so at least, because I'm very curious about some of the things like incubation; I know he was pinging me on Twitter this week about some of the incubation stuff. But we'll start and we'll see if he joins. Thanks, everyone, for joining. We've got a few things to cover, and then obviously we can add more agenda items as we need. So maybe to start with, if we just want to go around, you can quickly say your name, though I can see you all here in the Zoom attendee list, and if there's anything you want me to add to the agenda, just let me know as you add your name. We'll keep this up today to make sure we cover all the topics. So if there are any questions, features, status updates, or whatever else you want to talk about, feel free to let us know.

A: First off, I'm Jeff, good to meet you all. I work at Microsoft, and I had one FYI that I was going to pass along, which was a customer request that I got; in fact we might even have some folks from the customer on the call. I was looking at how they do activation and scaling in an interesting pattern they wanted to use. That's all, just sharing an FYI; I don't think there's necessarily any action required. And then, Ahmed, I actually just sent you an email on this one too, but I was looking at the end-to-end tests and noticed that the queue and Redis tests have been causing us some consternation. So I just wanted to flag that and see if there's anything we can do to hopefully get this badge a little less sad on those end-to-end tests. But that's it, so those are already on the agenda. Zbynek, you want to go next?
B: Sure. Hi, my name is Zbynek, I'm from Red Hat. I don't have anything specific, but Tom won't join today because he told me that he's busy. I can give a brief update on the CNCF part, though.
A
Great
that'd
be
awesome
thanks,
sabianic
ahmed
anything
from
you.
C: I'm Ahmed. I haven't attended this meeting in a while; I've been sort of MIA from the project for a few months. And I'll be out the next two weeks too, because I'll be out of town, but hopefully starting the second week of July I should be able to start looking back at things. Sorry about that.
A: No problem, thanks for joining. And then, Dennis?

D: Hey, good morning. Saving the new guy for last, I guess. My name is Dennis. I work for a company called Solace; we're big into messaging, and I'm getting ready to implement a new scaler for our system. So that's why I'm here, and I might look to hit you guys up to make sure that I'm doing things correctly.
A
Great,
that's
great
yeah,
I'm
looking
at
solace.com,
I
assume,
is
the
one
pub
sub
plus
event
broker.
Is
this?
Oh?
That's
great
awesome.
I
love.
I
love
me.
Some
message
broker.
So
that's
great
yeah,
we'll
I'll
add
this
here
dennis
if
we
can
make
sure
we
chat
about
it.
I
I
imagine
we'll
jump
through
this
agenda,
pretty
quickly
so
happy
to
answer
any
questions
that
you
have
and
regardless
just
thanks
so
much
for
joining
great
to
see
you
happy
to
help
where
we
can.
A
Okay,
so
zibinek,
do
you
wanna?
Do
you
know
what
the
latest
is?
I
know
there
was
some
chatter
on
my
twitter
dms
around
liz.
Looking
at
the
due
diligence
document,
I
know
yeah
tom
had
a
question
around
like
our
scalar
governance
that
he
was
pinging
me
on
that
eternal
pull
request.
I
I
don't
know
if
you
have
much
more
to
add
here.
If
you
have
a
sense
of
how
far
we
are
yeah.
The
document.
B
Should
be
should
be
like
complete,
like
at
least
from
our
side,
or
at
least
from
what
lis
requested
and
what
what
she
put
in
there
and
she's
going
to
present
it
on
in
cncf.
In
some
group
I
forgot
the
name,
but
I
think
that
from
our
side
it
should
be.
It
should
be
okay.
So
now
it's
like
the
ball
is
on
cncs
side.
So
everything
is
is
good
from
this
side.
A
Okay,
great
and
I
know
last
time
we
were-
and
I
can't
remember
I
think
zibianicki
were
on
that
call
like
at
least
my
impression
was
some
high
degree
of
confidence
that
we
were
okay.
As
far
as
you
know,
like
we're,
not
there's
no
like
big
concerns
that
they
have
or
that
liz
has
throughout
this
yeah
yeah.
That's
correct,
that's
correct!
What's
she
what
she
said
that
the
proposal.
A
On
a
good
way,
okay,
great
so
that'll,
be
exciting
and
then
yeah
there
has
been.
We
talked
about
this
one
a
few
months
ago
around
moving
our
docker
images
from
docker
hub
to
github
container
registry,
which
is
now
generally
available.
As
I
see
tom,
I
learned
this
from
tom's
tweets
and
then
he
he
obviously
was
looking
to
add
this
here
and
it
looks
like
yeah.
We
we've
been
doing
this
since
the
last
release,
which
was
about
two
weeks
ago.
A
So
I
assume
this
means
we're
we're
now
fully
on
the
github
registry.
I
assume
that's
what
yep
you're
good
to
go
and
then
the
only
other
question
I
had
on
this
yeah,
it's
the
parallel
to
docker
hub.
A
And
we've
communicated
that
we
will
potentially
stop
doing
this
once
to
get
her
registry
is
generally
available.
So
I
guess
now
the
question
is:
do
we
want
to
stop
doing
it
in
parallel,
or
should
we
give
it
a
few
months
just
for
good,
kicks
and
giggles,
and
then
we
just
stop
publishing
I
I
don't
I
care
either
way.
I
don't
know
sibian
akaramet.
If
you
have
a
preference,
I
don't
know
either
yeah
might
find
it
both.
B
So
we
can,
we
can
probably
we
can
probably
continue
like.
Maybe
next
two
releases
or
something
like
that,
and
then
we
can.
We
can
omit
knocker
up.
A
Yeah
now
that
we
have
like
now
that
it
is
generally
available,
I
wonder
if
in
I
I
wonder,
if
I
can
even
I
haven't,
used
this
discussion
thing,
but
more
or
less
just
saying
like
now.
We
have
a
date
and,
like
you
know,
as
of
whatever
august
31st,
we
don't
plan
to
do
any
more
to
docker
hub.
So
then
it's
less
around
like
I
had
no
idea
that
this
registry
was
gonna,
go
ga
this
week.
A: So I already mentioned, oh no, okay, so this is the one that I'll flag. I know the customer was saying they were going to plan on joining the standup, but it doesn't look like they did, which is fine, so I won't share their name because I don't know how public they want it to be that they're looking at using KEDA. But I got looped into an Azure engagement this week, and there's a customer looking to build out a bunch of stuff on Kubernetes, running some Java Spring apps, and they're looking to use KEDA. This is purely an FYI on this pattern, because I thought the way their logic works was interesting, and I didn't even get a chance to dig into it deeply. They have two queues, queue A and queue B, and what they were asking for a way to do was: we want to activate our containers and scale them up from zero when queue A has a message, but the metric that we want to drive scaling off is the messages in queue B. And so they're like:
A
Is
there
a
way
we
can
mix
and
match
and
say,
like
use,
qa
to
decide
if
we
should
be
active
like
scaled
to
zero
or
not
and
then
use
qb?
If
there's
something
in
qa
to
actually
decide
how
many
messages
we
need?
And
I
I'm
curious
on
the
like:
what's
the
actual
business
logic,
that's
driving
that,
but
my
initial
answer
to
them
was
like,
I
think
you
have
two
potential
options.
One
is
you
create
a
github
issue
with
a
potential
design
and
make
a
case
for
like?
Should
this
be
a
cada
feature
like?
A
Should
there
be
something
in
the
scaled
object,
spec
that
lets
you
specify
like
activation
descaler
and
you
know
metric
scaler
and
have
those
potentially
be
disjointed?
I
was
like
I
don't
know.
I
don't
know
what
the
general
thinking
will
be.
It
seems
kind
of
niche
and
and
maybe
not
general
purpose,
but
like
that's
one
option,
that's
like
the
other
one
is.
You
could
create
an
external
scaler
with
your
own
custom
logic
and
more
or
less
create
like
go
fork.
A
The
q
scaler
like
if
it's
rabbit,
mq
like
grab
the
rabbitmq
scaler
and
just
fork
it
to
to
have
the
ability
to
let
you
specify
like
qa
and
qb
and
then
have
the
logic,
do
what
you
want,
and
then
you
just
use
this
as
your
own
custom.
Scaler
and
cada
will
do
what
you
want,
because
when
it
asks
for
metrics
you'll
all
you
return
those
that
was
the
initial
guidance
I
gave.
A
I
don't
know
if
anyone
else
has
any
thoughts,
if,
if
there's
better
ways
of
doing
it
or
even
if
the
guidance
that
I
gave
might
have
been
misguided
but
figured
I'd,
be
like
yeah
I'll
share
this
with
the
group
to
see
if
anyone
has
heard
anything
like
this
before
or
wants
to
correct
some
recommendation
that
I
might
have
given
them.
So
any
thoughts.
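The custom-logic option described here, where queue A gates activation and queue B drives the scaling metric, can be sketched as a pair of pure functions. This is only an illustration of the decision logic; the function names and the stand-in queue depths are invented, and this is not the actual KEDA scaler interface.

```go
package main

import "fmt"

// isActive mirrors a scaler's IsActive: scale from zero only when
// the activation queue (queue A) has at least one message.
func isActive(depthA int64) bool {
	return depthA > 0
}

// metricValue mirrors GetMetrics: report queue B's backlog, but only
// once the workload is active, so a full queue B with an empty
// queue A does not wake the deployment on its own.
func metricValue(depthA, depthB int64, alreadyActive bool) int64 {
	if alreadyActive || isActive(depthA) {
		return depthB
	}
	return 0
}

func main() {
	fmt.Println(isActive(1))                // true: message waiting in queue A
	fmt.Println(metricValue(0, 500, false)) // 0: not activated yet
	fmt.Println(metricValue(1, 500, false)) // 500: queue B drives scaling
}
```

The same shape would sit behind the `IsActive`/`GetMetrics` methods of a forked queue scaler or an external scaler.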
B
I
think
that
the
recommendation
that
you
gave
them
is
correct,
so
I
would,
I
would
start
with
the
external
scale
for
for
sure,
and
maybe,
if
there's
a
demand
or
something
like
that,
so
it
could
be
included
into
the
core.
But
I
don't
see
I
don't
see
a
way
how?
How
can
we
achieve
this
with
the
current
implementation,
even
if
they
implement,
if
even
if
they
define
two
two
scalars,
two
triggers
in
skilled
object?
C
Why
isn't
it
like,
if
you
define
two
q
triggers
for
this,
for
two
different
queues?
Wouldn't
that
give
them
the
desired
result
like
well.
B
It
will
give
them
or
hpa,
selects
the
the
greatest
number
for
for
the
actual
scaling.
C
Which
so
there's
one
queue
that
just
manages
activation?
So
if
there's
a
message
on
it,
the
deployment
gets
activated.
But
if
you
and
then
there's
another
cue
that
you
do
the
scaling
based
on
the
number
of
messages-
that's
in
it.
So
if
you
just
put
the
two
as
two
different
triggers,
I
feel
like
the
final
result
is
that
hba,
that
has
both
numbers
and
then
hbo
will
skip,
is
on
the
highest
of
them.
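For reference, the two-trigger idea being debated here would look roughly like this in a ScaledObject. The metadata follows the RabbitMQ scaler of that era (`queueName`, `hostFromEnv`, `queueLength`), but all names and values are illustrative; and, as the discussion goes on to explain, this gives "max of both queues," not the disjoint activate-on-A/scale-on-B behavior the customer asked for.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: two-queue-consumer        # illustrative name
spec:
  scaleTargetRef:
    name: consumer-deployment     # illustrative target
  triggers:
    # Both triggers feed the HPA; it scales on whichever trigger
    # reports the larger desired replica count.
    - type: rabbitmq
      metadata:
        queueName: queue-a
        hostFromEnv: RABBITMQ_HOST
        queueLength: "1"
    - type: rabbitmq
      metadata:
        queueName: queue-b
        hostFromEnv: RABBITMQ_HOST
        queueLength: "10"
```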
A: And then they drop a message in queue A, which is more or less a "go drain queue B" signal. It almost seems like they're maybe just doing some batch processing; that's how I've interpreted it. So once they get the go signal from queue A, then it's "go drain queue B as fast as you can." So if...
C: ...until queue B is finished. If it's an external scaler, then yeah, they can probably do it, because they have the option of choosing what to return as metrics and what to return as active state.
C
Is
active
will
only
look
at
the
activation
queue
and
the
get
metrics
will
be
like
the
git
metrics
will
have
to
look
at
both,
I
guess
to
know
if
it's
currently
active
or
not
and
then
return
the
correct
number
of
messages.
Yeah.
A
Yeah-
and
I
get-
and
I
don't
know
like
imagine-
a
world
where,
like
they
say,
qa
is
the
is
active
and
qb
is
the
get
metrics.
Let's
say:
there's
one
message
in
qa,
so
it
becomes
active
and
then
qa
is
empty
again
and
so
now
is
active
is
zero.
But
it's
already
activated,
like
the
containers
activated
at
this
point.
A: Great, yeah, that makes sense. Okay, awesome. So yeah, I'm going to try to learn more, and if it is in fact some batch-processing thing, the approach that they've come up with of having two queues might actually be an interesting pattern to solve for, which is the "I want you to listen on this queue with this stream" case.
A
I
guess
we've
heard
like
this
threshold
thing
before,
where
people
are
like,
don't
actually
activate
until
my
stream
has
a
thousand
messages
and
once
it
crosses
a
thousand,
then
I
want
you
to
wake
up,
but
anything
less
than
a
thousand
like
it's
not
worth
it.
That's
the
request.
I've
gotten
a
few
times
before,
maybe
in
this
world,
if
we
define
a
threshold,
the
threshold
could
be
based
on
like
time
or
some
other
signal
as
well
to
be
like.
A
When
is
the
threshold
crossed
and
in
their
mind
again
I'm
making
a
lot
of
assumptions
here,
but
they're
saying
the
threshold
is
crossed
when
we
give
you
the
signal
and
we're
going
to
give
you
the
signal
by
dropping
a
cue
message,
but
that
pattern
of
like
a
threshold
is
maybe
more
general
purpose,
no
idea
how
or
if
we'd
be
able
to
solve
it,
but
but
yeah,
it's
interesting.
I'm
always
I'm
always
fascinated
by
by
how
people
are
are
using
cada
or
using
just
things
in
general
to
solve
some
of
these
more
complex
patterns.
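The threshold pattern described above amounts to an activation gate on backlog size. The sketch below shows that logic in isolation; the threshold constant and function name are hypothetical, not an existing KEDA option.

```go
package main

import "fmt"

// activationThreshold is the hypothetical floor under discussion:
// below it, the workload stays scaled to zero even though messages
// are waiting.
const activationThreshold = 1000

// shouldActivate wakes the deployment only once the backlog crosses
// the threshold: "anything less than a thousand isn't worth it."
func shouldActivate(backlog int64) bool {
	return backlog >= activationThreshold
}

func main() {
	fmt.Println(shouldActivate(999))  // false: below the floor
	fmt.Println(shouldActivate(1000)) // true: threshold crossed
}
```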
B
Just
want
to
add
that,
regarding
the
threshold
scenario,
I've
just
recently
got
some
some
ask
from
some
user
and
he
may
actually
be
willing
to
contribute
this.
I
cannot
find
the
correct.
I
will
find
the
correct
issue
for
this,
so
this
is
probably
something
that
we
can
have
in
a
near
future.
A
But
yeah-
and
I
think
this
was
the
one
at
least
that
I
think
I
remember
looking
at
I'll
paste
this
in
chat
for
you,
as
well
as
if
you
neck.
But
I
think
this
was
the.
A: ...one I remember talking about last year, around this threshold idea. Whether this is the right design, no idea, but I remember having some good conversations about this one.
A
Okay
and
then
the
only
other
one
I
already
flag
it,
which
is
just
like
hey
end-to-end
tests.
I
don't
know
zibianek.
If
you
have
anything
else
to
add
here,
it
doesn't
yeah.
It
looks
like
it's
just
and
I
poked
through
two
of
them
and
both
times.
Let
me
just
grab
a
random
one.
Here.
C
I
can
take
a
look
at
them,
I
mean
they
require
frequent.
B
Just
yeah
yeah,
I
try
oh
yeah.
I
was
trying
to
to
to
do
the
locally
like
to
find
the
issue,
but
I
wasn't
successful
so
it's
probably
some
some
kind
of
time
out.
So
if
you,
if
you
can
look
at
the
cluster,
you
will
see
probably
what's
about
the
problem,
but
generally
yes,.
B
The
cluster
radius,
cluster
and
and
and
azure
one
I'm
not
sure
which
one
yeah.
A
Okay,
great,
so
that's
it
on
the
agenda.
Dennis
you
mentioned
you
were
going
to
potentially
start
looking
at
a
soulless
scaler.
I
think
that
sounds
great.
I
don't
know
zibby
necroman
if
you
have
any
top
level
things
or
dennis.
If
you
have
any
big
questions,
I
know
we've
got
some
docs
here
on
getting
started
with
curating
scalars
and
some
examples
here,
which
is
a
good
starting
point.
D: I'm actually pretty far along with it. I have a prototype set up, and it's functioning pretty much as desired. I have some kinks to work out, maybe, but nothing huge.
D
I
do
have
a
couple
of
questions
though
I
mean
one
is
I
mean
I
found
some
this
documentation
for
a
push
scaler,
which
is
something
that
we
would
be
interested
in
where
you
know,
instead
of
instead
of
pulling
or
maybe
in
addition
to
polling,
you
know
we
would
actually
be
able
to
activate
on
events.
D
I
did
not
find
a
lot
of
documentation
about
that,
though,
and
there's
not
a
lot
of
information,
and
you
know,
if
I
look
at
the
which
document
is
it
one
of
these
contributing
docs?
You
know,
or
you
know,
creating
a
scalar,
I
mean
there's
nothing
really
about
creating
a
push
scaler.
So
can
you
tell
me
is
that
something
that
others
have
used?
I
mean
I
looked
at
a
few
examples.
I
think
I
looked
at
rabbit
kafka
and
ibm
mq
and
I
didn't
see
that
they
had
anything
like
that.
C: For push scalers, the only built-in example is the external scaler, which basically was added to allow push over gRPC. So when you connect externally, the gRPC stream can just push the IsActive state instead of it being polling-based. The only example using it now is the HTTP add-on, I think, which is an external scaler. But yeah, there's no reason not to have another built-in push scaler as well.
C
If
that's
something
that
works
for
you,
but
so
far
we
haven't
needed
any
internal
ones
other
than
the
basic
support
for
jrbc
scalars,
which
we
call
a
standard.
D
Skill,
all
right
so
does
does
that.
Does
that
pull?
It
is
the
intent
then
to
have
it
work
in
conjunction
with
with
polling.
C
Yes,
yes,
I
think
it
works
in
conjunction
with
polling.
So
basically,
both
are
going
to
be
called.
Sorry
is
active,
is
going
to
be
called
and
that's
going
to
scale
the
deployment,
but
also
it
supports
a
a
channel
and
goes
essentially
or
a
stream
in
grpc
that
allows
you
to
push
and
is
active
and
immediately
at
that
time,
it's
going
to
activate
some.
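The channel-based push just described can be pictured roughly as follows. The names here are invented for illustration; in the real external-scaler protocol the channel corresponds to a gRPC `StreamIsActive` stream, not a Go channel crossing process boundaries.

```go
package main

import "fmt"

// watchActive drains activation events pushed by the scaler and
// triggers the activate callback immediately, rather than waiting
// for the next polling cycle to call IsActive.
func watchActive(events <-chan bool, activate func()) {
	for active := range events {
		if active {
			activate() // scale the deployment up right away
		}
	}
}

func main() {
	events := make(chan bool, 1)
	activations := 0
	events <- true // the broker-side push: "a message arrived"
	close(events)
	watchActive(events, func() { activations++ })
	fmt.Println(activations) // 1: the pushed event activated once
}
```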
D
Okay,
all
right
so
so
they
they
both
work.
I
mean
theoretically
together,
so
I
mean,
if
I
didn't
want
polling,
I
mean:
how
is
there
a
way
to
shut
that
off,
or
is
that
just
something
that
I
would
have
to
polling
is
going
to
cause
inactive
right
or
is
active,
pulling
yeah
pulling
calls
is
active.
All
right
and
another
question
I
had
actually
is
is
the
the
sequence
of
events
is
okay,
so
on
every
polling
cycle?
D: If I have two named metrics, and we're looking at using two metrics off the bat: first there's the queue depth, the queue length or message count on a queue, and the other is spool usage, which is the amount of disk space that is actually being consumed by a queue if it's guaranteed messaging.
D
So
in
those
two
situations
I
mean
you,
have
you
have
two
different
reasons
that
you
might
want
to
scale.
You
know
one
for
performance,
one
for
you
know
to
protect
the
health
of
the
broker,
but
you
might
want
to
have
both
of
them
active
at
the
same
time.
D
Now,
if
I'm
going
to
call
or
if
I'm
going
to
use,
get
metrics,
I'm
going
to
do
two
calls
to
my
api
every
single
time
that
I'm
polling
versus
one
time
right,
because
I'm
going
to
do
get
metrics,
I'm
going
to
look
up
the
you
know
the
one
particular
value
for
q
depth,
I'm
going
to
call
getmetrics
again
and
I'm
going
to
get
the
get
the
the
the
spool
usage
space.
So
I
was
hoping
that
I
would
be
able
to
call
like
is
active.
D
You
know
effectively
it's
going
to
return
both
metrics
back
cache
that
and
then,
when
it
does
call
get
metrics
it
would
it
would
just
instead
of
calling
the
api
again
it
would
it
would
simply.
You
know
I
would
just
use
the
cached
values,
so
it
yeah
the
way
that
get
metrics
is
defined.
It's
you
know
it's
not
actually
actually
get
metrics.
It's
get
metric
that
you're
calling
multiple
times.
So
that's
maybe
a
little
bit
confusing.
It's
not
terribly
confusing,
but
you
get
the
idea
I
was
just
wondering.
Is
there?
D
Is
there
a
way
or
or
some
mechanism
that
I
can
use?
That
would
allow
me
to
effectively
call
both.
You
know
call
it
once
you
know,
cache
those
metrics
and
then
and
then
use
that
when
it's
what's
actually
calling
the
get
metrics-
and
I
was
hoping
that
you
know
if
it
was
if
the
is
active
was
always
called
you
know
before
the
get
metrics,
then
then
that
would
work.
But
if
that's
not
the
case,
then
I
don't
really
have
a
good
mechanism
to
use.
D
Do
you
understand
what
I'm
saying
I
mean
I'm
just
really.
What
I'm
trying
to
do
is
just
minimize
any
kind
of
performance
on
a
hit
on
the
broker,
because
I'm
calling
the
api
every
single
time
that
I'm
I'm
looking
to
establish
the
the
metrics
for
a
particular
queue.
Now
I
mean,
if
you're
doing
one
queue
or
just
a
handful.
D
That's
no
problem,
but
you
know
if
I
have
hundreds
or
thousands
of
deployments
you
know
just
you
know
hammering
that
api
every
pull
cycle
you
know
in
in
an
uncontrolled
fashion,
maybe
not
be
you
know,
might
not
be
the
best
best
way
to
do
it
so
yeah
and
that's
we
can
look
at
using
the
push,
but
you
know
I
was
just
wondering
if
there's
some
other
method
that
I
could
use,
do
you
guys
understand
what
the
problem
is
or
or
maybe
just
what
my
design
concern
is.
C
Info.Metric
yeah,
I
was
trying
to
see
if
we
actually
have,
because
essentially
the
way
you
should
think
of
this
get
metrics
api
is
a
rest
api
for
hpa
from
kubernetes
itself.
We
define
a
bunch
of
metrics
there
and
then
hpa
queries
them
from
us
every
now,
and
then
I'm
trying
to
see,
if
actually
when
we
get
the
request
for
the
query,
if
we
get
all
the
metrics
that
it
wants
or
just
one
by
one
but
as
far
as
I
can
tell,
we
just
get
one.
B
Yeah,
but
the
name
of
the
metric
is
basically
or
the
metric
that
you
are
passing
to
hpa
is
basically
the
metric
that
you
define
so
so
in
the
get
metric
you
can
define
your
own
metric
that
will
contain
both
both
values.
Like
I
mean
both
so
in
the
get
metric,
you
will
do
the
calculation
from
both
chords
from
the
from
the
from
the
two
that
you
mentioned
before
and
basically
store
it
under
yours,
some
custom
customer
name-
and
this
is
just
how
hpa
will
do
the
calculation.
B: You mentioned that you would like to have two ways to scale the target deployment from your scaler, right? So in the GetMetrics function you can basically call both endpoints at the same time, and based on those values you can create your own metric that will be passed to the HPA. So it will be just one number computed from two numbers.
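The suggestion above, folding two broker values into one custom metric inside GetMetrics, could look like this. The target values and the scaling rule (express each signal as pressure against a common target, keep the larger) are made up for illustration; a real scaler would pick a blend that matches its own semantics.

```go
package main

import "fmt"

// combinedMetric folds queue depth and spool usage into a single
// value by taking whichever signal is further past its own target,
// scaled to a common unit. The HPA then sees one custom metric.
func combinedMetric(queueDepth, spoolUsage, queueTarget, spoolTarget int64) int64 {
	// Express spool usage as "messages-equivalent" pressure against
	// the queue target, then keep the larger of the two pressures.
	queuePressure := queueDepth
	spoolPressure := spoolUsage * queueTarget / spoolTarget
	if spoolPressure > queuePressure {
		return spoolPressure
	}
	return queuePressure
}

func main() {
	// Queue is quiet, but the spool is at twice its target: the
	// combined metric reflects the spool pressure.
	fmt.Println(combinedMetric(5, 200, 10, 100)) // 20
}
```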
D: Okay, yeah. When I call the endpoint to retrieve the metrics from the actual broker itself, I'm only going to call it once, and both values actually come back on one call, right? But I don't think I can define an amalgam-type metric, which I think is where you're going, where you sort of make these metrics...
D: ...normalize them, yeah, blend them into one. So I might be able to; I'm looking at it now. The metric spec is actually a list, so I could try to return both in one call and see how that works, but no, I don't think I could actually blend those metrics into one scalable value.
B: Okay, okay, just bear in mind that, basically, if you send multiple metrics to the HPA, the HPA will always select the greatest value. It is not doing some kind of operation; it's just selecting the greatest number for the actual target, which is then used for the calculation against the target metric.
C: Yeah, the other thing I was going to maybe suggest is that you could always cache in the scaler itself, since the same scaler object is called when we call GetMetrics multiple times with different metric names. Oh no, actually they get created every time; never mind, sorry.
D
Right
so
well,
let
me
ask
you
this.
So
when
I,
when
it's
when
hpa
is,
is,
is
calling
into
the
scalar
and
you
know
they
they
get.
You
know
the
get
metrics
gets
called.
Is
it
would
it
go
through
the
list
of
all
available
metrics?
You
know
it's
sort
of
like
a
burst
at
one
time
or
would
it
you
know?
Is
it
more
or
less
random
or
stochastic
in
terms
of
like
when
they
get
called.
B: Good question. Basically, the way the HPA gets the metrics is that it requests a metric for specific objects in the namespace based on labels. So it is selecting, and then basically calling, the metric that belongs to the specific object.
B: Okay, so are you talking about one ScaledObject with multiple scalers, or one scaler?
D: It's not abundantly clear what the best way to do this efficiently is. So that's really all it is, I mean.
D: Yeah, that's what I've noticed. So, for example, with IBM and Rabbit, they're all just looking at queue depth. A couple of our engineers, our architects, think it would be a really good idea if we were able to return, and scale on, the spool size as well.
B: Sorry, so the other option is, basically, in this case, in one ScaledObject you can define two triggers of your scaler type: one will scale based on the queue, and the second will scale based on the other value. So you will have two scalers in one ScaledObject, and the HPA will do the thing I told you about: it will select the greatest number.
B: I would say, okay, anyway, if you have any further questions, you can reach us on the Kubernetes Slack, on the KEDA channel.
B
And
by
the
way,
do
you
plan
to
implement
end-to-end
tests,
because
this
is
something
that
we
are
starting
to
push
forward
so
basically
for
each
killer?
Your
cars
users,
too.
B: It's something that needs to be deployable, so if you look at the tests directory in the KEDA repository, you will see.
D: I see, okay. All right, well, I'll have to look at this, because it's not clear to me: if you're going to end-to-end test, you actually have to have the technology deployed in order to test against it, right? Yeah.
A: Awesome, thanks, Dennis, for joining in; good questions. Okay, so that's all we had on the agenda. Anything else anyone wants to cover before we sign off for the week?
A: Great, all right. Well, I'll go ahead and post this recording to YouTube once it's finished processing. Thanks, everyone, for joining, and as we said, the Slack channel is a great place to reach out as well. So thanks, everyone, chat later. Thank you.