From YouTube: KEDA Standup - 2021-02-02
A
Yeah, I was just saying that I can only join 15 to 20 minutes. Okay, Nick is not making it. I don't know if it's because of the delivery.

B
Thanks everyone for joining. We are recording this, and I'll post it to YouTube after the fact. This is now February 2nd. Proposed agenda... great, it's already here. Maybe we'll start; we don't have too many folks here. Let me make sure I have my attendee list visible. Tom, if you just want to go first and quickly introduce yourself, and I assume you already have everything you want to talk about on the agenda, but if there's anything else, you can pop it in there too.

B
Great. Shubham, I'll go ahead and let you go next and just copy your stuff from last time.

B
Right, and anything you want to cover... just here to join in, just joining today? Perfect, sounds good. Aaron, did I copy and paste your name but not actually have you introduce yourself? I'll let Aaron go; I think I did skip him, and then Ahmed if he wants to, and we'll just jump into it from there.

E
Oh, you're good. Aaron Schlesinger, cloud advocate at Microsoft. Great.

C
Ahmed... all right. Ahmed, software...

B
...developer at Microsoft, that's right. And then I'm Jeff. I don't have anything for the agenda, just a few updates. Shoot, I'm remembering now that I still didn't finish that governance stuff, but a few other random updates: a colleague and I have been working with a customer who's onboarding KEDA and Azure Functions. They were having some issues with the Event Hub trigger, specifically around scaling down, so Tsuyoshi from our team is looking into it.

B
You
might
have
seen
sonia
who's,
one
of
our
pm's
posted
on
the
slack
channel
asking
to
chat
with
that
folks
who
are
interested
in
using
kata
she's,
going
to
be
doing
some
more
customer
outreach,
and
then
I
spent
a
bunch
of
time
yesterday,
working
with
brendan
burns,
who
was
playing
with
cada
and
azure
functions
and.
B
...had a few questions, but everything worked well. We had a bug in our Core Tools, but it was a minor bug. In general he said, "This is a very compelling story here," so it's good to have him. I think he's going to be using it for some presentations he has coming up, so he just needed some help answering a few questions, but in general KEDA worked flawlessly for him from what I can tell. So, okay.

B
So
with
that
we'll
switch
to
our
regularly
scheduled
programming,
tom
you've
got
the
first
one
cata
2.0,
which
was
great.
I
assume,
did
zibianik
help.
Do
this
one
and
we
just
got
it
out
the
door
and
we're
all
great
now.
A
Yes,
I
was
triggered
happy
on
one
aspect,
so
we
had
to
first
pull
it
back,
but
then
we
we
fixed
that
and
now
it
is
available
the
helm
chart
is
available.
I
think
the
operator
is
also
available,
so
you
can
now
use
it.
There's
also
a
pull
request
open
for
those
using
azure
functions,
so
that
is
coming
soon,
which
is
also.
B
And is this just the... oh, you've got that here, I see, yeah. I wonder if we can... because, in theory, do we have to update this whenever we do a release? Like, would the Core Tools team need to update something? In this case we did, because we introduced the new CRD, which is the...

B
I see, so the YAML actually changed, but in the future it might not be needed. Great, that makes sense, thank you. Great, hopefully... I'll pull this up. I have to open my browser that has my Microsoft sign-in, because of some single sign-on rule or something, but...

B
But
this
looks
pretty
straightforward
and
I
would
imagine
this
could
be
reviewed
much
faster
than
the
last
set
of
changes.
So
thanks
to
things
they're
just
waiting
for
amethyst.
I
know
I
see.
B
Okay. I assume this isn't at the top of your to-do list. I'm curious enough about this, and especially since I've never done the GitHub Actions stuff, that maybe I'll just ping you if I end up poking into this or trying to fill out this form. I'll just let you know, Tom, before you spend time. That's fine, but yeah, this one piques my curiosity. I don't...

A
...for this, and it's finally coming. They even have dropdowns on all of that, so the bug reports and feature requests will now be cleaner. Great, when does that roll out, do we know? The feature flag is in, but it doesn't really work at the moment. So, okay.

A
Yeah, great, but that will help. And then a request from Zbynek: we have a lot of issues open in terms of our e2e test stability, so if somebody in the community wants to jump in and help us with those, that would be really great.

B
That's good, yeah. I'm curious if there's some way we could build some momentum around folks jumping in and fixing a few of these, like a little hackathon or bug-bash type thing.

B
Awesome. And I know you're short on time, but these are all actually super helpful updates; I'm excited about all of these things. Before we go around the room, I know I've got a question on the status of the HTTP stuff, Aaron, because I've seen some traction there and was sharing that out with a few folks, even myself, this week. But yeah, any questions or comments or thoughts from anyone else on the call on any of those updates?

B
Okay, great. Yeah, maybe we'll do a quick conversation then about that. Aaron, do you want to just give the latest on what the HTTP scaler stuff is doing? Like I mentioned, I saw some activity in the repo, and I think you mentioned yesterday when I was chatting with you that the alpha is probably pretty close. So I assume that means people maybe shouldn't go try to build this stuff just yet? I don't know, any thoughts on that, or what folks should expect? Sure, sorry.

E
The light... maybe that'll help. Yeah, so there's pretty much alpha functionality now, but I've got to add some more docs and some testing, because we have like negative ten test coverage or something like that. So yeah, for the docs: Tom, you added issues for adding a design doc and an architecture diagram, which I'm working on right now. There are a few other kind of logistical things before I would recommend people check this out. Probably the biggest one is that we don't have a place to put the images on Docker Hub right now, so I'm just putting them in my personal Docker Hub.

A
And
I
would
maybe
that's
a
bigger
discussion,
but
can
you
push
them
to
the
github
container
registry
for
now,
please
sure
yeah,
because
I
I've
used
I'm
migrating
my
personal
projects
to
them
and
they
give
you
a
lot
more
insights.
They
give
you
a
pulper
tag,
for
example,
which
docker
hub
does
not
give
you,
so
it
might
be
a
good
test
for
cada
to
see
if
we
migrate,
everything
or
not.
E
That'll make my CI/CD setup life a little easier, because I don't have to set up the external stuff. Okay, yeah. So I think after I do that and the docs, the ones that I mentioned, it'll be in a good alpha state where people can go try it. You can of course go try it now, but the docs are a little bit shaky.

E
So
if
you
can
figure
out
what
to
do,
despite
the
docks
things
work
pretty
well,
we've
gotten
to
the
point
where,
like
I,
I
found
some
cases
where
scaling
doesn't
happen
fast
enough.
We're
at
the
point
we're
starting
to
look
at.
Why?
How
do
we
make
scale
from
zero,
faster.
E
A
couple
other
little
odds
and
ends,
so
I've
got
to
go
and
like
fill
out
the
issue
queue
better
and
add
some
better
descriptions
and
so
forth.
But
yes,
as
far
as
the
alpha
goes
after
I
do
docs
and
the
github
package
registry
thing.
E
Alpha
should
be
good
to
go
and
I
can
talk
with
you
all
about.
When
is
a
good
time,
and
if
I
should
write
something
up
or
whatever
but
yeah.
I
think
we're
in
good
shape
to
be
at
least
ready
to
do
it.
This
week.
B
That's great, that's a good update. And I thought too, this is more of an aside, I remember at some point going to this repo and there was a diagram of what was happening, per the earlier comment around a reference diagram. But maybe I'm misremembering, or maybe it was in an email or something. Oh yeah...

E
There
is
one
in
a
closed
pull
request
in
there.
That's
a
little
bit
out
of
date,
so
I'm
gonna
go
and
update
that
and
put
it
probably
link
to
it
from
the
readme
makes
sense.
B
That's
cool
yeah,
that's
great,
and
if
you
have
any
issues
with
like
permissions
and
even
to
like
it
sounds
like
github
container
registry
might
be
the
preferred
destination
moving
forward,
which
makes
sense,
if
you
have
any
issues
getting
access
to
like
publish
a
container
registry
under
the
keto
org.
B
Let
me
know
I
think
you
should
have
them,
but
similarly
like
we
do
have
the
keto
oregon
docker
that
that,
if
we,
if
you
needed
permissions
to
push
to
that
organization
too,
but
I
tend
to
agree-
I
I
could
even
foresee
a
world
where,
in
the
future
we
move
all
the
keda
images
over
to
github.
But
I
don't
quite
know
exactly
what
restrictions
exist
in
docker
hub
anymore.
I
just
know
there
was
a
bunch
of
stuff
happening
in
that
space
and
I
was
confused
by
it.
It.
B
Yeah, I wish there were... I was just thinking there are some aspects of the HTTP add-on that will definitely be unique to how it works with KEDA, how it publishes metrics, but other things, like how we can speed up scaling from zero to one... I know Knative does a bunch of optimizations, but I don't think the tech is really in a spot where we could pull out pieces of their scale-from-zero-to-one thing and integrate it with this.

B
The
only
other
project
I
know
that
does
zero
to
one
is
open
faz
and
I
think
it's
a
similar
story,
and
I
don't
know
how
many
optimizations
they've
done.
So
maybe
this
will
become
the
like.
It's
just
such
a
universal
problem,
the
zero
to
one
for
http
without
having
bad
cold
start.
I
I'm
in
some
way
surprised
that
there's
not
some
existing
ip
to
integrate
with
all
this
other
stuff.
Just
to
make
that
problem
a
little
less
pronounced,
yeah
open
fans
uses
prometheus.
E
One
thing
I've
seen
I
think
k
native
does
this:
is
they
just
keep
a
tiny
little
pod
with
like
half
of
the
cpu
slice
around?
So
it's
you
know,
scale
from
z,
quote
unquote
zero.
But
it's
not.
You
know
it's
not
zero,
so
the
first
x
request
actually
gets
serviced
by
the
hidden
pod,
while
the
real
pods
are
spinning
up.
B
Oh
interest
and
it's
like
half
a
core,
because
we
do
something
similar
in
the
azure
function
service,
where
we
have
what
are
called
placeholder
containers
that
are
like
running
a
generic
image,
running
our
runtime
running
like
the
node
processor,
the
java
process
or
whatever
else,
and
then,
when
we
specialize
them
when
we
get
them
ready.
We
like
stick
the
user's
code
into
it
really
fast,
but
all
the
other
things
are
already
done
ahead
of
time.
But
it
sounds
like
a
native.
B
It's
not
getting
like
a
it's,
not
a
shell,
that
they
have
at
half
a
core.
It's
like
they
actually
have
your
your
service
at
half
a
core,
and
then
they
create
a
bigger
one
that
can
actually
handle
more
traffic
and
that
way
they
can
handle
a
few
requests
on
that
half
core,
but
you'd
almost
have
a
half
core
per
per
service.
That
you've
got
running
is
that
is
that
how
you
understand
it
too?
Aaron.
E
Yeah
they
have,
they
have
kind
of
two
tiers.
Actually
they've
got
that
and
then
they've
got
what
they
call
an
activator,
which
is
basically
approximately
the
same
thing
as
the
interceptor
in
this
architecture.
E
So
the
interceptor
holds
the
request.
Tilla
can
find
one
of
those
tiny
little
pods
when
it
forwards
to
the
tiny
little
pod.
It
also
concurrently
pushes
to
their
version
of
cada,
so
their
scalar
to
kp.
E
...to spin up... yep, the KPA... to spin up more pods at whatever scale they anticipate they need.

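For readers following along, the hold-and-forward pattern Aaron describes above can be sketched in a few lines of Go. This is only an illustrative sketch of the general idea, not the HTTP add-on's or Knative's actual code; the target service URL, the timeout, and the backendReady check are placeholder assumptions.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

// pendingRequests counts requests currently being held; an autoscaler could
// watch this number to decide how many replicas to spin up.
var pendingRequests int64

// backendReady stands in for a real readiness check against the target
// service's endpoints; it is a placeholder assumption in this sketch.
func backendReady() bool { return true }

// holdAndForward parks each request until a backend pod is available, then
// proxies it, mirroring the activator/interceptor pattern described above.
func holdAndForward(target *url.URL) http.Handler {
	proxy := httputil.NewSingleHostReverseProxy(target)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt64(&pendingRequests, 1)
		defer atomic.AddInt64(&pendingRequests, -1)

		deadline := time.Now().Add(30 * time.Second)
		for !backendReady() {
			if time.Now().After(deadline) {
				http.Error(w, "no backend came up in time", http.StatusGatewayTimeout)
				return
			}
			time.Sleep(100 * time.Millisecond) // wait for scale-from-zero to finish
		}
		proxy.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical in-cluster service name; replace with the real target.
	target, err := url.Parse("http://my-service.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", holdAndForward(target))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
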
E
Just the last thing: the last I checked, there was an issue, I think, for doing something heuristic, like doing predictions or something like that. So by no means am I suggesting, you know, let's go and start doing predictive scaling and so on. Really what I'm thinking is...

E
Is
there
a
way
to
do
this
like
little
slice
of
a
cpu
thing
without
making
someone?
You
know
if
someone
has
five
services
in
their
cluster
without
telling
them
hey,
you're
scaling
to
zero,
but
under
the
hood
like
using
a
bunch
of
their
resources
in
their
cluster
without
them,
knowing.
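To ground the "half of a CPU slice" idea in concrete Kubernetes terms, here is a minimal Go sketch using the upstream Kubernetes API types: a placeholder container for the service's own image, pinned to roughly half a core. The image name and resource sizes are illustrative assumptions, not values from Knative, Azure Functions, or KEDA.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// placeholderContainer sketches the "tiny pod" idea: the service's own image,
// but capped at roughly half a CPU so it can absorb the first few requests
// while full-size replicas are still starting.
func placeholderContainer(image string) corev1.Container {
	return corev1.Container{
		Name:  "placeholder",
		Image: image, // illustrative; the real service image would go here
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"), // ~half a core
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
	}
}

func main() {
	c := placeholderContainer("example.com/my-service:latest")
	fmt.Println(c.Resources.Requests.Cpu().String()) // prints "500m"
}
```
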
B
The
only
other
thing
I'll
flag-
and
I
think
I
mentioned
this
in
calls
before
there's
a
few
folks
in
microsoft-
research
that
I'm
happy
to
connect
you
with
as
well.
You
might
be
able
to
motivate
them
too.
They
took
a
bunch
of
historic
usage
data
for
functions
like
azure
functions.
It's
now
all
on
github,
it's
anonymized
and
it's
whatever
scrambled.
B
So
it's
not
like
you'll
see
any
of
you
know
personal
data
in
there,
but
they
trained
a
predictive
model
for
our
service
that
we
now
use
that
tries
to
like
actually
predict
and
the
success
rates
are
actually
better
than
I
expect.
That
said,
they
reached
out
to
me
like
last
year.
Sometime
and
they're
like
hey,
do
you
have
any
other
thoughts
of
what
we
could
do
and
I
was
like
yeah.
B
You
should
do
that
same
thing
for
cada
and
like
get
that
working
and
cater
and
they're
like
okay,
we're
gonna,
look
at
it
and
I
don't
know
if
they
ever
did
but
like.
I
am
curious
what
it
would
take
for
them
to
take
a
similar
model
like
that,
and
I
don't
know
how
how
personalized
it
is
to
the
azure
function
service
and
how
much
it's
just
a
generic
put
data
in
here
get
prediction
out
there
thing,
but
but
yeah
there's.
This
is
a
really
interesting
space
to
me
and
yeah.
To
the
earlier
comment.
B
I
don't
really
see
like
a
universal
consensus
or
even
like
everyone
rallying
around
a
specific
project
or
piece
of
tech
to
help
try
to
make
traction
on
it,
and
I
just
I
wonder
if
this
can
help
drive
that
conversation
forward.
I'm
not
saying
it
has
to
be
this
specific
project
in
this
specific
implementation,
but
cada's
kind
of
a
little
bit
in
a
safer
vendor-neutral
space
than
than
other
products.
So
maybe
maybe
there's
something
there.
I
don't
know.
B
Yeah
it's
hard
because
so
far
kata
is
all
async
non-http,
and
so
people
don't
have
to
worry
about
it.
This
will
be
the
first
one
where
yeah
to
that
I
am
curious
and
maybe
like
I
would
imagine
and
will
not
even
imagine
like
the
you
know,
the
general
consensus
is
well.
You
just
keep
one
replica
around
like.
Why
would
you
scale
to
zero?
But
to
your
other
question
I
don't
know
how
high
the
appetite
is
for
like
how
many
people
want
to
be
able
to
scale
their
service
all
the
way
down
to
zero.
B
I
mean
we
see
kata
people
want
to
do
that
all
the
time
for
non-http,
so
it
leads
me
to
believe
that
people
would
want
it.
I
just
think
the
trade-off
of
like
yeah,
you
get
it,
but
it
also
means
that
you're
going
to
have.
You
know
seven
seconds
of
latency
on
that
first
request
and
people
are
like
that's
not
worth
it
I'll.
Just
keep
one
around.
B
Exactly
it's
like,
I
probably
have
the
cores
anyway,
like
I'm,
not
that
tight,
yeah
yeah.
I
agree
interesting.
This
is
very
cool,
though
I'm
excited
to
see.
Oh,
I
might
wait
for
the
alpha
docs,
but
even
looking
this
I'm
like,
oh,
I
think,
there's
enough
here.
Maybe
I
could
get
this
working,
but
I'll
keep
an
eye
on
this
and
then,
as
soon
as
there's
an
alpha
I'll
go
and
give
this
a
spin
as
well.
So
thanks
aaron,
this
is
really
good.
B
All
right
tom
had
to
leave.
He
had
to
to
drop
to
something
sibiux
are
able
to
join.
Today
we
covered
everything
in
the
agenda
any
anything
else.
Anyone
wants
to
flag
questions
comments,
a
quick.
D
Question
on
the
go
for
it,
http
in
the
the
concept
of
the
interceptor
there's
a
discussion
between.
I
think
it
was
me
and
tom
around
using
looking
at
the
services
api
v2.
You
know
the
ingress
v2
and
if
you
squint
into
the
interceptor,
it
seems
like
potentially
an
implementation
of
services.
V2
could
be
the
interceptor
is
that
is
that
a
good
way
to
think
about
it?.
E
Yeah,
I
think
it
is-
I
read
after
you.
I
think
you
put
the
link
to
that
on
an
issue
yeah
wherever
that
was
that's.
When
I
kind
of
skimmed
it
and
then
I
came
back
and
read
it
a
little
better.
But
admittedly
I
haven't
read
the
whole
thing
like
that
entire
talk.
So
I
think
from
what
I
read
yeah,
it
seems
like
the
interceptor
is
an
implementation
of
it
yeah
and
then
I'll
just
add
on
to
that.
We
also
wanted
to
do.
I
think,
there's
some
overlap
too.
E
Cada
core
also
wants
to
do
an
smi
scaler,
so
I
don't
know
what
the
future,
if
any
of
the
interceptor
really
is,
maybe
maybe
the
external
scaler
in
the
http
add-on
just
talks
to
smi
or
just
talks
to
some
other
ingress
v2
or
services
v2
api.
E
Honestly,
the
interceptor
is
just
kind
of
a
quick
and
dirty
thing
that
is
meant
to
be
swapped
out
in
the
future.
If
someone
wants
to
so
I'm
going
to
leave
it
up
in
the
air
for
now
and
just
say,
the
interceptor
is
fairly
performing.
It
works.
Okay,
it
implements
like
the
worst
non-standard
api.
It's
like
a
single
endpoint
that
returns
a
cue
size,
but
that
all
being
said
like
it's
intended
to
to
be
explored
to
swap
out
possibly
swap
out
later
or
improve
or
whatever
it
needs
to
happen.
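As a concrete picture of that "single endpoint that returns a queue size," here is a minimal Go sketch. The path, port, and JSON shape are assumptions for illustration rather than the add-on's actual contract; the idea is simply that an external scaler can poll such an endpoint and report the number to KEDA as its metric.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync/atomic"
)

// inFlight would be incremented and decremented by the interceptor as it
// holds and releases requests (see the earlier hold-and-forward sketch).
var inFlight int64

// queueSizeHandler is the kind of single "return the queue size" endpoint
// mentioned above; an external scaler could poll it and feed the value to
// KEDA as a metric.
func queueSizeHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]int64{
		"queueSize": atomic.LoadInt64(&inFlight),
	})
}

func main() {
	http.HandleFunc("/queue", queueSizeHandler)
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```
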
D
Yeah
and
my
my
naive
understanding,
I
may
have
read
the
docs
a
few
more
times
than
you
is
that
it,
the
services
v2,
is
really
a
gateway
api
and
to
think
of
it
as
a
standardized
interface
for
any
multiple
types
of
ingress
controller
type,
things
it
doesn't
leave,
it
doesn't
define
any
implementation
detail.
D
So
for
like
north
south,
you
know
north-south
traffic,
you
might
use
a
gateway,
english,
so
many
different
terms,
and
you
swap
you
put
in
your
own
implementation
for
a
particular
protocol
or
your
own
implementation
of
a
protocol
and
then
the
role
of
smi-
and
this
is
kind
of
what
I
was
proposing-
was
east
west
traffic
right.
B
Great, perfect, yeah, thanks. I'm glad you dropped that in, Jonathan. Sorry, my camera froze up too, so I just turned it off so you don't have to look at me staring blankly into space. But that said, yeah, any other comments or thoughts from anyone before we wrap up?

B
Sweet
thanks
everyone
for
the
help
nice
for
everyone.
I
guess
I
don't
know
if
anyone's
still
on
the
call
helped
with
the
2-1
release,
but
that's
good
to
see
that
that
went
out
the
door
excited
to
see
some
of
the
stuff
around
http
add-on
drop
and
then
we'll
meet
again
in
two
weeks,
with
potentially
some
more
updates
so
I'll
go
ahead
and
publish
this
recording
to
youtube
and
I'll
be
on
slack.
If
anyone
needs
me
in
the
meantime,
thanks
so
much
everyone,
okay,
see
ya.