From YouTube: 2020-10-29 - KEDA Standup
Description
Meeting notes: https://hackmd.io/cEi_FerdQvyTvB1i-5U0kg
A: How's it going? Hey Jeff, just making sure all my stuff is set up. I've had to start this new set of meetings that Anirudh and I are helping with that's at 8:30 am our time, so I'm always barely finishing that meeting and jumping to this one, but we are good. All right, let me just pull up the agenda. I see we've got a good few folks here. I think we're going to have some good discussion today; this should be a great one.
A: So thanks in advance for joining, all of you who came. I'm actually excited about this meeting today. Good evening/morning, Tom, thanks for being able to join. For folks who are joining, this is our last Thursday standup for the foreseeable future. Starting next time, on November 10th, we'll be meeting on Tuesday, and we'll also be moving this an hour forward. Zbynek, I think you already had daylight savings; I'm about to have daylight savings on Sunday, so maybe it's not an hour forward for you, but in general the UTC time is moving forward an hour. So first things first: if you're new to this meeting or watching this recording, that's the most important thing to know.
A: Spooky slides, yeah. I got to do a live stream for Azure Functions yesterday for some of you folks, and it was spooktastic. Maybe I'll start using OBS when I do these KEDA standups to make it more intense; Zoom is just so slick.
A: Okay, we'll jump right into it. We've got a bunch of topics here to start with. I think there are definitely some folks here that, say, you haven't met; I see Aaron's on the call and he's got a good topic on the agenda as well. So maybe we'll just start and do a quick round table, if folks want to introduce themselves.
A: The big one first: you can just share your name, and then if there are any topics you want to discuss, especially if they're not already on the agenda, feel free to flag them, and any kind of updates too. So maybe I'll start. I'm Jeff, I work at Microsoft on Azure Functions and on KEDA. The only update I have before we go into some of the agenda items: I spent a little bit of time this week and updated the samples.
A: So when you go to the samples list, we break down what's a 2.0 sample versus a 1.0 sample, and I moved some of the samples, like RabbitMQ, to 2.0. A few of them I'm blocked on, because anything that has Azure Functions is waiting for some Core Tools work that Tom's helping with. That's the big thing. And then there have been a few Azure customers who are using KEDA in Azure Kubernetes Service, about three of them, that I've recently been having hour-long deep dives with.
A: I actually have one... oh wait, no, that's tomorrow. I have an hour deep dive tomorrow with a finance company; I was terrified for a second because I haven't prepared my slides. So that's it. I'm going to go down the list and call folks out. Tom, you are next on my list, so I will let you go.
D: Hey all, my first time at this meeting. I'm Aaron, I work as a cloud advocate focused on Kubernetes and Go at Microsoft.

D: I'm excited to show this demo today. I've been working with KEDA for months, but contributing to KEDA I just started; I submitted two tiny little PRs and then this thing that I'll show today. So thanks for having me.
A: Great, I'm very excited. I haven't even seen this, I've only read about it, so I'm actually very excited as well. Anirudh, I see you next.
E: Hi, I'm Anirudh. I work with Jeff on the Azure Functions team at Microsoft.
F: Yeah, you got it right, actually. Great, hey, how's it going? You can call me Kai.
F: I'm a DevOps engineer at Carvana. I'm actually new to this meeting; I saw it posted in the KEDA chat and I just thought I'd jump on and see what it's like.
A: Awesome, yeah, thanks for joining. Feel free to listen in and chime in if anything pops up. And did I spell Carvana right? It's not super important, but just curious. Yeah? You spelled it correctly? Sweet. I didn't know if it was going to be with a K or something. All right, back on my list: James.
G: I'm James Dang. I work with Kai at Carvana on the DevOps team. It's my first time on the call as well.
H: Hi, I'm Mir Mir. I'm a software engineering manager at a company called Linkier, spelled L-I-N-Q... like that.

A: I got it.
A: Sweet. Anything, or just kind of dialing in and listening? Any topics you want to make sure we add to the agenda and save some time for?
H: Yes. So Shashik and I work for the same company and we wanted to create a PR, and before we create the PR we just want to make sure what you think about the idea.
A: Great, sweet, yeah, we'll definitely take some time to check that out. Thanks for joining, Mir. And on that, maybe we'll go to you next.
I: Yeah, my name is Shashik. I work as a back-end engineer at the same company, Linkier. I just wanted to discuss that PR. We've been using KEDA for a few months now and it's been really helpful for us, so I just wanted to see if that PR contribution would be worthwhile.

A: Absolutely.
B: Hi everyone, I'm Ritika. I work at NEC and we are just investigating KEDA for something in our company. Thanks for having me.
A: Great. Any topics I should add here, or just listening in?
B: Yeah, for now just listening. We're interested in KEDA and want to investigate and see if we can understand more of it.
A: Sure, sure, yeah, feel free to chime in with any questions. And where did you say you were working?
A: Yeah, like this? Yep, perfect. Thank you for joining, great to have you. All right, we are almost there; I've saved some of the best for last. Thank you for joining, Shubham.
A: All right, last I think is Zbynek, who needs no introduction, but I will let him introduce himself regardless.
K: Thanks, Jeff. Hi, my name is Zbynek. I'm working as an engineer at Red Hat, on the serverless team, and I have no other topics than what's on the agenda.
A: All right, so let's talk about KEDA 2.0. If I come over here to our releases tab: we had 2.0 RC2 six days ago, which had the new IBM MQ scaler and a few of these PR merges tucked in. We just want to checkpoint and see... oh, and I'm actually curious about this Helm stuff, because I haven't dug into it yet, but something happened with Helm 2. Anyway, I'll just pause: any thoughts on 2.0?
A
How
do
we
feel
about
the
the
go
live
release
of
2.0
official
and
then
helm?
I
guess
if
you
net
tom
anyone
else,
if
you
want
to
chime
in
and
kind
of
give
an
update
on
where
we're
at.
K: I think we are on a good way. We have received a couple of small fixes since RC2, and Suashi did a great job fixing the memory leak problem; it was the biggest issue we had. So I guess that if nothing serious appears, we can do the release next week, probably on Wednesday or something like that, in the middle of the week. So yeah, that's it. Maybe I will hand it to Tom for the Helm issues.
D: I noticed a few of those Helm issues in the GitHub issues. I worked on Helm for a while, so if someone is stuck there and wants a hand, DM me on the Kubernetes Slack or whatever.
C: In the RC I didn't test the deployment, so apparently I broke the Helm chart, and I reverted that. So we need to basically migrate to the full Helm API version 2 again, but make sure the CRDs are still installed correctly. That's the first issue, which is shipped as of today with RC3, which just reverts the change.
K: I don't have questions. Yes, sorry, so Tom: the issue is maybe not about releasing for Helm 2, but about fixing the charts. So if we are able to fix the charts so that they work with just Helm 3, it is okay, right?
C: Yeah, we can release the Helm chart separately, and I will reopen the removal-of-Helm-2 issue.
C: I just have to take some more time to have a look at it, or somebody else can do it. But if we ship KEDA 2.0 as stable already, we can still remove Helm 2 support and ship the Helm chart version 3 with KEDA 2.
A: What doesn't work right now with Helm, if 2.0.0-rc3 works?
C: It stopped doing it, yeah. You can see in the PR what I did: just bump the version and remove the webhooks, the CR hooks, but the CRDs are not really installing correctly in that case. So we have to look at why, but I'm also busy with the blog post, so...
A
Yep,
that's
fine.
I
I
think
we
can
I'll
see
if
I
can
get
a
app
back
on
see.
If
this
is
something
we
can
do,
I
think
I
was
the
one
who
actually
did
the
pr
to
do
the
helm3
support.
So
I
have
done
some
of.
A
Maybe
I
can
even
spend
some
cycles
tonight
and
see
if
anything
jumps
out,
but
not
a
helm
expert,
so
we'll
see
probably
less
of
one
than
even
utah.
So
all
right,
that's
great,
so
the
helm
one.
A
Any
reason
we
don't
want
to.
Maybe
that's
a
better
question
sounds
like
we
want
to
all
right
and
then
we'll
have
to
do
a
few
things
at
the
time
I
can.
I
can
work
on
azure
applications,
specifically
like
an
azure
update
or
maybe
an
open
source
at
microsoft,
blog
post
tom.
You
mentioned
a
blog
post.
I
don't
know
if
this
is
one
of
the
different
like
use
case
blog
posts.
A
Do
we
want
to
do
a
blog
post
on
kda.sh
and
if
so,
I
can
write
it
or
if
somebody
else
wants
to
write.
I
know
we
have
the
one
on
beta,
and
so
it
might
be
pulling
some
of
the
cool
stuff
from
here
and
just
saying,
like
hey,
2.0,.
C
There's
a
google
doc
which
we're
working
on
for
the
cada
sh.
A
And
it's
possible
that
this,
the
azure
amplification
one
might
go
out
on
like
november
7th
after
this
has
already
passed,
but
I'll
see.
Sometimes
they
have
like
these
weird
things
around
links,
not
working,
and
they
don't
approve
it
until
the
links
actually
work.
End-To-End,
but
we'll
see
what
we
do
anything
else.
We
have
to
do
from
the
release
process.
Who
would
be
the
I
guess
who
wants
to
take
on
the
ownership
of
this?
I
guess
that's
maintainers.
We
could
probably
close
on
this
offline.
We
don't
have
to
close
it
here.
K: Yeah, one more little thing regarding this: we can probably move the development back to the master branch, or we should probably agree, maybe to main, right? I am good with this, so we can probably do this change right before the release. Okay, I can do that.
A: And I just updated the samples repo to main last night, but obviously that was a super easy repo to update; it wasn't hooked up to anything. But I felt good about it.
A
Yes,
all
right,
this
is
exciting
november
4th
unless
something
terrible
arises.
That's
super
exciting
all
right
before
we
get
to
the
demo
last
one
here
tom,
you
want
to
chat
a
bit
about
the
stand-up
moves,
which
I
guess
I
teased
I'll.
Let
you
just
comment
on
because
I
know
you
weren't
here
last
time
when
we
talked
about
the
sum
too.
C: So I can do it, at least for a few months, because it will just be once a week.
E: I think it's the same here, especially since all of us are working from home now, so I think I'm also fine; I can also do it. I think it's fine, it's fine, yeah.
K: Yeah, or we can do a compromise, maybe move it by 30 minutes. I don't know, I'm not sure; I'm happy with all the options.
A: I don't want to put a lot of weight on it, but I love all of the people on the call. Does anybody here have a negative reaction to 1600? Maybe in chat you can say plus one. I'm actually calling from a terrible time zone right now, it's like 10 o'clock at night, and that's great for me. But I think this is fair, Tom, and I think we can do this, so I'll keep an eye out.
E: Yeah, I completely agree, Jeff. I think both Tom and Zbynek have our...
A: Sweet, sweet, all right. Now we get on to the fun stuff. Aaron, I'm just going to turn the time over to you. I think you should have permission to share your screen or do whatever else; I'll just give the time to you, and you can give a bit of background and start the conversation.
D: Sweet, let's see: share screen... "host disabled participant screen sharing."
D: So we had been using KEDA for a while, and it was kind of a pretty natural choice to get that autoscaling component. The rest of the machinery behind the scenes is pretty much a little control plane to set up all the other machinery, like ingress and services and the deployment that's going to be scaled and everything else, and then a CLI on the front end for the developer who wants to push the container up to the cluster and get it, you know, quote-unquote production-ready on the internet.
D: So before I go to a little demo: there are three components to this thing, and what we were looking at is to see which, if any, of these components make sense inside of KEDA versus what makes sense as a complement to it. And then, if something is a complement to KEDA, where should it go? Outside the KEDA org, inside the org, or whatever; that's kind of a secondary discussion.
D: There's an external scaler implementation, which is external-only; it'll probably turn into an external push scaler at some point. And then there's a reverse proxy as well, and the reverse proxy is basically a hop between ingress and the back end, the actual app that's scaling. That doesn't really need to be there; it could just be your standard ingress controller, but it was a little bit easier for us to build, for reasons, at the outset.
D: So let me just show a quick demo, because I think that'll kind of show the feel, the developer experience we're trying to achieve, and then I'll show some of the back end as well, how that works on top of the developer experience. Let me open another tab here.
D: So I have this khp command-line tool. Right now we've called this thing "KEDA HTTP," for lack of a better name. Right now it just has a run and a remove, so run takes in the name of the quote-unquote app, and then a container and some other metadata. And then behind the scenes I have k9s, the little dashboard, running, which is just going to show the scale-up and scale-down stuff. All of this is in a namespace.
D: I kept everything inside one namespace, so there's KEDA running here, and then there's that proxy component, which is really, like I mentioned before, the ingress, the external scaler, and the little control plane that this CLI is talking to. So let me do a khp run.
D: I've got this little xkcd comics renderer. -i is going to specify that I want to run that image, and -p is going to say I want to run it on port 8080. We kept the interface to this simple to start with. There's more...
D: ...we can do, of course. We were looking at exposing a YAML-based but simpler type of interface, maybe sort of your docker-compose-like interface, or something else on the same level of simplicity, to be one step beyond and one step more level of control than the CLI, but without having to go whole-hog into "I need to learn all the Kubernetes YAML to get this thing to work." So anyway, it is just the CLI for now. So I'm running that; the app is deployed.
D: You've got your xkcd thing running; all the other stuff is available as well. Scaled objects in k9s... there we go, so you've got your xkcd scaled object now, and then there are services. Oh, this is really slow.
D: Services are there, and there's more: there's a deployment, there's an ingress, there's a bunch of other stuff there as well. But the CLI took care of all of it, and the developer, the person using the cluster, didn't have to care. The KEDA operator, not this KEDA operator, the actual person who installed KEDA and who is operating the cluster, has a record of all of this happening inside of that control plane.
D: So you also get a subdomain, and this is kind of fudged at this point for the prototype. But we have a PR in that uses the HTTP application routing add-on to get a real automatic subdomain-creating thing; for now it's fudged. I'll just show the web app: it just renders xkcd comics via the comic ID. So just for fun, let me put it in chat.
D: And you can put whatever ID you want in that query string. So the last thing I'll show, just to prove that it works and it's on KEDA: I'll scale it up and down. Let's do 200,000 requests at a concurrency of 2,000.
D: xkcd.containerapps.dev, let that run. And I think I set KEDA to a polling interval of 10 seconds, maybe 30, so this might... oh no, okay, yeah. So KEDA scales up the ingress and the actual app. We may change how this works, or at least we may think more about how this works, but yeah, that's KEDA obviously working. I show this because it's sort of the proof that all the machinery, the scaled object...
D: ...everything else was set up, and KEDA is working properly to scale this thing up and then scale it down. Scaling down will take a little bit longer, so I won't hold up the rest of the meeting with it, but basically this goes into cooldown, just like any other KEDA app, and then starts scaling everything down. The scaler proxy will scale down to one, and then this xkcd thing will eventually scale down to zero, but that will obviously take longer.
D: This one, the actual app, the one that the person deployed, will scale to zero. We didn't scale the proxy down to zero, just because it needs to be up and running to route to other apps as well as this app.
C: But it will scale the proxy down to only one instance in this case, then? Yeah? Okay, right, all right then.
D: Yeah, right now it doesn't really hold the request very well, for lack of a better term. I think it has a timeout of 20 seconds, and we should probably be a little bit smarter about how long it waits. It could be smarter about it: it could look at the back end, see if there's a pod currently pending, and then hold the request for longer than that. But yeah, right now it's dumb.
A: Yeah, this is really cool. As I mentioned, I haven't seen this before, I'd just read about it. Super exciting. The question I have, and this is partly my ignorance about how all of the proxy stuff could work, is: what if I wanted to use nginx, for example, for my ingress, and I do Let's Encrypt certificates?
D
You
could
use
it
with
this,
but
really
this
proxy
shouldn't
be
a
proxy.
It
should
probably
just
create
a
new
ingress
object
and
let
nginx
do
all
the
magic
and
then
this
thing
this
would
be
one
less
piece
of
machinery,
then,
could
offload
the
proxy
off
to
engine
x,
but
right
now,
nginx
could
point
to
this
proxy
and
things
would
still
work.
It
would
just
be
another
kind
of
unnecessary
hop
to
be
honest
with
you.
A
Sure
yeah
and
it
would
only
be
like
it-
you
know
I
I
can't
do
things
like
the
you
know,
auto
cert
bot
thing
on
ingenix,
and
that
was
the
only
one.
It's
like.
Okay,
if
I
go
with
this
approach,
what
happens
to
those
capabilities?
So
it
was
just
trying
to
map
it
together,
but
yeah
any
other
questions
from
folks
conversation
topics.
D: It's shared, they're all just multi-tenant, and they match the subdomain to the service name; there's a ClusterIP service per application. So right now they are stateful, because they keep the metrics; they're tightly coupled with the metrics API.
D
The
external
scaler
excuse
me,
so
they
keep
state
on
how
many
requests
are
coming
through,
so
that
they
can
feed
it
into
cada,
but
they
should
be
broken
out.
So
the
proxy
can
be
just
completely
stateless
right
now,
it's
multi-tenant
anyway.
So
it
would
make
a
lot
of
sense
to
be
able
to
have
it,
stateless
and
then
be
able
to
scale
it
up
and
down
and
then
yeah
and
then
it
can
just
you
can
spread
them
out
all
across
the
cluster.
K: And if I understand correctly, all the objects and everything are created through the CLI. So have you been thinking about creating some controller or some operator that will manage the resources? Because basically, if I manually delete, for example, the ingress or any other object, it won't be reconciled, right?
D
Right
yeah
right
now
the
cli
talks
to
a
tiny
little
rest
api.
Actually
that's
in
the
proxy
and
yeah.
We
have
to
make
that
better.
I
think
you're
right.
Maybe
the
best
approach
is
to
have
an
app
object.
I
don't
know,
but
maybe
the
best
part
is
put
an
app
object,
app
object
into
kubernetes
and
then,
like
you
said,
have
a
controller
running
that
that
does
that
reconciles
it.
K
Or
you
can,
you
can
create,
like
let's
say
if
you
define
in
the
in
the
scaled
object,
the
external
external
like
let's
say
scalar,
but
the
one
that
you
are
using
and
you
can
probably
parse
the
metadata
in
the
scalar
and
the
scalar
itself
could
be
the
controller
that
creates
all
the
objects
based
on
the
metadata
in
the
scaled.
D
Yeah,
I
should
actually
I'll
actually
take
back
actually
the
that
statement
that
we
shouldn't
have
a
proxy
or
we
could
get
rid
of
it
completely.
D
The
the
one
reason
why
you
might
want
to
have
one
is
this
thing:
I've
seen
this
thing
in
k
native,
that's
called
an
activator
and
that's
basically
kind
of
a
background
controller.
D
That's
smart
enough
to
know
whether
there
is
a
pod
running
for
an
app
and
when
there
is
a
pod
running,
it
updates
the
ingress
object
to
point
directly
to
that
service.
When
there's
not
a
pod
running,
the
ingress
object
points
to
it
and
it
that's
when
it
holds
the
request
until
there's
a
pod
running
it
forwards
the
request
and
then
updates
to
point
the
ingress
to
that
pod,
so
that
that's
as
far
as
I've
thought
through
that's
the
one
reason
why
this
you
know
quote-unquote
proxy
component
should
be
around
should
be
a
thing.
A: How do I do it with HTTP? That's what I would love to figure out: how we could first-class this in many ways too. I think at one point we tried to integrate with Osiris. Osiris did things slightly differently, but in general I think the big problem was that Osiris is just kind of on the shelf, from everything it looks like from that repo. I think it would make sense to figure out which parts of this... I would imagine things like that proxy would probably be a separate project that's related to KEDA, that maybe we make super easy to plug in with it, but it would be like kedacore/keda-http or whatever.
A
I
don't
know
exactly
how
to
pull
things
apart,
but
I
would
love
to
see
how
we
could
start
to
move
this
and
whether
pieces
of
improvements
or
functionality
go
into
the
main
keto
core
at
the
very
least.
Making
it
really
easy
for
people
to
opt
into
this
additional
functionality
is
something
I
would
love
to
see.
Progress
being
made
on.
D
Yeah-
and
I
can
I
can
break
this
up-
I'm
not
going
to
break
it
up
in
the
code,
but
I
can
break
it
up
conceptually
and
I
can
write
sort
of
a
proposal
as
well,
because
there
yeah
there's
a
lot
in
here
that
that
is,
you
know
out
of
scope
from
cada,
but
there
may
be
parts
like
maybe
zveinik,
maybe
something
like
the
controller.
K: But maybe that way the issue is more transparent for other folks who are not directly involved, so if they search for HTTP and KEDA, they will find the issue immediately. So you can open an issue, put a proposal in there, and maybe link the documents in there, if you want to put it in a document.
A
What
I
was
thinking
and
that
way
like
as
we
have
comments
or
evolve,
you
don't
have
to
kind
of
like
event,
source
the
github
issue
to
figure
out.
What's
the
current
state
but
yeah
github
issue
that
points
to
a
google
doc
where
we
can
go,
add
comments
and
update
and
kind
of
have
be
the
source
of
truth,
I
think,
would
be
fantastic,
yeah,
aaron
anything
you
can
do
to
help
here
and
ping
me.
If
there's
anything
I
can
do
as
well.
This
is
this
is
cool.
A
Cool
sweet
all
right
so
back
to
the
agenda.
We've
got
two
more
topics:
what
at
least
on
the
agenda-
and
I
let's
see
if
I
can
share
nope.
I
can't
because
I'm
not
the
host
anymore.
It's
fine!
The
way,
I'm
not
really
sharing
anything
with
the
agenda,
but
if
you
right
click
me
and
make
me
the
host
again,
then
I'll
I'll
share
my
screen
whenever
the
next
one,
though,
is
from
you
tom,
I'll,
open
the
issue,
while
I'm
getting
share
permissions.
A
I
actually
didn't
read
this
at
a
time,
so
I'm
not
even
sure
what
it
is,
but
it's
cada
single
source
of
truth
for
scalar.
C: "...metrics," from somebody, I don't remember his name, but it was an interesting idea. Since we already scrape all the metric sources, well, the scaler sources, to make scaling decisions, that means we have the metrics, and his idea was to provide metrics for all of these so that they can also be used in other systems like Prometheus. We do provide some metrics already, but not the exact metrics that we got from the dependent system.
K: Yeah, I think it does make sense, because currently we are exposing some metrics in the metrics adapter, the thing that is pushing the metrics to the HPA, to the actual scaling in Kubernetes. So we are exposing those metrics, but, for example, if you are using scaled jobs, so you are scaling your jobs, you don't have such metrics, because scaled jobs don't use the metrics server or the metrics adapter. So I guess it would make sense to expose the metrics directly from KEDA.
K
There
are
some
like
hooks
already
like
prepared
by
the
operator
sdk
framework
or
the
build
up
framework,
so
it
should
be
pretty
easy
easy
to
expose
them
directly
on
operator,
and
it's
just
about
like
collecting
the
metrics
and
exposing
them
on
the
on
the
right
interface.
So
I
think
it
does
make
sense
to
to
have
it
because
then
we
when
we
can
create
some
kind
of
nice
dashboards
or
something
like
that
to
see
what
was
the
scale
and
etcetera
sure.
A: So if I understand this right, and I'm just going to try to echo back what I think I understand: today, when you use scaled objects, we pull some metrics and then we publish them to our external metrics adapter, which the HPA can then pull from, which in theory works fine. If I'm reading this issue right, the problem is that some code paths, notably scaled jobs, don't publish the metrics to our external metrics adapter. We just use them to drive scaling decisions in the operator itself, which means those metrics are invisible to something like Prometheus. So they're saying: could the KEDA operator itself expose a metrics endpoint, so that Prometheus could go get metrics for both scaled jobs and scaled objects? Is that more or less accurate?
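A minimal sketch of what "the operator exposes its own metrics endpoint" could look like. This is not KEDA's implementation; a real operator would use the Prometheus client library (or the hooks the Operator SDK already provides, as mentioned above). This just hand-writes the Prometheus text exposition format to show the shape of the idea, with a made-up metric name:

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"sync"
)

// scalerMetrics holds the last value observed per scaler, e.g. Kafka lag
// for a ScaledJob. In the operator this would be filled in wherever
// scalers are polled to make scaling decisions.
type scalerMetrics struct {
	mu     sync.Mutex
	values map[string]float64
}

func (m *scalerMetrics) Set(name string, v float64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.values == nil {
		m.values = map[string]float64{}
	}
	m.values[name] = v
}

// Render writes the stored values in Prometheus text exposition format.
func (m *scalerMetrics) Render() string {
	m.mu.Lock()
	defer m.mu.Unlock()
	names := make([]string, 0, len(m.values))
	for n := range m.values {
		names = append(names, n)
	}
	sort.Strings(names) // deterministic output for scrapes
	out := ""
	for _, n := range names {
		// Hypothetical metric name, not one KEDA actually exports.
		out += fmt.Sprintf("keda_scaler_metric_value{scaler=%q} %g\n", n, m.values[n])
	}
	return out
}

func main() {
	var m scalerMetrics
	m.Set("kafka-lag", 42)
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, m.Render())
	})
	// http.ListenAndServe(":8080", nil) // Prometheus would scrape this
}
```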
A: Yeah, it sounds like this makes sense, and it sounds like it's actually not super complex. I even see some conversation here; it looks like the operator framework has some stuff. So yeah, Tom, I don't know what next steps you have in mind, but I think the consensus, at least for folks on this call (and let me make sure I have chat open as well, in case anyone's chiming in there), is that this feature would make sense to add. Yep.
A
Great
yeah,
that's
pretty
cool
all
right,
so
mirror
had
to
drop
off
but
chelsea.
I
believe
you
are
still
here,
and
so
the
last
item
on
our
set
agenda
was
discussing
this
pr
that
you
were
looking
to
open
so
I'll
just
turn
the
time
over
to
you
to
chat
and
if
you
need
to
share
your
screen,
let
me
know
and
I'll
I'll
give
you
those
permissions
as
well.
I: Right, yeah, I guess nothing to share. So for our use case, we use Kafka a lot; a lot of Kafka scaling on pretty much the queue size on Kafka topics.
I: Basically, I think how it works right now is that the zero-to-one process, scaling from zero instances to one, occurs as soon as a single message comes into the queue. For our case, we found it useful to wait for a certain number of messages to accumulate before scaling from zero to one, so we added a flag called minimum threshold that allows messages to accumulate until that minimum threshold is met before scaling up to one instance, and then, after that, the HPA takes over.
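The gist of the proposed change, sketched in isolation: this is not the actual PR, and `minimumThreshold` as a name is just what was described on the call. It is the activation decision a scaler would apply before scaling zero-to-one:

```go
package main

import "fmt"

// shouldActivate decides whether to scale a deployment from zero to one.
// Stock behavior activates on any backlog (lag > 0); with a minimum
// threshold set, activation waits until that many messages accumulate.
func shouldActivate(lag, minimumThreshold int64) bool {
	if minimumThreshold <= 0 {
		return lag > 0 // default behavior: a single message activates
	}
	return lag >= minimumThreshold
}

func main() {
	fmt.Println(shouldActivate(1, 0))   // default: one message activates
	fmt.Println(shouldActivate(5, 10))  // below threshold: stay at zero
	fmt.Println(shouldActivate(12, 10)) // threshold met: scale to one
}
```

Once the deployment is at one replica or more, the HPA drives further scaling from the regular metric, so this gate only matters on the zero-to-one edge.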
A: I definitely have some thoughts on this one, but I'll give others a chance first.
K: Maybe the other question is: do we want to make this exclusive to Kafka, or do we want to make it part of the spec, so expose it for the other scalers as well? That's the general question, because at the moment we can put the property directly in the spec, as we have maximum and minimum replicas, we could have this field, or we can put it into the scaler spec, so into the Kafka trigger.
A
Like
up
where
we
have
main
replica
yeah
yeah,
my
my
two
cents
were
like
the
scenario
makes
a
ton
of
sense
even
coming
from,
like
the
other
half
of
my
life,
which
is
azure
functions.
This
is
something
I
hear
from
that
site
as
well.
So
I
think
the
pattern
is
sound.
I
don't
think
it's
an
anti-pattern
in
any
way
yeah.
I
found
this
issue
and
I
guess
that
is
the
the
biggest
question
is
like.
A
Do
we
build
this
in
a
way
that
it's
unique
to
kafka,
or
in
this
case,
the
issue
that
they
said
they're
like
hey?
I
might
actually
want
to
do
this
with
the
event
hub,
scaler,
the
rabbit
on
cusco
or
the
kafka
scaler,
and
so
is
it
something
that
we
could
first
class
as
a
pattern
that
either
scalars
opt
into,
or
at
least
is
consistent
across
them
and
maybe
even
automatic
across
them.
I
don't
know
enough
about
the
code
base,
but
to
me
that's
kind
of
the
biggest
question.
A: Trying to think... you might not know, but I'm curious if there's a spot where we're doing anything like this today. Where, before a metric, or I guess it's just before activation: whenever the activation path gets triggered, it checks to see if this property is defined and makes sure that whatever the metric is, it's less than the activation threshold. I don't know if that's...
K: Yeah, off the top of my mind, I guess it should be part of the IsActive function, the one that's checking the scalers. So basically we would have to modify this function for all the scalers so that it does the check, as you said.
K: Unfortunately that's the downside here, because each scaler has to implement IsActive itself. But we can probably start with something like: if a user specifies this on a scaler that doesn't implement the feature, it would basically just be skipped.
A
Sure
right
right,
yeah
yeah,
so
it
maybe
even
the
one
is
like
if
we
started
with
kafka,
at
least
from
an
example
of
like
it's
very
I
I
I
don't
know
enough
about
going
how
it
works.
It
feels
strange
that
we'll
copy
paste
the
same
logic
across
them
all,
but
it
might
even
be
fair
and
shashik.
A
Since
I
know
you
want
to
do
this,
if
like
we
could
almost
start
with
what
this
would
look
like
in
kafka,
just
keeping
in
mind
that
before
we
merge
it
in,
we
might
we'll
just
have
to
have
a
discussion
which
is
like.
Can
we
do
this
everywhere
else?
And
maybe
it's
a
copy-paste
exercise,
and
we
pretty
much
just
say
I
mean
there's
a
lot
of
this
actually
today
that
is
more
or
less
copy-paste
exercise
where
it's
like
load
in
the
metadata.
A
So
I
could
just
see
there
just
being
some
consistent,
very
standard
pattern
here
and
is
active,
which
is
before
you
go
to
whatever
line
here
is
telling
you
to
scale
the
thing
we
just
do.
A
check
which
is
like
hey
actually
is
the
metric
we
just
found
in
this
case
lag
like
is
lag
less
than
the
the
threshold
that
they
defined
and
I
think
the
way
we're
talking.
If
I'm
understanding
this
right
too,
it
would
even
be
that
in
the
spec
that
we
have,
we
might
add
another
property
like
a
first-class
property.
A
That
is
something
like
minimum
threshold
that
other
triggers
could
all
pull
from.
K: Okay, so maybe, if you can, please open the PR, or maybe comment on that issue, and show us what you have done for the Kafka bit. Then we can think about whether this is good enough to be replicated across all the scalers or something like that. That would be great.
A: Yeah, thanks for joining, and yeah, the scenario is absolutely sound. Any other questions you have, Shashik, while we're on the call?
A: Great, yeah, and if you need anything in the meantime as well: if you're not on it already, GitHub issues are great. I'm really bad at looking at GitHub notifications, but I'm pretty good at looking at Slack, so you can reach us all on the Kubernetes Slack channel as well if you have questions throughout this week or next, if you start to spend some cycles on it.
K: Yeah, Jeff, could you please post that issue in the agenda, if you haven't done it?
A: I'll paste it in chat here for convenience, in case you want the notes open, but I'll pop it in the meeting notes too. Great, that's all the scheduled agenda items. Anything else anyone wants to bring up while we're all on the call, before we meet again on November 10th, after 2.0 has gone out, and we are all going to be bright-eyed and bushy-tailed earlier in the morning, or earlier in the evening, for many of you?
C: I have one more thing that I forgot to mention: we finally sent the designs for the merch to print, and yesterday I got a FedEx notification that something is coming my way, which was not the idea. Normally there would be a website where you can order the merch yourself, but it sounds like I'll be doing a lot of shipping soon.
C
We
have
stickers
t-shirts
and
hoodies.
A: Great, awesome, perks of the gig. All right, that's fantastic, finally. And if you want that evangelized at all, I'm more than happy to amplify it, but I also don't want you to be overwhelmed with all the people who want swag. If we've learned anything from Hacktoberfest, it's what people will do...
A
For
a
free
swag,
oh
boy,
all
right
thanks
so
much
everyone
from
for
joining
and
yes
fulfilled
by
tom
you're,
you're,
very
reliable,
even
in
pandemic
times
thanks
everyone
it's
great
to
have,
especially
some
of
you
folks
who
haven't
joined
before.
Hopefully,
this
was
useful,
as
mentioned,
feel
free
to
reach
out
on
slack
or
on
github,
and
we
hope
to
see
a
few
of
you
in
a
few
weeks
and
have
a
bit
of
a
celebration
on
keda,
2.0,
so
great
job.
Everyone
we'll
chat
again
soon.