From YouTube: 2020-02-06 KEDA Standup
A: Very cool, perfect, yes. Thank you, Sandra. And I saw you on the — I think it was last night — I was looking around the charts repo and I saw you contributing some stuff there, which was great too. Just picking down the list, and again I'm trying to go in kind of reverse order: so Mel, do you want to introduce yourself and maybe give a short intro or update, if you want?
C: I'm on the cloud native team. I am almost done with my PR for the Azure Monitor scaler — I just have to finish writing the tests. I haven't done the identity part, but I'm going to do a separate PR for that. And I guess the other thing is, I'm looking for something to work on after this that is not super high priority time-wise but maybe requires more Kubernetes domain expertise, as I feel like I can contribute there.
A: The one that comes to mind off the top of my head — and I'm sure there are other ones too, and I can even put them in the chat window — but I know there was some issue and conversation around Kubernetes 1.18, or 1.8... I don't know where we're at now — 1.18, anyway. There are some HPA features that are getting released to help you control scaling that we would want to integrate with and surface, when we have time, because 1.18 isn't out yet. But that's one that I saw some chatter about that would be pretty cool too.
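For reference, the HPA features being discussed here landed in Kubernetes 1.18 as the `behavior` field on `autoscaling/v2beta2` HorizontalPodAutoscaler objects. A minimal upstream sketch — the names and numbers below are illustrative, and nothing here is KEDA-specific yet:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 20
  behavior:                             # new in Kubernetes 1.18
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 min before scaling down
      policies:
        - type: Percent
          value: 50                     # remove at most 50% of pods per period
          periodSeconds: 60
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```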
E: Yeah, I think Lucas is here, but he's joining from something else — so, anyway: I'm Zbynek, I'm from Red Hat, I'm working on the cloud functions team. And yeah, I've proposed the PRs to the OperatorHub repo, so once this gets merged we can have the KEDA operator there. I was basically waiting for the 1.2 release. So, let's see — perfect.
A: And you said — how do I spell it? I don't need to put it in the notes; I was just curious where you both work, yeah.
A: I'll add that here too. Oh actually, I think — yeah, Tom added an item here, so that is something we should be able to talk about. Great, thanks for joining, Kevin — good to meet you both. Burke, I know your mic isn't working; I don't know if you just want to use chat and do a brief intro. We won't make you unmute, that's totally fine, and I will proxy for you — I will be your voice. So I'm looking at the list, I think that — oh, Jeremy. Jeremy, do you want to do a quick intro? Hey.
A: Great, cool. And then Burke from Red Hat, Knative — oh yes, that's right. So thanks for joining, Burke. That's one nice thing too, since I am being your voice today: we posted on the Slack channel — it might even have been Burke who posted it — there's a pull request in the Kubernetes... sorry, Knative repo right now around potentially using KEDA to help scale the Kafka source with Knative. It's in here somewhere — I'm not able to find it now, which is great to see. Oh, here it is. Yeah!
E: And the first part is basically the proposal for scaling the sources in eventing things. So there is the proposal from Alex on the Kafka source, and I think that if this gets good feedback from the Knative side, then we'll be able to incorporate it with more sources. So this is the first step, and I suppose that with the upcoming 2.0 release and the duck typing thing, we can...
A: Great, yeah. And Lucas and Zbynek — I posted in the Slack channel for the Knative group; thank you for that link the other day — because there's, like, the KEDA/Knative Slack chatter, and it was more just like we chatted about a month ago. It's good to see some progress. I don't know if it's worth doing — even, like, some of the stuff you mentioned — in terms of, like, hey, once we have duck typing, is there anything else we want to start to do.
A: Okay, so I'll start with a few updates from my side then, and then we can go over some of these topics and see if anything else is amiss. So the biggest one — and I'm kind of proxying for folks who aren't here — yesterday we did roll out the 1.2 release, which brings a lot of great stuff. It has the Postgres and MySQL scalers that Daniel was working on — I don't see Daniel on the call this week, which is fine.
A: We haven't yet updated the charts, so I'm going to talk to Satish today about making sure that we have the 1.2 version of the Helm chart, so that that starts being the default and latest version — but it is out now. I know we talked about doing a release schedule later; I was talking to Satish a little bit about it.
A: This was his first time doing a release, so some of it was just learning how to do it. But I think we might do another release in, like, two weeks or so to pull in the Azure Monitor PR that's open, and anything else — maybe even squeezing in the pod identity one with it; we'll see what happens there. But that's the kind of thinking now: with 1.2 out, 1.3 should be coming in the next two weeks or so. So I'm open to any other thoughts or comments, or if there are any features that people want to make sure we hold off for on that one. Great — any questions on 1.2, any of that stuff? The only other updates, I'm trying to think, from my side: CNCF sandbox — I'll just grab this one. So there's an issue here that Tom — I'm guessing Tom — filed, which is the actual request that we made. It's kind of interesting.
A: So we got assigned the SIG around runtime, which is a brand new SIG — it just got created in October. The short answer is they haven't had a meeting yet all year. They have their first meeting today, but we are not on the agenda. So on the 20th — I think you're even looking at the chat right now — on the 20th, they're thinking we would do a presentation on KEDA. It actually happens at 11:00, so there's one right after this meeting; I'll join at 11:00 a.m. If you go to the CNCF community calendar, any of you are welcome to join as well. They'll be looking around, and then hopefully in two weeks, on the 20th, they will make their decision — or make their evaluation — on onboarding KEDA to CNCF. So I'm hoping that goes well; we'll see what happens. So that's the update there. All right — and I'm just looking here, I noticed Satish joined, which is great. So Satish, I covered the release you did as well — thank you for doing that. All right, any other... oh yeah.
A: It looks like some of that's in the agenda here, so check out that one. A lot of these are Tom's, so we can talk about the HTTP one if anything, but since Tom's not here, I almost kind of just want to — is there any other specific topic or any larger update that anyone wants to chat about, before we maybe talk about the HTTP scaler stuff? And then we can go from there.
A: You can unmute or paste in chat — either one works fine. Okay, sounds like we're fine, so I'm going to jump straight to this one, because I think it's interesting, and Zbynek, I know you mentioned you have specific interest in this. So we've got a good issue here. It links to a few — I think it links to kind of this parent one. KEDA today, the way it works, is always pulling metrics from somewhere.
A: Usually that's some queue or something like Kafka or Azure Monitor or whatever else, and so there's not a direct way to have KEDA scale something that's an HTTP endpoint — whether it's an Azure Function, say, or just an HTTP microservice or whatever else. So there's an issue here which is like, hey, can we do some work so that KEDA can help do the event-based scaling thing with HTTP workloads? And there are a few — actually, some really good conversations where Zbynek chimed in quite a bit.
A: I would recommend looking at the Slack channel — there's a whole thread somewhere here, maybe even two or three threads, around HTTP scaling, and this one with Will that went into kind of, like, Istio and talked about scaling to zero and the options there. I guess the short answer is: because all KEDA is really doing is discovering metrics and then pushing them to the HPA, there are no default metrics for us to pull out of the box with HTTP workloads in Kubernetes. So there are a few kind of proposals on the table.
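Since KEDA's job here is only to surface a metric that the HPA then acts on, the actual scaling decision follows the standard upstream HPA rule. A quick sketch of that rule — the function name is mine for illustration, not a KEDA API:

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    # Upstream HPA scaling rule:
    #   desired = ceil(current * currentMetric / targetMetric)
    return math.ceil(current_replicas * current_metric / target_metric)

# 2 replicas averaging 175 requests/sec each against a 100 rps target:
print(hpa_desired_replicas(2, 175.0, 100.0))  # -> 4
```

Whatever metric a scaler exposes — queue length, requests per second — ends up driving this same ratio.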
A: I imagine we'll probably want to do all of them; it's just a matter of which ones we do first. So, the first one, which works today — there's a link to a blog post here; this is just kind of an FYI. You can manually set this up today: there's a blog post sample here where, if you use something like the NGINX ingress and you use Prometheus, Prometheus is going to scrape the metrics.
A: You can then use the KEDA Prometheus scaler to say, hey, how many requests per second am I getting, or how many requests over a five-minute window am I getting, and you can use that to drive scale. So that's totally possible today with no work on KEDA. It's a little bit — you know, there's a lot of pieces here: you've got to have Prometheus, Prometheus has to talk to your ingress, and then KEDA talks to Prometheus. So there's a bit of jumping around there. There are some other discussions that have come up.
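A rough sketch of that manual Prometheus path, using the KEDA v1-era ScaledObject schema — the deployment name, Prometheus address, and query below are hypothetical placeholders, not anything from the blog post itself:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: http-app-scaler
  labels:
    deploymentName: http-app        # v1-era schema requires this label
spec:
  scaleTargetRef:
    deploymentName: http-app
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.monitoring.svc:9090
        metricName: http_requests
        # requests/sec hitting the ingress over the last 2 minutes
        query: sum(rate(nginx_ingress_controller_requests{ingress="http-app"}[2m]))
        threshold: "100"
```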
A: I know Tom's been looking right now at the Service Mesh Interface, which is looking at implementing some interface standards for traffic metrics. And so the thinking here is: if you are using a service mesh that implemented SMI, KEDA could then ask that service mesh about the metrics — like how many requests per second — and use that to drive scale.
A: So it's still early — I think there's probably a link here to that discussion in the Service Mesh Interface repo, where it's kind of just being designed — but it is super interesting, and I think it makes a lot of sense. It's just that we'll have to work with the SMI spec team to define the spec, then a service mesh will have to implement that spec, and then KEDA would hook into that.
A: Obviously Knative's not using KEDA today for the serving stuff, but just in terms of workloads, that's something that's come up on the Slack channel a lot that's worth noting. So I kind of rambled on for a bit there — anyone else have any thoughts or questions around this HTTP scaling workload? Whether one of those options sounds more appealing, whether we're missing another option, or just thoughts in general on the pattern?
G: I guess I was just curious: do we have an idea of how much buy-in the SMI spec has? Because that sounds really cool to me if it ends up being widely adopted, but I just don't know what its status is with other projects — like, I don't know if Istio is planning to adopt that spec, or other big service meshes.
A: I don't know how true that is, especially with something like this traffic metrics thing that we're talking about. I don't know how many controllers are honoring this stuff, so I can at least take that action item, because I know the folks who are working on this spec at Microsoft — at least from the Microsoft side — to understand, you know: even if we did this SMI spec approach and we helped them close this feature, what does that get us, or is it much more future?
E: Okay, yeah, maybe just a short note: the other approach could be with Knative. So basically, if you use the Knative Serving service for your deployments, we could eventually enable KEDA to scale those Knative objects instead of Deployments. It could be done pretty easily, I suppose, so that could be the other option: basically, for the HTTP part you can use the Knative autoscaler, and for the event-based part you can plug in KEDA.
E: So instead of deploying via Kubernetes Deployments, you would use the Knative Serving service. I think that, basically — because SMI requires you to have a service mesh installed, which adds another layer in your cluster, which could be overhead — and I know that there are some movements in the Knative community to get rid of Istio, or to allow users to plug in some lightweight proxies instead. So this could be the other option, just in the future.
A: It really is — this is a super interesting space for me personally right now, and Zbynek did a great job explaining it. It's hard, because kind of all of the options are a little bit complicated in their own ways: service mesh itself is a little complicated; Knative Serving, as Zbynek mentioned, requires a service mesh, because there's just a lot of capability to drive — especially scale to zero, where you've got to, like, hold on to that request while you're scaling out — that things like service meshes and Knative Serving, or whatever else, help add. Osiris is another piece of tech we had some chatter about that does scale to zero.
A: That said, kind of in saying this, there's a huge opportunity here, just in general, for the community to figure out a simple way to do this. Right now I'm kind of along those same lines as you, Zbynek, where my dream world would be some world where there's a very simplified flavor of Knative Serving that can leverage KEDA-like scaling — that could do both the HTTP zero-to-n, based on requests or concurrency, as well as the non-HTTP zero-to-n. There are ways to get there, like...
A: ...that would require some big changes across the board, but there's huge opportunity here. So I don't know the best place, but I guess if anyone has thoughts or ideas — this space is definitely one to keep your eye on. And as well as the opportunity aspect: even Mel, I know you mentioned looking at — I don't know if there's anything around the HTTP stuff in Kubernetes, but there's still a spot here to simplify this HTTP scaling, and I don't know what it's going to be. It may very well be Knative Serving, as that continues to grow and evolve and hopefully implements some of the KEDA stuff too. So we'll see — who knows. Great; doing a quick check on my list of attendees... okay, so I'm just going to come back to the agenda and see if there's anything else. We talked about service mesh; we talked about CNCF sandbox. I think Tom added this issue — let me pull it up really quick and see if I'm able to grok it just from his update.
A: So there are two models for KEDA — I guess there are three, kind of three flavors — when people build scalers, which are our most contributed thing, because there are lots of event sources and they're pretty easy to integrate with. So the first distinction that I'll mention is that every single scaler has a maintainer, and it's really up to somebody to say that they're the maintainer. The main reason we do this is, like: if somebody wants to use Azure Service Bus with KEDA and they're worried that, hey, if I run into a bug, is anyone going to say they're on the line to fix it — if Microsoft is listed as maintainer, then that's kind of, at least, me and my team saying: yes, if you run into a bug, I'm not going to guarantee we're going to fix it within two hours, but we will continue to fix bugs, and it's not just like we're going to contribute the scaler and then pop off and maybe never look back at it. "Maintainer: community" kind of means, like, hey, somebody came and built this; they may or may not still be around. The Huawei Cloud one is a great example — I can't remember who contributed that Cloudeye scaler. If there's a bug discovered in it in six months' time, I don't really know anyone who will come and fix that bug, and so "maintainer: community" is just kind of best effort.
A: The other aspect of scalers is that the majority of the scalers — well, all of the scalers I'm showing here — are built into the KEDA operator itself: you create a pull request on the KEDA repo, and that's the scaler. There's also a world where you have these external scalers, which means you write some code that gets deployed separately from KEDA, that KEDA knows how to talk to and integrate with — it's the extensibility story. So Tom just created an issue here, which is like: we should probably document how we're going to look at scalers.
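For context, an external scaler surfaces in a ScaledObject as a trigger of type `external`. A rough sketch — the scaler address below is a hypothetical service, and any extra metadata keys are defined by the external scaler itself, not by KEDA:

```yaml
triggers:
  - type: external
    metadata:
      # gRPC endpoint of the separately deployed scaler service
      scalerAddress: my-external-scaler.default.svc.cluster.local:8080
      # any further keys here are passed through verbatim to the
      # external scaler's implementation
```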
A: Do we want to make it so that when people contribute scalers, they have to start as external — so we don't make the package too big — and then they can become internal, or this, that, and the other? Our kind of approach so far has been: KEDA itself is still relatively small from a footprint point of view, and it's super convenient to just install one thing and then have everything, rather than being like, hey, I want to use KEDA with four of these event sources and now I have five pods running in my cluster, one for each scaler. So he has an issue here to talk about it. I have my own thoughts; so far we've kind of been like, we're not at the scale yet where this is giving us pain, but it's probably at least worth documenting. So I'm not going to necessarily give what I think is the answer here — just calling out this issue and some background there, as Tom did put it in the agenda. I'll pause in case anyone has any thoughts before we move on.
A: Governance: no decision made, just covered topics. Okay, so then the last thing I think we'll talk about — unless someone else has any topics — is: we talked about the SMI spec; changelog, Satish — this one might be one that... I don't know if this would make sense to you. Oh, we kind of have this; I'm not sure if this is different.
A
They
actually
have
a
change
log
markdown,
but
we
have
github
releases
so
I'm,
not
sure
if
we
need
a
change
log,
since
we
do
use
gap
releases.
If
I
come
here
to
the
Cato
releases,
then
I
can
see
like
for
1.2.
Here's,
here's
all
the
things
that
changed
so
I
don't
know
time
added
to
the
agenda.
I,
don't
know
if
he's
saying
he
wants
to
turn
this
into
something
on
top
of
like
the
kts
age
site,
but
I
personally
am
ok
with
this.
For
now.
Okay,
that's
all
the
agenda
items
anything
else.
A: Anyone else want to cover anything we didn't cover, or are we good for this week? I will take that as "we are good." All right, thanks. And Mel, I think the only other kind of potential action item that was brought up is, as you're looking for something new: I think I threw out a few random ideas, and I know that there are a few issues here, especially the ones that are marked "help wanted" — I think those are good ones.
E: Yeah, just a short note: I wasn't here last week, but I know there were some discussions — or maybe the week before — about development and debugging, and, you know, about the experience for developers around KEDA. So I have added instructions to the README. Basically, we are using the Operator SDK as the framework, so it's pretty easy — pretty easy, like, to test it all together, which basically gets at the other topic, deploying custom code. Oh yeah...
E: Yeah — you just need to run the Operator SDK's run command, and it will basically take your code and run it against the cluster, so you don't need to build an image or anything. You just run it, and you can even debug the code — you can connect a debugger to it and use it directly from your IDE. So it's pretty easy; you don't have to build an image each time. So, just a short note for everyone.
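A sketch of that local workflow, assuming an operator-sdk v0.x CLI — the subcommand was renamed across SDK versions (older releases use `up local`, newer ones `run --local`), so check `operator-sdk --help` for your version:

```shell
# Run the operator from your working tree against the current kubeconfig
# context -- no image build or push required.
operator-sdk up local --namespace=keda

# To attach a debugger from your IDE, start under delve instead:
operator-sdk up local --namespace=keda --enable-delve
```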
A: Good to know — and I love all these improvements to the contributor experience, because I know a lot of the folks who've joined the call, or even just communicated on Slack or GitHub, mention that there are still some bumps in the road. One question I have here around the Operator SDK: does the Operator SDK CLI work on Windows and Mac? Because I know one of the other pieces of feedback I've heard is some folks saying the Windows development experience was a little bit harder, and I don't know if that's just one example of something that's maybe optimized a little bit more for Mac.
A: Let me add some action items here for me: okay — check on the Operator SDK, check SMI spec adoption. And before I sign off, I'm actually going to look at whether I had action items last week that I didn't do. We did do the 1.2 release; we did help review the Azure Monitor PR. I did not create a release schedule — I screwed up. I don't know if Rajasah ended up creating this issue around newline characters; I didn't see it. And then — oh, this one, actually... Nixon, you're on the call.
A: There was some question around — I guess, I believe it was last week — someone mentioned that when they were debugging one of their custom scalers, some of the log messages weren't showing up, and they were curious what they could do to get the log level up. And I realized a few things: one is that we don't document that, which is fine — I can take that as another action item — but the other is that I wasn't even totally sure on the right levels.
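On those log levels: an operator-sdk-based operator of this era uses the zap logger, so verbosity is typically raised by passing a zap flag to the operator container. A hypothetical Deployment fragment — the flag name assumes the operator-sdk zap integration, so verify it against the binary's `--help` output:

```yaml
spec:
  template:
    spec:
      containers:
        - name: keda-operator
          args:
            - "--zap-level=debug"   # default is info; debug surfaces scaler-level logs
```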