From YouTube: App Runtime Platform Working Group [Apr 5, 2023]
B
Yeah, so I heard from the other working groups that they are running their CI pipelines in their working group GCP account. We also have a working group account. We use it for some buckets already, but we don't do anything else; for example, Compute Engine is still disabled and no VMs are running. I just think it makes sense to run our CI stuff in there. I know that you also have this Fun Time stuff running, but not in this account.
B
Maybe you want to move it at some point, maybe not. But we have some CI that we host on our own infrastructure, and I think it makes sense to move it there. I just wanted to cross-check whether that's fine, whether I can just enable Compute Engine, and also how to structure this.
B
Should we have a shared repo for the whole working group, or should we just start with our own repo? For us, for example, we now have the pcap release and the HAProxy release, which we host in two completely independent repos, so hosting the CI in either of those doesn't make a lot of sense. We wouldn't necessarily need anything new for us, but we could also have a shared repo. That's my ask.
A
Yeah, I think moving things over and using this account is a good idea. As long as you have the permissions, I think you're totally allowed to enable Compute Engine and start using it. When I've talked to Chris Clark about this, there's no set number for how much we're allowed to spend; it's more about what is reasonable and keeping an eye on it.
A
They do not. Okay, whatever seems sane to you. As for a shared repo for the working group, I would say let's keep them as separate repos for now, and then we can see what you do, because it sounds like you have a little bit more momentum right now to move things in. So we can see what patterns you start with and whether we want to follow those patterns for our other areas.
D
Yeah, thanks. I just wanted to start a discussion here, or rather a follow-up discussion, about what's going on and our next steps regarding the removal of the global log rate limit. We just saw the pull request that was raised for the documentation, and we became aware again that you plan to remove this limit in the foreseeable future and recommend that everyone use the new byte-based limits, which are defined per quota on the org, space, or app level.
D
To be honest, we did not look into it in the last months, because we just weren't aware that it might be urgent. So we would like to know what your plans are and how fast you want to remove the old one. And if you have any experience you can share about possible issues when going to the new limits on the quota level, please do. So, for example, maybe just to confirm the setting on the org level:
D
It's just the upper limit which defines what is possible on the space or app level? So if you have, for example, one megabyte on the org level, then no app developer can specify more than one megabyte on the app level or in a manifest? That would be the first question. To be honest, we have not tried it out on our side; I wanted to ask you before we try it out. And the second question is about a sentence in the documentation.
E
Yes, to some of the things you said; there were a lot of questions in there. To your first question, what's our timeline for the removal of the global Diego line-based limit, which was the first one I heard: we currently have no plans in the near future to rip it out. We've marked it as deprecated, but I would expect it to stick around for several months.
E
At least. So the timeline is not dire. To the next question: I think you first asked if there are any quirks about the new quota, and then you asked some specific questions about the quota. To the specific questions about the quota:
E
Yes, the org and space quotas determine the upper bound of all the app instances' log rate limits, and if you were to try to start a new app instance that would take you over the log rate quota for your space or org, it would fail to start.
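The semantics described here (a space or org quota bounding the combined log rate of all running app instances, with a start failing when the total would exceed it) can be sketched roughly as follows. This is an illustrative sketch; the names (`appInstance`, `canStart`) and the check are assumptions for demonstration, not CAPI's actual validation code.

```go
package main

import "fmt"

// appInstance carries an instance's configured log rate limit in bytes/s.
type appInstance struct {
	name  string
	limit int64 // bytes per second
}

// canStart reports whether adding one more instance keeps the total
// log rate of all running instances within the space/org quota.
func canStart(running []appInstance, newInst appInstance, quota int64) bool {
	var total int64
	for _, a := range running {
		total += a.limit
	}
	return total+newInst.limit <= quota
}

func main() {
	running := []appInstance{{"web/0", 4096}, {"web/1", 4096}}
	quota := int64(10240) // a 10K space quota

	fmt.Println(canStart(running, appInstance{"web/2", 1024}, quota)) // true: 9216 <= 10240
	fmt.Println(canStart(running, appInstance{"web/3", 4096}, quota)) // false: 12288 > 10240
}
```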
D
Just to make it clear: is the space quota the upper limit for one single app instance, or is it for the sum of all limits of all the apps?
E
So when walking the tightrope of adding this feature, the real question was: do we assign some arbitrary log rate limit to every app, so that this feature starts working as expected right off the bat and works like the memory quota for orgs, spaces, and apps? Or do we make sure that no one's behavior is interrupted right off the bat, set every app to unlimited, and then leave it up to the operators and the developers to switch it off of unlimited? We opted for the latter path.
E
It
obviously
the
most
confusing
part
of
that
calculation
is
if
a
single
app
in
an
order
space
is
unlimited
and
the
order
space
quota
is
changed
to
something
that
is
not
unlimited.
Then
That
app
can
never
be
like
updated
again
right.
It
can't
be
restarted.
It
can't
be
scaled.
Like
things
it
can't,
nothing
can
happen
with
that
app,
so
we
decided
to
enforce
a
limit.
An
arbitrary
limit
of
like
no
no
apps
in
the
org
or
space
can
be
Unlimited
in
order
for
the
space
or
org
quota
for
law
great
to
be
set.
E
When you were saying before that you'd have to change all the apps to have a limit, a non-unlimited limit: yes, you would have to do that in order to implement a space or org level quota.
E
I believe there's a separate property in CAPI, I don't remember its name off the top of my head, that would limit the maximum log rate limit for an app, except for unlimited. It doesn't affect unlimited, but it would cap the maximum, in a similar way to the maximum memory quota for an app.
E
So the feature is very similar to the existing memory quota one. It's the unlimited part that introduces a lot of weirdness.
E
Yeah, I think there's one quirk there, where I'm not confident that the upper bound applies to unlimited. If you set the upper bound, that may not affect the ability of a developer to set the app's log rate limit to unlimited.
E
It may only apply to a finite log rate limit, if that makes sense.
E
I
think
there's
one
there's
one
at
least
one
other
Quirk
worse
worth
worth,
noting
right
now
that
we're
actually
in
the
process
of
trying
to
change
that
is,
this
is
byte
based
limiting
rather
than
log
based
limit
or
line
based
limiting
right.
There
is
some
weird
quirks
related
to
that.
Where
that
we've
only
kind
of
just
started
to
discover
one
is
that
the
way
we've
set
up
the
law
rate
limiting
is
with
the
classic
the
go
rate
limiter
right.
E
So
it's
a
it's
a
token
bucket
and
if
you
drain
the
token
bucket,
you
have
to
wait
for
enough
tokens
to
fill
up
the
bucket
for
you
to
go
and
the
it
leads
to
some
weird
Behavior
at
present.
E
If you are consistently at the edge of your limit, basically going up and down over and under the limit, it can lead to some weird behavior, because we emit a message that says "app log rate limit exceeded" every time you exceed your limit. What we noticed is that if the app keeps going over and then back under the limit, you can almost double the number of logs that the app is actually emitting, because of that "app log rate limit exceeded" message, which we're in the process of trying to fix right now; it's an annoying problem. We switched this up: the old line-based limiter would only emit an "app log rate limit exceeded" message once per second. We did not copy that mechanism, because we figured it could be confusing.
E
It
could
be
quite
confusing
if
you,
if
you
log
rate,
limit
an
app
in
the
middle
of
a
stack
trace,
for
instance,
and
you
don't
say
that
the
that
you've
like
cut
out
some
of
the
logs
there,
that
seems
that
would
seem
non-optimal.
So
we
switched
it
up
to
like
every
time.
There's
a
gap.
Every
time
you
block
some
amount
of
logs
rate
limits
some
amount
of
logs.
E
You
admit
that
message,
but
again
like
that,
can
lead
to
a
situation
that
is
doubling
up
your
your
log
output
accidentally,
the
I
think
we're
gonna
we're
right
now
we're
we're
working
towards
a
fix
for
that
and
one
of
the
things
we're
thinking
about
is
bursting
and
the
other
is
introducing
a
penalty
box
function
of
some
kind.
So
if
an
app
were
to
go
over
the
log
rate
limit,
then
we
would
actually
put
a
halt
on
them.
E
The bursting part is also important because we realized there's one other quirk here which may not be super obvious. Because it's byte-based rather than line-based, there's a possibility that if you set your log rate limit too low, no log line is ever emitted: if your log line is too many bytes long and your log rate limit is too low, then no log
E
ever gets through, even at a rate that you would expect to allow at least one log to come through every now and then; if you set it too low, nothing comes through at all. Those are some quirks and bugs that we're looking into fixing right now. Do you have any questions or thoughts about those?
C
That's interesting. At the moment, most of the time when we're dealing with noisy applications, we have the problem that the application is running in some error state and creates too many stack traces, and most of the time the default loggers log each line of the stack trace as one log line. So we had lots of log lines, and we've limited those applications.
C
So now what you're saying is that everything can be packed into a single line, but then the lines might not get through. And if the application is so unlucky that the stack trace is bigger than 64K, would this mean it probably gets separated into two or multiple log messages by Diego?
E
That should be possible. I don't think that changes too much from right now. I guess the difference is that with the line-based limiting you could, theoretically, pack your stack trace into one line and therefore guarantee that you get all of it or none of it.
E
Yeah
with
the
new
behavior
that
would
be
different
because
you
could
pack
it
all
in
one
line.
But
if
you
are
exceeding
your
quota,
it
makes
it
more
likely
that
you
would
run
into
that
byte
quota
and
drop
the
whole
line.
If
your
app
is
noisy
rather
than
getting
part
of
the
stack
Trace
one
one
positive
change
is
that
you
would
know
if
you
you
could
get
part
of
the
stack
trace
and
at
least
know
that
there
was
a
stack
Trace
emitted
rather
than
potentially
completely
missing
the
the
stack
trees.
C
Another interesting thing is that at the moment we have all the capacities and throughput of all the components measured in log lines per second.
C
Now,
when
we're
changing
everything
to
byte
paste
low
grade
limiting,
it
would
be
interesting
because
we
might
have
like
bigger,
bigger
log
messages
going
through
and
which
would
mean
that
that
the
throughput
will
will
be
lower,
so
we'll
have
to
we'll
have
to
play
around
with,
with
such
things
like
bigger
log
messages,
less
throughput,
which
would
be
okay,
because
the
the
whole
stack
won't
be
overloaded,
yeah
and
the
other
thing
that
borders
me
in
this
direction
is
that
is
there
a
way
how
to
limits.
C
Is there a way to set some upper bound on the whole throughput, so that the health of the whole stack is practically guaranteed? Because we might have one app, I don't know, having a limit of 100K, another one of 500K, and maybe some people would need more and we let them log one megabyte per second, maybe, I don't know.
C
So how can we ensure the health of the whole stack if we have a large throughput? I'm asking because we have some orgs where the customers have a big number of applications, and if you give them an org quota and it's divided among all of the apps, it will be interesting for us how we do that.
E
I think so; there's no global one. We purposely didn't implement a global one, because we wanted to encourage more capacity planning, like what happens with memory. If the operator can determine how much log throughput actually works for the system, then they can ration it out like a resource, in a similar way to memory, and apply it at the org or space level to limit their applications.
E
If,
if
you're
talking
about
you're
talking
about
customers
who
have
a
high
load
in
one
org,
they
may
not
need
multiple
organizations,
because
you
could
set
a
very
high
limit
in
their
org
and
then
set
lower
limits
in
their
spaces
is
another
option.
It's
not
just
on
the
org
level.
E
We felt that the global one was encouraging operators not to think enough about logs as a resource, and it was leading to a lot of complaints about unnecessary load on the system, or about app logs not getting through, because the global limit had been set too low or too high.
D
Go for it, definitely. On our side we maybe have the specific situation that the customer doesn't care about the resources, because we are providing the Loggregator stack, and our customers, and we have a lot of customers on one single foundation, just want to use as much as possible. So they don't have any interest in creating such limits; they want unlimited logs, for sure. But our interest is to not have to scale out the Loggregator unreasonably because customers misuse the logging feature.
D
Yes, or maybe we will never have something like that, because our sales people don't want to complicate the selling and the price list with too many things that people have to pay for; they'd say, why should I pay for logging? Logging, for me, is part of a normal platform, and I already pay for memory.
D
So
maybe
one
idea
I
had
that
we
just
just
couple
the
number
of
locked
lines
in
this
ALT
level
to
the
memory
the
customer
buys
so
the
more
memory
he
buys
or
they
buy
more
look.
They
we
configure
for
them
in
the
org,
but
when
yeah-
but
maybe
that's
not
true,
if
you
have
similar
issues
or
if
you
have
don't
have
this
issue
at
all,
because
usually
your
customer
both
own
the
foundation
and.
E
True, that's a different use case than the one I think we primarily talked about. I could still see a world where, and obviously I don't know your billing structure, tying billing from logs to memory makes sense. My first thought was establishing the amount of bytes of logs that can flow through your system optimally, and then setting some rational org limit.
E
That
would
presumably
be
quite
high
on
to
your
customers
and
then,
if
they
are
start
to
exceed
that
limit,
then
getting
conversations
with
them
to
bump
that
up
in
the
same
way
that
you
would
get
in
conversations
to
bump
up
the
memory
but
I.
That's
that's
just
my
first
thought
about
how
that
would
work.
E
It does sound like there's maybe more to talk about here, and we're still making changes to this feature, as we said before. So I think we're open to more discussion about this, and to more PRs as well, definitely, if we can help you all out in some way.
D
It's good for us to hear that you don't want to remove the old limits in the next one or two months and that you'll give us more time. We will definitely look into this one and how we can use it, share whatever experience we have, and consider the things you mentioned, for example these quirks and so on. Yeah, thanks. That's very helpful.
C
I was just going to say, no, I just wanted to ask: the documentation mentions that the new byte-based application log rate limit is per application. What happens if an application has multiple instances? Is the limit shared among all the instances, or is it the same limit for every instance?
E
Any new app instance is going to be assigned the same log rate limit that you assigned to your application. And if that goes over, so if someone's scaling their app to some crazy number, that's obviously going to go over the quota that's been applied to their space or organization, in the same way that the memory quota would work.
A
Okay,
thank
you
for
a
great
discussion.
Log
rate
limiting
those
are
the
only
two
items
on
the
agenda.
Was
there
anything
else
anyone
wanted
to
bring
up.