From YouTube: App Runtime Platform Working Group [Mar 1, 2023]
A: Let's see, first item: oh look, it's mine. I was talking to the TOC yesterday, doing my update with them that I do every eight weeks or whatever, and we were talking about how Go 1.20 just came out. I showed them how, in a lot of our releases, we document which version of Go we're on in a standardized way, and they said: oh, actually, they liked that. They're thinking: maybe we can roll this out across all BOSH releases. And then I realized...
A: Bringing it in line with a standard: getting it auto-bumped in CI, so it's not hard-coded to Go 1.18, which it is right now; documenting the version in a docs/go-version file; and then including it in the release notes when a release happens. Obviously.
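
A minimal sketch of what such a CI guard could look like in Go, assuming the docs/go-version file mentioned above; the file path and the check itself are illustrative, not the working group's actual pipeline:

```go
// Fail CI when the documented Go version drifts from the toolchain
// actually building the release. Hypothetical sketch only.
package main

import (
	"fmt"
	"os"
	"runtime"
	"strings"
)

func main() {
	raw, err := os.ReadFile("docs/go-version")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read docs/go-version:", err)
		os.Exit(1)
	}
	documented := strings.TrimSpace(string(raw))
	actual := strings.TrimPrefix(runtime.Version(), "go") // e.g. "1.20"

	if documented != actual {
		fmt.Fprintf(os.Stderr, "docs/go-version says %s but CI builds with %s\n", documented, actual)
		os.Exit(1)
	}
	fmt.Println("documented Go version matches toolchain:", actual)
}
```

An auto-bump job could instead rewrite the file and open a PR, and the release pipeline could read the same file into the release notes.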
B: We will definitely do it before cutting the first release, so right now there are zero releases. Is it still showing up in some of your scanners, or how did it get your attention?
A: Just seeing that this was the R1. Now I do realize it's not, you know, formally released; it's still in beta, or I don't know, whatever comes before beta.
B: Yeah, actually, the discussion turned out to be very short, I guess, with your comment, and that was the most important part. Okay, so maybe just some background. We saw a lot of 502 complaints recently. I know we've talked about 502s in general over time, again and again, but to me it felt like just in the last two weeks we had tons of tickets, just way higher.
B: Also, one more observation: in our case the people were mostly using autoscaling, so the App Autoscaler, and they just had a lot of events, so apps were regularly scaling up and down. Additionally, it seems like they just have very long-running processes or requests, sorry, and then those are just interrupted with 502s. Some of this I only learned after putting it on the agenda, and we could mitigate it by having a longer graceful shutdown interval.
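
A minimal sketch of the graceful-shutdown idea in Go, assuming a plain net/http app; the point is that the shutdown window has to be longer than the longest in-flight request, otherwise connections are cut mid-request and surface as 502s. The 30-second figure and the signal handling are illustrative, not the platform's actual configuration:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for the platform's stop signal, e.g. SIGTERM during a scale-down event.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Give in-flight requests up to the graceful-shutdown interval to finish.
	// If this window is shorter than the longest request, clients see errors.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Println("shutdown deadline exceeded, connections dropped:", err)
	}
}
```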
A: Yeah, I haven't heard about other people having this issue, but of course, you know, we don't run our own platform, so it can take a long time for those kinds of complaints to bubble up to us. I'm curious: you're saying 502s, as in 502s back to the end user, right? Like the gorouter will attempt to retry, it gets a 502 every time, and then it returns a 502?
B: Yeah, it was a 502 to the end user in this case, I think, yeah. So we are mixing up two things now, I guess. This interruption is the new one; I think it's pretty clear what's happening there. It can't be much more than a 502, maybe a retry, but I guess mostly for these long-running things it's POSTs or something that can't be retried.
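
That matches HTTP retry semantics: a router can safely replay a GET against another instance, but replaying a POST risks duplicating a side effect, so for long-running POSTs the 502 goes straight to the end user. A hedged sketch of that rule; gorouter's real retry policy lives in its own codebase and is not reproduced here:

```go
package main

import "fmt"

// retryable reports whether a failed request could be replayed against
// another backend without risking a duplicate side effect. Illustrative only.
func retryable(method string) bool {
	switch method {
	case "GET", "HEAD", "OPTIONS":
		return true // safe methods per RFC 9110
	default:
		return false // a long-running POST may already have mutated state
	}
}

func main() {
	fmt.Println(retryable("GET"), retryable("POST")) // true false
}
```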
B: Okay, okay. And the other thing we had in mind, like Max mentioned on the issue, is: maybe this TCP error is not given back.
B: No, sorry, I lost the background there; let's disregard that part. Okay.
A: Well, if you turn up that property on yours and it ends up fixing it, we can come back and decide if we think we should change it for everyone and bump it up.
D: Yeah, basically, there's something I would like to ask, if you have some kind of feedback about it. We will try to do some sort of overcommitting of our runtime memory, but so far I haven't heard of anyone achieving such a goal on Diego. Previously, working with a former container scheduler, we found a straightforward configuration for memory overcommitting, but it seems like there isn't such a configuration in Diego. So I was wondering mainly whether you have tried a similar approach and had some success.
E: Yeah, it's there. When I was managing platforms, I think we shied away from that just because of problems during evacuation and everything: the scheduler thinking that there was more memory available than there was, and then apps get killed by the OOM killer and things crash for no reason. It was just a little too scary for us.
D: Yeah, exactly. We have a really, really high deviation between requested quota memory and the memory that is really physically used. So yeah, the deviation is really large, and we charge our customers based on used memory, but we provision and pay for quota memory, and we are trying to close this gap somehow. We would like to eventually try overcommitting, but so far we're searching for what kind of configurations we'd need to play around with.
D
If
there
are
such
configurations,
or
maybe
we
need
to
kind
of
do
some
kind
of
a
proof
of
concept
to
see
if
it
required
some
extra
coding
or
I,
don't
know
what
exactly.
E: It should be. I think there are configs there that will let you do that, and then it just tricks the auctioneer into thinking that there's more memory available than there is. We ended up solving the problem simply by billing based off of their quota memory versus what they actually used, and making it apparent how little they were using and how much they could save.
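
The mechanism described, advertising more memory to the scheduler than the cell physically has, reduces to a placement check with an overcommit factor. A sketch under that assumption; the type and field names are hypothetical, not Diego's actual BOSH properties:

```go
package main

import "fmt"

// cell models a Diego-style cell for placement purposes; names are
// invented for this sketch.
type cell struct {
	physicalMB  int     // memory the machine really has
	allocatedMB int     // sum of quota memory of instances placed here
	overcommit  float64 // 1.0 = no overcommit, 1.5 = advertise 50% more
}

// advertisedMB is what the auctioneer would be told the cell can hold.
func (c cell) advertisedMB() int {
	return int(float64(c.physicalMB) * c.overcommit)
}

// fits checks whether an instance with the given quota can be placed.
// With overcommit > 1 this can succeed even when total quotas exceed
// physical memory, which is exactly the evacuation/OOM risk above.
func (c cell) fits(quotaMB int) bool {
	return c.allocatedMB+quotaMB <= c.advertisedMB()
}

func main() {
	c := cell{physicalMB: 32768, allocatedMB: 30000, overcommit: 1.5}
	fmt.Println(c.fits(8192)) // true: fits the advertised 49152 MB, not the physical 32768
}
```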
D: I think that's good; I appreciate that. So, first of all, we are now trying to achieve an even distribution among the cells, because we are currently using this bin-pack first-fit approach, which we kind of introduced so that we are able to host really large applications, something like 32 gigabytes, on cells with limited resources. But we see that having an even distribution is critical for the overcommitment, but yeah. So, let's see if we find out something useful.
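
The difference between first-fit bin packing and an even spread comes down to the cell-selection rule; a sketch with invented data, not the auctioneer's actual scoring:

```go
package main

import "fmt"

type cell struct {
	name   string
	freeMB int
}

// firstFit (bin pack): take the first cell with room, which concentrates
// load and keeps large contiguous holes free for 32 GB-class apps.
func firstFit(cells []cell, needMB int) *cell {
	for i := range cells {
		if cells[i].freeMB >= needMB {
			return &cells[i]
		}
	}
	return nil
}

// leastLoaded (even spread): take the cell with the most free memory,
// which levels utilization across cells, helpful before overcommitting.
func leastLoaded(cells []cell, needMB int) *cell {
	var best *cell
	for i := range cells {
		if cells[i].freeMB >= needMB && (best == nil || cells[i].freeMB > best.freeMB) {
			best = &cells[i]
		}
	}
	return best
}

func main() {
	cells := []cell{{"cell-a", 4096}, {"cell-b", 40960}}
	fmt.Println(firstFit(cells, 2048).name)    // cell-a: packs the fuller cell
	fmt.Println(leastLoaded(cells, 2048).name) // cell-b: spreads the load
}
```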
C: So one thing which is bugging me is documentation issues, but it's not really related to this working group, because the documentation projects are somehow distributed between the working groups. The contributing page is more of a development guide; that is, I think, in the interfaces working group. And it seems that nobody merges the pull requests, and after they are merged...
C: ...it always takes time, also a very long time, because of some issues in the process, before it appears in the official documentation. And here I have the impression that this process maybe somehow belongs to our working group, because, I don't know what the project is called, the overall project which collects all the pages and builds the documentation, I think it somehow belongs to us; I'm not sure what it's called, yeah.
C: Yeah, exactly, yeah. I'm not sure how I should address these issues. I think also the Slack channel for docs is not maintained, or not watched anymore, yeah. So this is the thing: we wrote this whole feature with mTLS, and we have a big customer which uses the feature, and we pointed them to the official documentation. And also we had: let's do a fix now for the page, for which a pull request has now been open for, I don't know, three weeks already, again, yeah.
A: Yeah, I've been, yes, the docs team has been spread thin. Currently it's only the VMware docs team, with massive turnover, so the docs team does not include anyone who worked here during Pivotal days. Greg and I sat down and had a meeting with them last week to explain to them what the CFF is and what their responsibilities are. And honestly, they just had no idea: they didn't know there was an open source project they should be watching, they didn't...
A
They
didn't
understand
why
they
were
getting
PR's
from
people
outside
of
VMware
and
that
made
them
nervous
right.
So
I'm
gonna
say,
there's
a
learning
curve
and
please
be
patient,
but
if
you're
running
into
any
issues
like
that
feel
free
to
DM
them
to
me,
because
I've
been
chatting
with
them
about
okay,
what
their
role
is
and
what
their
responsibility
is
and
they're
just
getting
starting
to
get
up
to.
C
I
never
assumed
that
I
also
raise
the
issue
to
our
management,
but
the
documentation
is
still
VMware
only
right,
but
you
may
expect
also
that
there
was
not
a
Huawei
in
the
management
because
I
said
oh
yeah.
We
all
also
have
not
so
many
resources
and
we
will
take
it
with
us,
but
I
don't
expect
that
they
come
back.