From YouTube: App Runtime Platform Working Group [May 4, 2022]
A
Hello everyone, happy May! Welcome to our monthly working group forum. We have three things on the agenda today. First, Stephan is going to talk about his new proposal, the intent to enable mTLS for syslog drains. Then Carson is going to discuss a new RFC about approver requirements, and then Jovan is going to talk about challenges from the new Loggregator architectural changes.
B
I guess you can see my screen. Basically, we already announced this briefly some months ago: we are looking into enabling client certificate support for syslog drains, so that we have mTLS and not only TLS, both for the syslog protocol and for the HTTPS protocol.
B
We created an issue in the Loggregator agent release, mainly because most of the changes have to be done in components of that release; another part will have to be done in the capi-release, in the Cloud Controller, but I think we created it there because this is the main part. Maybe a question for later is how best to involve the people from the capi-release. The main content we put into a Google document, so old style, like we have done before.
B
We have discussions in the Google doc, and yeah, I won't go into all the details now, just a short overview of the components which are involved. Basically there are three parts. The first one is the Cloud Controller on top. The second one is the syslog binding cache, which is located on the scheduler, and the third one is the syslog agents, which are located on the VMs that are writing the logs, like the Diego cells mainly, but also the router and so on.
B
We worked it out in a way that we just make use of the existing user-provided service possibility to add credentials as JSON, and simply add the client certificate and private key to the JSON field stored in the existing database. The change which has to be done on the Cloud Controller side is that we have to enhance the internal API call to get the syslog drains, so that it also sends out the client certificate and private key.
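For illustration only, here is a minimal sketch of what such a credentials payload could look like, assuming the proposal keeps the existing user-provided-service credentials mechanism. The struct and field names (url, cert, key) are placeholders for this sketch, not taken from the proposal document:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DrainCredentials sketches the extra fields a syslog drain binding could
// carry if client certificate material were stored in the user-provided
// service's credentials JSON. Field names are illustrative only.
type DrainCredentials struct {
	URL  string `json:"url"`  // e.g. syslog-tls://drain.example.com:6514
	Cert string `json:"cert"` // PEM-encoded client certificate
	Key  string `json:"key"`  // PEM-encoded private key
}

func main() {
	creds := DrainCredentials{
		URL:  "syslog-tls://drain.example.com:6514",
		Cert: "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
		Key:  "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
	}
	out, _ := json.MarshalIndent(creds, "", "  ")
	fmt.Println(string(out))
}
```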
B
Maybe I won't go into all the details here, but this is mainly about the scalability of the feature, so that we don't run the risk that it breaks once customers adopt it, which would not be so nice. In the syslog binding cache we of course have to store the credentials, and here we want to do it in a smart way, so that, as I said, they are cached in a better way. And finally, the call from the syslog agent to the binding cache also has to retrieve the credentials.
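As a rough illustration of that last step, here is a sketch of how an agent-side call to a binding cache might retrieve such extra credential fields. The endpoint path, port, and field names are assumptions for the sake of the sketch, not the actual binding-cache API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Binding mirrors a hypothetical binding-cache entry extended with client
// certificate material. Field names are placeholders for illustration.
type Binding struct {
	AppID string `json:"app_id"`
	URL   string `json:"url"`
	Cert  string `json:"cert,omitempty"`
	Key   string `json:"key,omitempty"`
}

// fetchBindings asks a binding cache for the current drain bindings.
// In a real deployment this call itself would be mTLS-secured; plain HTTP
// is used here only to keep the sketch short.
func fetchBindings(cacheAddr string) ([]Binding, error) {
	resp, err := http.Get(cacheAddr + "/bindings")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var bindings []Binding
	if err := json.NewDecoder(resp.Body).Decode(&bindings); err != nil {
		return nil, err
	}
	return bindings, nil
}

func main() {
	bindings, err := fetchBindings("http://localhost:9000")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	for _, b := range bindings {
		fmt.Printf("app %s drains to %s (client cert present: %v)\n",
			b.AppID, b.URL, b.Cert != "")
	}
}
```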
B
Configuring the client certificates in the syslog agent was quite straightforward; it worked quite well using the existing Go client libraries. We also looked into the aggregate drains, and yeah, that also works. We don't have a concrete implementation yet where we combine both aggregate drains and user-provided service log drains, so that we reuse as much as possible here in the syslog agent, but I think we now wait for feedback on this proposal.
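A minimal sketch of the kind of client-certificate configuration mentioned above, using Go's standard crypto/tls package. The file names and drain address are placeholders, and this illustrates the general approach rather than the actual syslog agent code:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

// newMTLSConfig builds a TLS config that presents a client certificate to the
// drain, as an agent would need to do for mutual TLS. Paths are placeholders.
func newMTLSConfig(certFile, keyFile, caFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &tls.Config{
		Certificates: []tls.Certificate{cert}, // client cert presented to the drain
		RootCAs:      pool,                    // CA used to verify the drain's server cert
		MinVersion:   tls.VersionTLS12,
	}, nil
}

func main() {
	cfg, err := newMTLSConfig("client.crt", "client.key", "ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := tls.Dial("tcp", "drain.example.com:6514", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("connected with mutual TLS")
}
```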
B
Yeah, so in this working group, of course, we mainly have people concerned with these two components. So maybe one follow-up question on my side: what do you think would be the best way to make the people from the Cloud Controller aware of this?
C
I guess I have a meta question, and some of this is just me still getting used to the CFF process, but I'm wondering if a project like this would be appropriate to have an RFC for. I know you can have working-group-level RFCs, and we have had some semi-technical RFCs before, like the move to Jammy as the default stack for cf-deployment, so that could potentially be a way to put more formality, or more process, around this.
C
I'm not necessarily endorsing that, but it is potentially an option. I'd say the other thing you can do, if you haven't already, is to open a separate issue for this item in the Cloud Controller repo, linking back to the Loggregator issue you opened, and then I can help boost that to get some feedback on it.
B
That's also something that came to our minds. We also spoke with Stephan Berger, who is also in the call, and I think we already discussed all of that with his team, and I think he himself is aware of it. But I think he said it would be best to create an issue in the Cloud Controller repo, and I also looked at how the RFCs are done.
B
Maybe if you think it's a good way to just create an RFC, we can also do that.
C
Yeah, I don't know if it's necessary in this case. At least, I'm still getting a handle on when it makes sense to create an RFC for a project and when it doesn't, so I'm not going to wholly endorse it or say that it's the proper way to do things. But it is a potential tool: if we're having trouble coordinating this process across multiple teams, and we want to, you know, kind of force buy-in, then an RFC could be a tool for doing that.
B
Do we have any pointer to what such RFCs look like, for example? Because I'm not so familiar with this process so far.
A
Yeah, in the community repo there's a folder with the RFCs; the one that Greg mentioned is the Jammy one. The Jammy one is a good example of a technical RFC: we needed to get buy-in on Jammy, and everyone had to do it because we need to stay up to date. What you're proposing is, to me, more of a feature, and so it feels less like an RFC to me.
A
So, Stephan, it sounds like you're asking for feedback.
F
Can everyone see that? Okay, I saw some nods. A few weeks back Amelia and I identified an area of improvement within the working groups. The main pain point was that it seemed to be really difficult to add folks as approvers within working groups, the current requirements being pretty stringent and narrow: a reviewer of the codebase for at least three months, primary reviewer for substantial PRs (we don't know what "substantial" means, it's not defined), reviewed at least 30 PRs, and nominated by a working group lead.
F
We have been working with the TOC to change those requirements and make the acceptance criteria broader, in order to bring in more folks who are, you know, contributing in some substantial way to working groups but maybe aren't specifically focused on PRs; some areas of ARP and other working groups maybe don't have enough PRs to easily meet that number. So we've expanded the range of things that you can review or do to get nominated, to include submitting PRs, submitting issues, reviewing issues, and contributing to technical discussions. The number has also shrunk from 30 to 20.
F
We think this is going to make it a lot easier to bring in people who care about the working groups now, and hopefully create a flow that eventually leads to everyone who deserves to be an approver becoming an approver. So we're pretty excited about this.
B
Maybe one short question: how are you defining being a reviewer?
F
Sure. The intention there is for people who are interested in becoming a reviewer to join a working group meeting and request to be added to the reviewer files for the area they are interested in joining. Actually, I don't know if we have the reviewer files ready to go yet, because we need the teams in order to set up a rotating, round-robin reviewer style.
F
Adding you as a reviewer to the working group's reviewer sub-team gives you no write permissions, but means that you will be pinged on any given issue or PR that is created. Thanks.
A
Yeah, currently this is being done manually: people reached out to me and asked to be reviewers, for example Rebecca from VMware in the logging area. You know, she's not an approver, but she's on the team and does the work, and so I assign issues and PRs to her just like I do across the board. But then, of course, she has to pair with a real approver to get the merge; she doesn't have access to that. And so that's the very manual way that it's working right now.
E
At least a few of the working groups have split up access amongst their projects inside the working group. How would that work with this, if we had different approvers with write access to Diego versus networking versus logging?
G
Will there be any minimum number of approvals required for a pull request to get merged, or how are we going to do this? Because at the moment you have the VMware owners, but for example for us, for Loggregator, we've opened pull requests or suggested changes many times.
G
What should be done in order to verify that our pull request is good? Do we need a second pair of eyes to check a pull request, or, as you said, do we get a reviewer list and someone is automatically picked from that list and told: here you have five pull requests to review?
F
I think I heard two questions in there. One was: what does it take for a PR to get approved and then merged? And the other was: how do we select who to approve in that process? Is that right? Okay, so the first one, how does a PR get approved, is maybe not in the scope of this RFC, and if you want to table that for a second, we could talk about it in general.
F
I'm sure Amelia knows that process better than I do. And then the second one, how do you select who to request as an approver: that, I think, is where the reviewer flow comes in. Currently it's manual, someone will be selected for you to review the PRs, but hopefully soon it will be automated, so that you automatically get a reviewer assigned.
F
If no one is looking, if the reviewer is not actually reviewing your PR or issue, maybe it's time to ping in the Slack channel or add someone else who you know is an approver and whose eyes you want on it. That is at least the current flow as I understand it, but correct me if I'm wrong.
A
I'll call it on time, since we only have a couple of minutes, or eight minutes, left. Thanks so much, Carson. I'm excited to get our first new approver.
G
Yeah, sure, thanks. I wanted to discuss the new situation with the new Loggregator architecture, where the Dopplers and the Log Cache processes are being split into a separate instance group. We're almost done with our load test, and the initial results are pretty good.
G
It was a good decision to do this change, and we'll publish the results, I guess, tomorrow or the day after tomorrow. But one thing that bothers me personally is that for two consecutive releases we haven't found a way to do the upgrade properly.
G
There is some downtime for the CF lifecycle operations, because the Cloud Controller asks Log Cache for container metrics, and during the upgrade the route it uses is configured manually as a config parameter, and BOSH DNS is not yet up to date with the Log Cache instances and propagating the domain to them. So there were Slack discussions where we saw different approaches, like changing the deployment order or trying to play around with the domain.
G
Now, at SAP we were discussing what would happen if we used a Cloud Controller parameter which instructs the Cloud Controller to use the Traffic Controller instead of Log Cache to get the container metrics. I know it's not ideal, but I think we need a unified solution for what to do there in cf-deployment.
G
That's one thing, and the other thing we found concerns the way the aggregate drains are being built and how the syslog agents are now sending data to Log Cache.
G
The connections are renewed every minute, and since BOSH DNS is not such a good load balancer, it simply selects one node, and that Log Cache node gets some extra load to handle, and we see drops, which is, I guess, a technical limitation of the whole thing.
G
From time to time it's really hard to define how we scale the Log Caches properly so that they can handle this load. We found out some of their scaling limitations: we see that they're quite happily accepting load, but at the end, when the time comes to write the log envelopes to Log Cache, it fails. So it's not that good, and I guess we have to accept that; maybe we should document it somewhere in the official documentation as well.
A
On the upgrades: I have not been a part of this at all, I've just seen it on the periphery. Is someone working on that on the VMware side, either from CAPI or from the logging team?
A
Okay, I'll spend some time looking at it today, I have time, so that way at least we can have a VMware point of contact for this issue, to make sure we're all looking at it.
G
Yes, yes, the problem comes because in the canonical cf-deployment the API nodes are being updated before the Log Cache nodes are spun up, and there's a configuration change in the API nodes so that they ask for a specific domain which, at that point in time, is still not available, or there's no one to serve those requests. And then the cf CLI requests fail with timeouts.
D
Right, my last recollection of the issue is that, for the most part, if metrics are unable to be retrieved, it should still be possible to deploy apps. But I don't remember the specifics of how, and yeah.
B
Basically, just to add to the discussion: it's still possible to deploy apps, but the push takes a minute, sometimes several minutes, because it always waits for the stats metrics from the container. And if it takes too long, then on our side we also get timeouts in our tests, and then we have some alerts, and the people who are looking at platform alerts will think there's an outage; also, customers will have to wait very long for their cf pushes, and maybe they have timeouts on their side as well.
B
So I think this is the main issue: the cf push takes too long because it waits for the stats endpoint, and even if it times out, the API side will still get a 200 response, I learned, and the Cloud Controller just ignores that there are no metrics available.
A
Okay, thank you for that context. If that's already all written down somewhere, like in a Slack thread, as it sounds like it is, do you mind sending that to me in Slack, Stephan? Just because I haven't seen it. Maybe we can start an issue or something to track whether we want to try to fix this, or at least mitigate it.
A
Thank you so much, everyone. If there's nothing else, I think we'll call it a meeting. Of course, I'm always in Slack, always in the working group chat, if you need to reach out. It's good seeing you all, have a great May, and I'll see you next month.