From YouTube: App Runtime Platform Working Group [Dec 7, 2022]
A: If... oh, I'm gonna be sharing in the dark, so if someone could take notes on who's here, I'll get started.
A: Let's see, I have a status update. That is: branch protection rule enforcement.
Okay, so, you know, with the spicy PR that got rid of all of our unstandardized teams: we were not given any warning for it, and this drastically upset a lot of people, at least on our side. I'm not sure how it was on your side.
A: You know, all of a sudden things stopped working, so I'm trying to make sure that this branch protection rule, which will force you to make PRs to all the repos, will at least get some warning. The current status is that the library they're using for automation doesn't have all the features they need to allow bots to bypass the PR, so they're working on it, and they also promised that they would warn before they merge this time. So I don't expect it to land just yet.
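For context, GitHub's branch protection REST API ("Update branch protection", PUT /repos/{owner}/{repo}/branches/{branch}/protection) does support letting specific GitHub Apps bypass required pull requests. A minimal sketch of such a request body; the bot name is hypothetical, and whether the automation library in question exposes this field is exactly the gap being described here:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "bypass_pull_request_allowances": {
      "users": [],
      "teams": [],
      "apps": ["example-release-train-bot"]
    }
  },
  "required_status_checks": null,
  "enforce_admins": false,
  "restrictions": null
}
```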
A: Let's see, I'm just taking a little look at what else is on the agenda. Maybe we can do some of the faster ones first.
A: Well, it sounds like my next one overlaps with the comment, so maybe we'll talk about that.
C: Yeah, yeah, I was basically wondering whether there is some sort of reviewer process defined which one can follow, because, yeah, I would like to support a colleague of ours who is in this [unclear] area, and they want a pull request to be reviewed and merged. I used the opportunity to assign myself as a reviewer; thanks, Sami, for your support. But it's not clear to me what the process of reviewing a PR is, big picture.
A: So it's different per where it is, I would say. Obviously, review the code to the best of your ability, download it and try it, make sure it does what you think is correct. In a few cases, not very many, we do have automatic PR pipelines that will run, and so you can wait for those to go green.
A: In the cases where those don't exist, I would pull the code and run, like, the unit tests myself, for whatever it is. Personally, I don't usually run CATs, because it takes forever and it's hard to set up; I usually let the official pipeline run CATs and then, if it goes red, we'll just pull it back out again. Okay, does that answer your question?
A: Well, I thought that's what you were going to talk about, but it turns out not: the automatic assigning of reviewers and approvers. I will tell you, when I signed up for being tech lead, managing GitHub repos is not what I thought I would be doing, but it turns out to be a lot of the job. That's not super fun.
A: I've not come up with a good way to automatically assign reviewers and approvers, or even to track all of the PRs that come in. Currently we have projects, but projects can only track 25 repos at a time, and, as you know, we have many, many repos.
E: For automatic assignments, one issue I see is that we have, like, a broad range of topics in your working group, and at least from the SAP side we are rather specific to some topics. So, for example, Dominic and myself, we know Gorouter, HAProxy, and these very networking-related components very well; for example, Diego is rather plumbing for us. So an automation would need a finer granularity, from my point of view, at least for the SAP side of the house.
E: So I know that's not all, but, as we see in this call, it's at least an amount, yeah. But I'd also like to move forward on that. So, for example, for diego-release I opened a bug already, you know, almost two months ago, without reaction, about Envoy being an unsupported version, which I consider, like, critical. So I'm not sure how to get attention for those, and also we are not very familiar with the code, so for us it would also be something new. But yeah, that's just my two cents on this.
A: Yeah, thank you for that context. I think there could be a way to do it. There's... I know there's, like, files you can set in GitHub, and then we could do it on a per-repo basis. And so it could be, like, you know, you would just put your name in on the ones that you feel like you could review, right, so that you're not having to review every single repo even if you don't have any context in it.
A: But I think it would take some amount of automation to roll that out: to figure out who goes where, who belongs to what repo, and to roll it out across all of our 100-something repos.
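The "files you can set in GitHub" are presumably CODEOWNERS files, which auto-request reviews from the listed owners for matching paths, on a per-repo basis. A minimal sketch, with hypothetical team and path names:

```
# .github/CODEOWNERS (one per repo)
# GitHub automatically requests a review from the matching owners
# whenever a PR touches these paths.

# Default owners for everything in the repo:
*                       @cloudfoundry/example-arp-approvers

# Networking-heavy areas reviewed by the routing specialists:
/src/route-registry/    @cloudfoundry/example-networking-reviewers
```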
A: [unclear]. I think the gist of it, as I understand it, Dominic, is that what you're asking for is, like: when things fail, you want a better failover, right? So if things start failing in one AZ, it will try a different AZ, because maybe the whole AZ is down.
F: So my next idea was even simpler. Basically, instead of trying to make the load-balancing algorithms smart, whenever we get a new endpoint we could just look at the metadata of the AZ and then put it in, like, interleaved with the other AZs, so that you have a sorted list in the pool. So whenever you start your initial request, it could be: okay, I start with my local AZ; the Gorouter knows its own local AZ.
F: The first thing it gets might be a different AZ, but it doesn't have any more endpoints, so it gets the second registration: oh, that's my AZ, so I put this in front. So, in the end, when you have, like, an app that has, I don't know, maybe five endpoints or whatever, if the Gorouter is in zone A you get A, B, C, A, B, C, A, B, C and so on. You have, like, a well-sorted list.
F: So whenever you start running through that list, you start at the local AZ. And then, if that fails, you just retry; you don't need any more magic. You just pick the next one, which will automatically be in the other AZ. So if zone A fails, you automatically try next in zone B. You don't have to do, I don't know, a hindsight thing and check what went wrong last time and wherever, so you don't need more metadata going into the retry algorithm.
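What F describes could look roughly like this in Go (a standalone sketch, not actual Gorouter code; the Endpoint type and the example addresses are made up for illustration):

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for a route's backend endpoint.
type Endpoint struct {
	Addr string
	AZ   string
}

// interleaveByAZ groups endpoints by AZ (local AZ first) and then takes
// one endpoint from each group in turn, so that walking the list
// sequentially alternates zones: a, b, c, a, b, c, ...
func interleaveByAZ(endpoints []Endpoint, localAZ string) []Endpoint {
	groups := map[string][]Endpoint{}
	order := []string{}
	for _, e := range endpoints {
		if _, seen := groups[e.AZ]; !seen {
			if e.AZ == localAZ {
				order = append([]string{e.AZ}, order...) // local AZ goes first
			} else {
				order = append(order, e.AZ)
			}
		}
		groups[e.AZ] = append(groups[e.AZ], e)
	}

	sorted := make([]Endpoint, 0, len(endpoints))
	for len(sorted) < len(endpoints) {
		for _, az := range order {
			if len(groups[az]) > 0 {
				sorted = append(sorted, groups[az][0])
				groups[az] = groups[az][1:]
			}
		}
	}
	return sorted
}

func main() {
	pool := []Endpoint{
		{"10.0.1.1:8080", "b"}, {"10.0.0.1:8080", "a"}, {"10.0.2.1:8080", "c"},
		{"10.0.0.2:8080", "a"}, {"10.0.1.2:8080", "b"},
	}
	for _, e := range interleaveByAZ(pool, "a") {
		fmt.Println(e.AZ, e.Addr) // prints a, b, c, a, b: retries naturally cross AZs
	}
}
```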
A: It's because of sticky sessions; someone's using sticky sessions. But, like, customers really care, and I'm worried about getting into the state where an app dev says: why is one of my apps getting a lot of traffic and, you know, high CPU, and the others aren't? Because the app dev, right, has no insight into what AZ their app is deployed in.
F: I can easily imagine this could also be rooted in the same problem, being that you have no control over when exactly you will get a register-endpoint message, and from whom, right? So the order could be anything. So if you're unlucky, it could be that, like, the first, I don't know, five endpoints in your list are just in one AZ, and you always try that one first, because you never reach the second zone, right?
F: So if, like, one and two fail but three works, you still end up in the same zone, because there's no, like, sorting in place of those endpoints. So it could be related, I don't know. We have similar kinds of issues, with customers asking: why is this one endpoint, like, overloaded, while the second one gets, like, almost nothing? And it's even more specific:
F: They want to be able to actually select, per app, which kind of load-balancing algorithm they want to have. Like: for this app I want least-conn; for this one I want round-robin, right? So it's even more, like, fine-grained than that.
A: I had some questions, since I don't run a Cloud Foundry, just trying to get a grasp on, like, how would this look from the user's point of view, right? So, like, how do app devs know which app is in which AZ? Like, can you look at where your app is deployed?
A: Right, and then, like, an app would have to rebalance, and so I'm imagining a situation where, like, someone's cf pushing and being like: oh, I hope this time when I scale up it gets into the different AZ that I wanted to get into. Hopefully.
A: Like, Diego does try to balance you across your AZs, but you can imagine, if certain Diego cells are full, maybe it doesn't put you there. But are people actually cf pushing, or is this all done in pipelines, so it would all have to be, like, done automatically?
F: I guess that workflow ends up also doing cf push, so it also... I wouldn't care too much, since the users don't have any means to change, or give a hint about, where to put this instance. They only care about latency, right, and availability: it should be up and it should be fast. So that's their point.
F: It has, like, 10 instances or whatever, even more sometimes, and if you are unlucky with those 10 instances and, like, you have five of them in one zone, all of them at the front of the list, basically you never get to see those other instances that might be running fine, because you fail early, right? You have these three retries.
A: Interesting. How many AZs do you deploy across right now?
F: I think, once we have it, we can run a couple of experiments. We don't have to, like, pick one first and then just do that. I can, yeah, I can imagine, like, having two or three different kinds of implementations to try different things leveraging this metadata, and then we can, like, evaluate which one works the best, mm-hmm, and has the least, like, ramifications if things go south.
A: Where does this fall in your priorities right now? Are you focused on this, or are you focused on other things and this is kind of in the background?
A: I know we have five minutes to the end; I can go over, wow. Usually these working group meetings are real fast, and we got a very exciting one this time. Plowman, let's talk about runtime evacuation. If...
G: Yeah, maybe it's too short to discuss it here, but maybe we can also then do the discussion only with Benjamin later. So, we are quite far now with this feature. I spoke about it at the beginning of this year, and now, finally, we are at the state where I think most of the feature is implemented. We are still discussing how to roll it out best, and about the acceptance tests, and then just yesterday Felix, who is the main developer, asked whether we might discuss the next steps in this working group meeting today.
K: So, yeah, I think, if I may start: the main points for this discussion are about two things. First is the PR itself, and the other one is the acceptance tests we wanted to work on.
K: So this PR is almost finished, I would say, also from the comments that I got from Benjamin, if I understood them correctly. The last thing that is missing is to make changes so that not two upgrades or updates are needed for introducing this feature, but only one. And the question is how we should do this, because if we introduce it, we will have flapping between the old endpoint, which is polling syslog bindings, and the new endpoint, which will introduce the client certificates for mTLS, yeah.
K: This is, yeah, I think a rather technical question; I don't know if this is to be solved here. And the other one is: how should we continue with the acceptance tests? Since I think both parties here found out that it's more difficult than we initially thought to adapt them for mTLS.
J: I'm there, sorry, I was thinking, and then I was, like, trying to read the message that the meeting is going to end. Maybe we should switch Zooms; that's fair. I'm not sure which to do; do you have any thoughts or preferences? Is the meeting over in...
J: I guess my first thought is that I think there should be ways to avoid flapping, by, like, detecting if we've seen the new endpoint and then, like, not falling back if we have at least once, right? Because I think the main behavior we're worried about is, like, the new endpoint never existing until a deploy, versus, like, whole temporary outages. I think, if we were worried about temporary outages: if it doesn't exist at all, falling back seems safe; but if we've seen it, it means that it's probably going to exist indefinitely, and then we should probably assume it's an outage and not a non-existence of the new endpoint.
K: So you say, if the new endpoint is there once, for example on all Cloud Controllers, and suddenly there's an outage on the Cloud Controllers, we should not assume that we are just in an update case or whatever.
J: And so I guess that might still cause, for example, an individual syslog agent or an individual binding cache to say: oh, while I'm starting up for the first time, I might fall back once or twice if there's also an outage going on of some kind. But I would think that otherwise it should perform fairly normally.
K: Yeah, the idea is that, during an update, or during the rollout of this thing, we will have, like, half of the Cloud Controllers updated already and half of them still only equipped with the old endpoint, and, of course, at some point, if the scheduler calls the API, it will query old and new Cloud Controllers. So this is why it will be flapping anyway during an update.
J: Ah, sure, true. Never mind, ignore everything I said.
K: We could just make the syslog agent query the new endpoint first and, if that one is not existent, we can then query the old endpoint. On the syslog binding cache it would be the same as it is between a Cloud Controller and the syslog binding cache. So that would reduce the update steps from two to one. But still, of course, if there is an outage ongoing, it will query both endpoints, and only after that will it say: well, there is an issue ongoing.
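A rough sketch of that query order in Go (the endpoint paths and error handling are illustrative assumptions, not the real Cloud Controller API):

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchBindings queries the new (mTLS-certificate-aware) endpoint first
// and falls through to the legacy endpoint only if the new one is
// unreachable or does not exist yet (404), e.g. mid-rollout. If neither
// endpoint answers, the caller should treat that as an outage.
func fetchBindings(client *http.Client, baseURL string) (*http.Response, error) {
	// Hypothetical paths, for illustration only.
	for _, path := range []string{"/v2/bindings", "/v1/bindings"} {
		resp, err := client.Get(baseURL + path)
		if err != nil {
			continue // endpoint unreachable; try the next one
		}
		if resp.StatusCode == http.StatusNotFound {
			resp.Body.Close()
			continue // endpoint not rolled out yet; fall back
		}
		return resp, nil
	}
	return nil, fmt.Errorf("no binding endpoint reachable: likely an outage")
}

func main() {
	resp, err := fetchBindings(http.DefaultClient, "https://cloud-controller.example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("bindings endpoint responded:", resp.Status)
}
```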
K: No, in case of an outage, we would first query the new endpoint; it would say: well, not available. Then we'd query the old endpoint, which would also say: not available. And after that, the syslog binding cache or the syslog agent, depending on, yeah, where the outage happens, would then tell: yeah, an issue with fetching bindings is happening, most likely an outage or unavailability of one of the services.
K: Hey, that's why we initially did not put two endpoints in there, but rather relied on introducing the new one first in a separate update step, and then querying only that new endpoint; and if that one does not work, then something is wrong in general. And if we apply this flaky logic now, issues on the Cloud Controller could lead to client certificates being missed in the syslog agent.
J: I wouldn't expect the old endpoint to be available and the new endpoint not, outside of it being the old version of the code, or, probabilistically, us having problems with, like, latency or some other issue, right?
J: Not, like, a special upgrade where we're going from the old version to the new version, but just, like, any reason we're rolling a Cloud Controller.
J: Makes sense. We could talk a little bit about testing, though, right?
K: Acceptance tests, yeah. So, if I understood correctly from your comment on the PR, on the last one, you also tried around a little bit with the acceptance tests, and at least on our side it turned out to be more difficult than we initially thought.
K: But apart from that, you said: okay, we could also try to not use IP addresses to connect to the application, but instead use TCP routing. The problem here is that we actually do not support TCP routing, so we have not, or we did not, dig deeper into that.
K: Instead, we tried implementing the acceptance tests against IP addresses, and the applications in the CF acceptance tests do not directly do client certificate authentication.
K: We use the Envoy proxy for that, and with this we ended up in a lot of ifs and elses, because we do not know how the acceptance test pipeline is configured. Do the Envoy proxies check the client certificates?
K: Do we use internal TLS proxy ports or external TLS proxy ports, and stuff like that? So, how should I say this: it's difficult to find the correct way through the acceptance tests to finally test mTLS, and also to apply this to the official CATs pipeline.
K: I mean, I have a PR... not a PR, but I have a branch where this at least runs on our systems and makes some assumptions about the setup in general. I could push that so we could have a look. But I don't know how far you went when testing that.
J: We mostly chatted about, like, what kind of things we needed, and what kind of implications different ways of testing it have. We've talked about different things, like: what if we told people that... sorry, what if CATs included an application and just said: if you can figure out how to put it somewhere where you can have TCP routing, put it there and then set a variable that gives the URL to try and connect to it with. I think we talked about, like, okay...
J: Could we BOSH-deploy something? I think we said probably not, right? If we could BOSH-deploy something, we could deploy... syslog-release has a testing tool for receiving syslog, which is actually pretty nice.
J: I mean, it's maybe also worth pointing out that for CATs, I guess, technically speaking, we don't even necessarily have a BOSH at all. But I don't know whether non-BOSH-based CFs actually support syslog drains at all. So who knows, as far as that goes.
K: Yeah, well, as you were speaking of BOSH access: we tested all of this with a fluentd BOSH release, but only manually, of course, and then we went into the CATs rabbit hole. But I think, yeah, we have to do it with, most likely, a CF application here.
J: If we have TCP routing, we definitely can test it in our pipelines, right? So I think the default setup for cf-deployment should support TCP routing.
J: Yeah, I know there are some test packages out there that we could use, like, we could push the application and say: here are your certs, application; and then say: okay, here are the certs for mTLS, right? And as we were talking about this, we were like: oh wait, we forgot that we don't have access to the CA on the drains themselves. We can't say: here, this is the CA that we're using. Which really limits us to real certs, and, like...
J: We could push an app, but then the syslog agent would say: what is this certificate? I don't trust it; they signed it themselves. No, I'm not doing it, yeah. We wouldn't end up getting traffic to the actual app.
K: Yeah, that's why we use the Gorouter mTLS backend certificate. I think this is provided in the CF manifest and has to be stored somewhere, in our case, you know. But still, I don't know if this is a general setup or an SAP-specific setup.
J: I was talking to Amelia, and we were saying that, like... I think you're right, I don't think all environments support TCP routing. I think most of our testing environments should, but I also think that's the reason why we pulled... I think that's probably part of the reason we pulled syslog testing into a separate test suite.
J: I say we moved it out into a separate test suite; I don't think any of us were involved in that decision, but, looking back into it, looking back into the PRs and the discussion about it, I think that's why it happened.
K: So would you say that we should, like, you know, delay... what's the correct word... set this apart, place this apart, and concentrate on the... postpone! Thank you, Stefan. Postpone the CF acceptance tests first, and work on the PR, and think about another solution for the CF acceptance tests? Because I don't think we can bring them into the pipeline for now.
J: That's fair. I don't think we have a solution that everyone likes, at least, and that can run on everything, and I think, as far as CATs goes, we'll probably end up having to find the best solution. I think the big problem with TCP routing is the CA problem, and I think the big problem with... I think there's a lot of environment configuration that can change what the test is testing, in terms of continuing to use Diego IPs.
K: So, for my part, the next steps would be to rethink how we can do the update, and maybe design the polling a little bit more robustly.
K: Maybe we can store something internally to make the scheduler and syslog agents aware that an update has now happened, and if something goes wrong now, it's an actual outage and not an update case in which it should poll the legacy endpoints. Something like that.
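A minimal sketch of that idea: a persisted latch that remembers the new endpoint has been seen once, after which a failure is treated as an outage instead of triggering a fall-back to the legacy endpoint (the file path and names are illustrative assumptions):

```go
package main

import (
	"os"
	"sync"
)

// endpointLatch remembers, across restarts, that the new endpoint has
// been observed at least once. After that, a failing new endpoint is an
// outage, and the agent must not silently fall back to the legacy one.
type endpointLatch struct {
	mu   sync.Mutex
	path string // hypothetical marker file on the instance's disk
}

// Seen reports whether the new endpoint was ever observed.
func (l *endpointLatch) Seen() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	_, err := os.Stat(l.path)
	return err == nil
}

// MarkSeen records that the new endpoint responded successfully once.
func (l *endpointLatch) MarkSeen() error {
	l.mu.Lock()
	defer l.mu.Unlock()
	return os.WriteFile(l.path, []byte("new-endpoint-seen\n"), 0o600)
}

// shouldFallBack: falling back to the legacy endpoint is only safe while
// the new endpoint has never existed, i.e. mid-rollout.
func shouldFallBack(l *endpointLatch, newEndpointFailed bool) bool {
	return newEndpointFailed && !l.Seen()
}

func main() {
	latch := &endpointLatch{path: "/var/vcap/data/syslog-agent/new-endpoint-seen"}
	_ = shouldFallBack(latch, true) // true only until MarkSeen is called once
	_ = latch.MarkSeen()
}
```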
J: Maybe I was a little bit, I don't know, maybe a little bit harsh in terms of, like, not accepting just saying: use the new endpoint, then use the old endpoint. So I'll think about that a little bit more as well. I spent a little bit of time thinking about this a while ago, and I was like: maybe it's fine to actually jump straight into the new endpoints entirely. Like, maybe.
J: Maybe we can still tell some people: just go to the new version, and if you really, really, really care, here are the checkboxes to allow you to do it a little more staged. I think that's what we did when we moved from syslog adapters to syslog agents; that was one of the things we said.
J: We had a checkbox, or we had an option, and we told some people: hey, you can go straight to the new version if you don't care that much about downtime; but if you do care about downtime, here's how to deal with it. And that was kind of icky and gross and awful, but, talking about the difficulties with trying to poll, or try and fall back, maybe it's the lesser of two evils here.
C: It's just an interesting point; I was wondering whether such a topic has been discussed previously. Sometimes it happens on our side that we are required to shift workloads from a given zone to other zones, due to some reasons, and we do so by doing some magic. I was wondering what the process actually is of creating a community-driven topic out of a certain idea or whatever, and whether this is actually such a topic which we can consider.
C: No, so far, I mean, as far as I know, there is no such capability built into the runtime. But I was wondering whether we could somehow build it in, so that the runtime is capable enough, or smart enough, to trigger an evacuation of certain workloads from an affected zone and shift them to the remaining cells, as something built into the runtime.
C: It sounds like, exactly, exactly, I mean, quite a big feature request, which most probably requires a design, requires feedback, whatever, requirements, etc., etc.
A: We sometimes use cf-deployment as, like, our catch-all for "this is going to be a big change across Cloud Foundry", and it gets the most visibility there, with, you know, your example use case and how you would do it, like a proposal of a design and stuff like that. I think that's where we started for the pcap suggestion as well, the new pcap release. So I would start there and make an issue with lots of details that we can review.
C: Yeah, but you mean just the issue itself, not with a concrete proposal? We simply trigger a discussion based on the idea?
A: Yeah, I would just start the discussion there. If you think your proposal needs a doc, you know, maybe do that and link it in the issue, or write it in the GitHub issue itself.
E: Just a tiny question on the cf-deployment issues, because we also, for example, have HTTP/3 on there. Also, yeah, I got some eyes and a thumbs-up, but I just wanted to know: how are they reiterated, or what is my job to make sure that's not forgotten? And is it discussed somewhere, in the TOC, or...
A: No. If you want to discuss it in the TOC, I would suggest going to the TOC meetings that they have weekly. I suggest posting it in Slack and asking for comments, if you haven't already; that would probably be the best way to get eyes on it. But I guess it just depends what you need to move forward, right? If SAP is planning on doing the PRs, you know, I would say... well, HTTP/3 would probably be a discussion with this group, and maybe also a discussion with CAPI, or whatever working group Cloud Controller is in, to nail down some details, but all...
A: Say that again? Okay. But yeah, it's when you want other people to do the work that it starts becoming a little bit harder, right? But if you're willing to do the HTTP/3 work, just let us know how we can unblock you, and keep raising it.
A: Okay, but post it again, and all... I haven't looked at the details of it. I'm...