From YouTube: Config WG Meeting - 10/04/2018
A: So, on the status slides, I entered the sync information, the things that I was able to collect. On the Galley side, the file system work has actually landed, so Galley now has file-system-based input: in addition to the API server, we can get configuration from the file system. I also added the testgrid entries for the MCP versions of the tests; I actually put the link in for that.
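The file-system-based input mentioned above can be pictured as a snapshot-and-diff loop over a config directory. The sketch below is stdlib-only Go with hypothetical names (`Snapshot`, `Diff`); it is not Galley's actual implementation, which parses the files into typed config resources rather than just hashing them.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"path/filepath"
)

// Snapshot maps every file under dir to a hash of its contents.
// A real config source would parse the files into resources; here
// we only care about detecting changes.
func Snapshot(dir string) (map[string]string, error) {
	snap := map[string]string{}
	err := filepath.Walk(dir, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil || info.IsDir() {
			return walkErr
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		snap[path] = fmt.Sprintf("%x", sha256.Sum256(data))
		return nil
	})
	return snap, err
}

// Diff reports which paths were added, changed, or removed between
// two snapshots: the events a file-based watcher would emit.
func Diff(prev, curr map[string]string) (added, changed, removed []string) {
	for p, h := range curr {
		if prevH, ok := prev[p]; !ok {
			added = append(added, p)
		} else if prevH != h {
			changed = append(changed, p)
		}
	}
	for p := range prev {
		if _, ok := curr[p]; !ok {
			removed = append(removed, p)
		}
	}
	return
}

func main() {
	prev := map[string]string{"a.yaml": "h1", "b.yaml": "h2"}
	curr := map[string]string{"a.yaml": "h1", "b.yaml": "h3", "c.yaml": "h4"}
	added, changed, removed := Diff(prev, curr)
	fmt.Println(added, changed, removed) // [c.yaml] [b.yaml] []
}
```

Polling with `Snapshot` and reacting to `Diff` output is the simplest design; a production source would use inotify-style events instead of rescanning.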
B: It's just going to be feature parity with what we've done in the non-MCP case, so I'm not looking at scale at all there. Our testing looks at some scale of Pilot, but also load on the data path; we're primarily interested in the control path, and we suspect we can go through and see where a few things break. So even though it's at scale, I think it's functional coverage, yes.
A: I think MCP should be mostly no change from a performance-characteristics perspective, but I think it would be great to actually prove that. I don't have anything concrete to show for that yet.
Okay, so on the API side: I created an API governance doc some time ago and shared it with several of the TOC members, actually. After the meeting I'm going to add the link for this document. It's about establishing a process for API management. At the most recent meeting we actually reviewed it, and I'm going to incorporate the review feedback and then start implementing the process in the next two weeks. So, that brings us to the client library side.
A: We have a contributor who's actually looking at implementing the client libraries, but he's currently blocked; essentially his work is on hold because he has other work to do.
Okay, so there are a couple of things. First, the CRUD operations: this is whether we implement them in istioctl, like kubectl does, or not.
B: This is in progress. We've already deprecated the istioctl CRUD commands; the TOC signed off on that. We still need to introduce a validate command to replace them: when you deprecate the CRUD operations, the one thing you lose is the ability to do client-side validation, so it'd be useful to have that. There's no hook for that in kubectl, so I'm adding that command now.
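As a rough illustration of what client-side validation buys you, here is a toy structural check in Go. The function name `ValidateResource` and the fields it checks are assumptions for the example only; the real istioctl validate command checks resources against the full Istio schemas, and none of this is its code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ValidateResource is a toy stand-in for client-side validation: it
// checks the structural fields every Istio-style resource needs
// before anything is sent to the server.
func ValidateResource(raw []byte) error {
	var res struct {
		Kind     string `json:"kind"`
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Spec map[string]interface{} `json:"spec"`
	}
	if err := json.Unmarshal(raw, &res); err != nil {
		return fmt.Errorf("not valid JSON: %v", err)
	}
	if res.Kind == "" {
		return fmt.Errorf("missing kind")
	}
	if res.Metadata.Name == "" {
		return fmt.Errorf("missing metadata.name")
	}
	if len(res.Spec) == 0 {
		return fmt.Errorf("missing spec")
	}
	return nil
}

func main() {
	good := []byte(`{"kind":"VirtualService","metadata":{"name":"reviews"},"spec":{"hosts":["reviews"]}}`)
	bad := []byte(`{"metadata":{"name":"reviews"},"spec":{"hosts":["reviews"]}}`)
	fmt.Println(ValidateResource(good)) // <nil>
	fmt.Println(ValidateResource(bad))  // missing kind
}
```

The point of the deprecation plan is that this kind of check keeps working locally even after the server-side CRUD commands are gone.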
B: Once that's in place, we can send out the deprecation notice: you should stop using istioctl and start using kubectl for config operations, and for any validation, point at the new command. That's how the deprecation will play out. My intent is that in 1.1 it will be in a deprecated state but will still work, and then we'll remove it in the release after, so people will have three or four months to update tooling. It shouldn't be a big burden on anybody.
B: There's a bigger issue here of what we want istioctl to be in the future. Deprecating the CRUD operations was the beginning of that, but there are still a lot of other useful features in there. So: how we structure that, how it might integrate with kubectl's plugin system, how much of the kubectl or Kubernetes client libraries we integrate with so it has a more Kubernetes-native feel in terms of flag usage, loading configuration, or syntax. I can add all of that in the issue. Okay.
B: On authentication: as soon as you have an authentication policy in the default install, you avoid that problem, since we won't push the authentication policy. But for correctness, a minimal install with no authentication policy, or no policy at all, is something we have to figure out some way to handle, because Pilot is still depending on Kubernetes watch-style notifications: you do a watch and you get an indication that the initial state has been sent.
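The watch-style notification Pilot depends on can be sketched as: replay the current state, signal that the initial state is complete, then stream live updates. Everything below (the `Event` type, the `SYNCED` marker) is a hypothetical stdlib-only sketch of those semantics, not Pilot's code.

```go
package main

import "fmt"

// Event mimics a Kubernetes-style watch event.
type Event struct {
	Kind string // "ADDED", "MODIFIED", "DELETED", or "SYNCED"
	Name string
}

// Watch replays the current state as ADDED events, emits a SYNCED
// marker so the consumer knows the initial list is complete, then
// forwards live updates until the source closes.
func Watch(initial []string, updates <-chan Event) <-chan Event {
	out := make(chan Event)
	go func() {
		defer close(out)
		for _, name := range initial {
			out <- Event{Kind: "ADDED", Name: name}
		}
		out <- Event{Kind: "SYNCED"} // "initial state has been sent"
		for ev := range updates {
			out <- ev
		}
	}()
	return out
}

func main() {
	updates := make(chan Event, 1)
	updates <- Event{Kind: "MODIFIED", Name: "policy-a"}
	close(updates)
	for ev := range Watch([]string{"policy-a"}, updates) {
		fmt.Println(ev.Kind, ev.Name)
	}
}
```

The SYNCED marker is the piece a non-Kubernetes config source has to synthesize so that Pilot knows when the initial list is done.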
B: Okay, and then the final one, which is also for correctness: if we push config that Pilot doesn't like for some reason, we don't rate-limit the re-pushes right now. In a correct system this is not a problem, but if something's out of sync, it doesn't fail the way it should. Nothing crashes, but your logs are flooded with a bunch of pushes that aren't doing anything. So we really need to do something there.
B: The way that works: you push something and it gets nacked, then the client requests it again. It says, okay, I want more config, and the server says, well, I just gave this to you before, but I'll give it to you again. So rate-limiting it would help prevent a flood storm, and then we can also do something more intelligent later, like keeping track.
B: If we nack something, don't re-push it until that config has changed; that would help fix that whole path. We'll also look at different options for incremental MCP, because then you might be able to re-push just the config that wasn't bad. So there's a lot of work we can do there, but I think initially it's just rate-limiting pushes. Okay.
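The "keep track" idea, remembering which version a client nacked and not re-pushing it until the config changes, might look roughly like this (hypothetical names, not the MCP implementation):

```go
package main

import "fmt"

// NackTracker remembers, per client, the last config version that
// was nacked, so the server can skip re-pushing a version the
// client has already rejected.
type NackTracker struct {
	lastNacked map[string]string
}

func NewNackTracker() *NackTracker {
	return &NackTracker{lastNacked: map[string]string{}}
}

// OnNack records that client rejected version.
func (t *NackTracker) OnNack(client, version string) {
	t.lastNacked[client] = version
}

// ShouldPush is false only when the pending version is exactly the
// one the client already nacked; any new version is pushed again.
func (t *NackTracker) ShouldPush(client, version string) bool {
	return t.lastNacked[client] != version
}

func main() {
	t := NewNackTracker()
	t.OnNack("pilot-1", "v42")
	fmt.Println(t.ShouldPush("pilot-1", "v42")) // false: already rejected
	fmt.Println(t.ShouldPush("pilot-1", "v43")) // true: config changed
}
```

Combined with the rate limiter, this turns a nack loop from a log-flooding storm into a single suppressed push per bad version.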
D: A quick question with regards to the end-to-end MCP scale test, Jason, that you have in flight right now: can I get some context on that? The reason I'm asking is that I'm working on something very similar for Cloud Foundry and just want to see how much of it overlaps with what you have in flight.
B: At this point, the set of clusters that we have is not huge; there's not a huge number of Pilots or Mixers. I think the key there is really stress-testing Pilot with a larger number of proxies. So we want to see that we get feature parity, that Galley can support a handful of Pilots and a handful of Mixers, and that as config churns nothing gets stuck and Galley's memory usage doesn't increase. And that's all integrated with Kubernetes.
B: It's not a micro-benchmark; I don't even think it's a post-submit at this point. I think it's somebody running it periodically, or in the background on a dedicated test cluster. The long-term plan is that it's automated, as a post-submit or a periodic job, but that should be driven by the infrastructure, actually driven by the Performance and Scalability workgroup. They're making progress on that; I don't know the current state, but the plan is to just hook into that once MCP is the default there.
B: There's no dedicated MCP test; it's just the tests that are already running, with that bit flipped. For 1.1 we're having to spin things up in parallel, because we still want the legacy path to work, so I'm just duplicating all the test infrastructure and making sure that when things come up with MCP, everything works the same way. From the user's outside perspective, nothing is different. Once we have confidence in that, we flip that same bit in the maintained infrastructure that this other team is working on.
D: So this is essentially after your copilot is fully integrated with MCP: it sends the service entries, virtual services, destination rules, and gateways over MCP. What we are doing is sort of testing before and after MCP. We were testing this against big deployments of Cloud Foundry, which basically consist of about 200 containers across what we call Diego cells: there are about 20 Diego cells, and on each cell we are running about, I think, 10 containers, yeah.
D: So it's just a full-blown, at-scale Cloud Foundry, and we're trying to take down the CAPI component, which is basically the route provider to copilot. We take it down and make sure the syncing operation actually comes back up. There are a lot of Cloud Foundry-specific scenarios in there, but mainly we're also monitoring copilot's and Pilot's VM CPU and memory usage; during the spikes, we're trying to observe that as well. Okay.
D: This is still in flight; it's too early to make that judgment call at this point. We literally picked up the story a few days ago, so we're in the middle of setting up the environment and making decisions on the metrics and so on, but observing the CPU and memory is definitely one of the things at the top of our list.
B: So on metrics: Daniel recently added MCP server metrics, and there are some hooks, so you should, if you wanted to, be able to plug in your own metrics. We're using OpenCensus now, with Prometheus exporting, but it's defined as an interface, so if you needed to, you could put your own metrics into that. You can see connection attempts and ACK/NACK failure rates, and I think the client support for that is also landing pretty soon.
D: Sure, the server metrics would be kind of interesting for us, because one of the goals we have here is to measure convergence, from the time the config is created inside copilot all the way to when it's fully propagated. So those hooks for the server side would be very interesting, because we do need some metrics in copilot, for the most part. Okay.