From YouTube: Config Working Group 2/7/2019
C: I think it would be safer to look at the released 1.1 for now. I don't know what state master is in, and there isn't much to talk about there, because we're focusing all of our development on 1.1; master hasn't been synced and it's just in kind of an unknown state. So, for MCP in 1.1: let me summarize what's in there now, and tell me whether this matches up with what you've seen. There's the old client/server code, which I'll call the old stack, and then there's the source/sink.
A: There's a branch divergence at this point. I believe Andy used to do regular merges from 1.1 to master, and I believe at some point that stopped, but we keep checking in code to 1.1. For all intents and purposes, the old stack should be considered defunct at this point. So the old stack is there, flag-protected, kind of as a way to say: look, if you find something that you agree is wrong with the new stack, you can always fall back.
C: We were kind of phasing it in. We did some refactoring work, which I think you saw, where we put in the new stack; then we switched everything over to use the new stack and were letting that bake for a little bit. The final step would be to remove the old stack. So any changes are going into that.
A: I think that would be yes. So even if you end up shipping the old stack with 1.1, that would be more of a failsafe than something that's done deliberately. As I said, the whole plan was to switch to the new protos and the new model for the 1.1 release and not have the backward-compatibility burden, right? That was the main reason we wanted to have all of it in 1.1.
C: There are kind of stepping stones here. In order to have meaningful names, and for things not to be confusing, we renamed things, so we don't have the server sending the response and the client sending the request, or the client sending a response and the server sending a request. We redid all the names just to make everything consistent, even if we don't hook all of it up. So the source/sink stack is there for MCP, and it's what's still used.
C: None of that... the architectural work is in place, but none of it is hooked up. There's basically feature parity with what we had before, but with the final API names. In addition to that, we also have the incremental option, which is off by default, but the plumbing is there now. So if we want to, or need to, turn that on at all, we're in a good place to do that.
C: I don't know whether that would be on in 1.1.0, but the code is all complete, and we have stress tests at that level. It's the sort of thing we could turn on in a deployment. So if we see that there's a performance issue, or if somebody wants to turn it on in their own deployment, like a foundry, it's there at that point.
B: The next item is actually something I added with regard to naming conventions, and it's relevant to what we were talking about a second ago. So with release 1.1, and with the protos being named what they are, how flexible is that going to be after the 1.1 release? For instance, there is something called, like, the new resource source client, right?
C: It's an API, so I think at this point, for anything we make, we need to be very careful about backwards compatibility, yeah. So that's one thing: I don't think we'll change it. In terms of what we actually ended up with, it seemed to be the least horrible version of all the options we came up with.
C: The way I look at this is: in MCP, Galley implements a source service to provide a source of configuration. Then, if you want a client, you would create a client of that source service. And if you look at the sink, in the same package, which implements the same functionality, it implements a client of the source service.
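The source-and-client arrangement described here can be sketched in Go. This is only an illustration of the idea; the interface and type names below (Source, memSource, client) are invented for the sketch and are not the real MCP or Galley APIs.

```go
package main

import "fmt"

// Source provides configuration for a set of named collections, the
// role a component like Galley plays in this sketch.
type Source interface {
	Collections() []string
	Get(collection string) ([]string, error)
}

// memSource is a toy in-memory Source standing in for a server.
type memSource struct {
	data map[string][]string
}

func (m *memSource) Collections() []string {
	out := make([]string, 0, len(m.data))
	for k := range m.data {
		out = append(out, k)
	}
	return out
}

func (m *memSource) Get(c string) ([]string, error) {
	items, ok := m.data[c]
	if !ok {
		return nil, fmt.Errorf("unknown collection %q", c)
	}
	return items, nil
}

// client consumes configuration from any Source, mirroring "create a
// client of that source service".
type client struct{ src Source }

func (c *client) count(collection string) int {
	items, err := c.src.Get(collection)
	if err != nil {
		return 0
	}
	return len(items)
}

func main() {
	src := &memSource{data: map[string][]string{"gateways": {"gw-1", "gw-2"}}}
	c := &client{src: src}
	fmt.Println(c.count("gateways"))
}
```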
C: So for packages, I think we have a lot more freedom there, yeah. So that's easier to do. There's a lot of work, or there's work that we're starting to do on master, calling out the places we're targeting and making some rate-limiting changes.
A: I think the check-in bar right now for the 1.1 branch is pretty high, and if you try to do any refactoring changes there, they're probably not going to land. But I think post-1.1 that's entirely feasible, right? Because as long as we don't change the API, we can change the client libraries as well.
C: But yes, I think probably additional documentation to make it easier for new people coming to it, and if there are things we can do at the package level to make it more obvious, that's all reasonable. But I think the same comment about backwards compatibility applies, though we probably have more freedom there. So we can do either aliasing or, you know, kind of wrapper functions to migrate over, at least internally.
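The aliasing and wrapper-function approach mentioned here is idiomatic in Go. A minimal sketch, with all identifiers (NewSource, OldSource, NewOldSource, MakeNewSource) invented for illustration rather than taken from the real Istio packages:

```go
package main

import "fmt"

// NewSource is the type under its final, post-rename name.
type NewSource struct{ Name string }

// OldSource is a type alias: code written against the old name keeps
// compiling, and its values interoperate freely with NewSource.
type OldSource = NewSource

// MakeNewSource is the constructor under its final name.
func MakeNewSource(name string) *NewSource { return &NewSource{Name: name} }

// NewOldSource is a thin wrapper kept for callers of the old
// constructor name; it simply delegates to the new one.
//
// Deprecated: use MakeNewSource.
func NewOldSource(name string) *OldSource { return MakeNewSource(name) }

func main() {
	// The alias makes an *OldSource assignable to a *NewSource variable,
	// so external consumers migrate at their own pace.
	var s *NewSource = NewOldSource("galley")
	fmt.Println(s.Name)
}
```

Because a type alias declares the same type rather than a new one, this kind of migration avoids the external build breakage discussed next.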
C: We know that within Istio proper we're not going to have a build breakage, but you're consuming this package externally, and other people might be as well, so I think we have to be thoughtful with that, and with whether we have some stronger guarantees that this package is an external package, yeah.
C: We have the design doc, so there's something there, including sequence diagrams, right? I think what we need to do is put that into the Istio API repo, so it's all in one spot. Then when you browse the protos, there'll be some cross-referencing, and you can see some more explanation of what source and sink are and how they map to client/server.
A: One thing that I'm going to call out as something that's not obvious, but is part of the API, is the names of the collections, right? They have also changed, and we should try to be very careful about changing them in a backward-compatible way. I'm just putting it out there because not only are the API protos subject to backward-compatibility concerns; the actual collections, the types of protos in the API repo, are all subject to the same thing.
E: Something I don't mind talking about: is there any update on XDS? I made a PR a little while ago about the XDS updater in the core data model, which sparked quite a bit of conversation; I believe it's PR $10.99. I just wanted to get some status on that. I was really hoping I could talk to you, Nate, in this meeting to get some updates and see what's happening.
C: When do you want to do that? I think the part that we don't have in Istio today is a good end-to-end, at-scale test that we can repeat later on. We have large-scale testing work that we do, but it's more of just all of Istio, and it's not specifically about, like, I want to dial up...
C: It's not only parameterized: did I have configuration turned on across all collections, and then a higher number of clients? And I think the focus there is more on Pilot than on Galley, which I guess maybe you don't care about as much? But for the part that you're consuming from Istio, we have benchmarks and a stress tester.
C: At this point, we're looking to make sure we're not leaking anything, which is kind of the low bar, and to understand what the memory characteristics, the resource characteristics, are. So: how much memory per client? How does that scale if we increase the number of collections, which was previously the number of types? How does that scale?
C: And do we see that we're getting consistent state delivered end-to-end? So we can spin up the server, start pumping updates through, and fan them out to a number of clients, and check that the clients receive what they expect, with the idea that changes can be accumulated on the server. A client may skip updates, but it's all eventually consistent: the client will end up with the right end state. So there are some checks there, and they kind of hash over the current snapshot.
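The hash-over-the-snapshot check described here can be sketched as follows. This is a hedged illustration only: the snapshot representation (a map of collection name to items) and the hashing scheme are invented for the sketch, not the actual MCP types.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// snapshot maps a collection name to its configuration items.
type snapshot map[string][]string

// hash produces a deterministic digest of a snapshot, so the server's
// full view and a client's final state can be compared cheaply even if
// the client skipped intermediate updates.
func hash(s snapshot) string {
	keys := make([]string, 0, len(s))
	for k := range s {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		// Sort item order so delivery order does not affect the digest.
		items := append([]string(nil), s[k]...)
		sort.Strings(items)
		fmt.Fprintf(h, "%s=%s;", k, strings.Join(items, ","))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	server := snapshot{"gateways": {"gw-1", "gw-2"}, "services": {"svc-a"}}
	// The client skipped some intermediate updates but converged on the
	// same final state, so the digests match (eventual consistency).
	client := snapshot{"services": {"svc-a"}, "gateways": {"gw-2", "gw-1"}}
	fmt.Println(hash(server) == hash(client))
}
```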
C: You can see that, whether it got a full-state delivery or an incremental update, the end result is consistent with what the full view on the server was. So there are various parameters that you have; there are some built-in tests and benchmarks, and they're parameterized, so you can do additional experiments for your particular deployment scenario.
C: You can get a sense of how that looks. The issue I think we found in Istio, in Pilot, was a resource issue: we were doing an excessive number of garbage collections because of how Pilot, how I wrote the initial code in Pilot, parses the incoming event stream. So it's that kind of problem, where you see excessive memory allocations and then subsequent garbage collections. That's the sort of thing you'd be looking for.
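The per-event-allocation problem described here can be made visible with Go's testing.AllocsPerRun, which is usable outside the test framework. The two parse functions below are invented stand-ins for illustration, not Pilot's real stream-parsing code.

```go
package main

import (
	"fmt"
	"testing"
)

// Package-level sink so the compiler cannot optimize the results away.
var sink []byte

// parseNaive allocates a fresh buffer for every event, the pattern that
// drives up garbage collections under a high-rate event stream.
func parseNaive(event string) []byte {
	return append(make([]byte, 0, len(event)), event...)
}

// parseReuse appends into a caller-owned buffer, amortizing allocations
// across the whole stream.
func parseReuse(buf []byte, event string) []byte {
	return append(buf[:0], event...)
}

func main() {
	event := "cluster-update"

	// Average allocations per call: at least one for the naive parser.
	naive := testing.AllocsPerRun(1000, func() { sink = parseNaive(event) })

	// Zero per call once the buffer is reused.
	buf := make([]byte, 0, 64)
	reused := testing.AllocsPerRun(1000, func() { buf = parseReuse(buf, event) })

	fmt.Println(naive >= 1, reused == 0)
}
```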
A: The things that we have, essentially... we are using micro-benchmarks at this point, and they're very development-focused, right? So they're not numbers... you're not trying to collect numbers for ops purposes at this point. So what is your interest in this? Are you looking at, like, overall performance improvements, or do you want, like, ops numbers?
A: It's very hard to actually run the micro-benchmarks reliably for CPU measurements. For memory measurements they're pretty solid, but for CPU the numbers are really bad. Like, I did this for Mixer: when we were doing the micro-benchmarks for Mixer, we could not create gates out of them on CPU. We'd have to have extremely long-running runs to get anything consistent.
E: It sort of does, but... so I remember, a couple of months ago, Jason and I were talking about writing the same sort of micro-benchmarks, but with just service entries, only to produce pprof data. Is this because the micro-benchmarks aren't exactly giving the numbers that we're looking for? Basically, what I'm trying to say is: is the difference between writing them as a test versus as a benchmark why they generate the inconsistent CPU results you're seeing?
C: I think so. The perf and scale group has a perf gate; they run that sort of thing for other parts of Istio, and I think the idea is that you have a kind of dedicated environment for that testing: a dedicated machine, or a dedicated set of VMs. But clearly not CircleCI, because it's heavily virtualized; you sometimes see second-long delays in tests. So if we were relying on sleeps, it would become very flaky, whereas a benchmark environment like that is not going to be flaky in the tests.
C: Another thing, to get really repeatable results: we could set this up as a post-submit test, or have a dedicated machine that is periodically running these. That would be, I think, something you could get reliable, consistent results out of, but it would be tied to that particular machine, because if I then ran it on my machine, with everything else running, of course that's going to be much different between my laptop and a dedicated machine, yeah.
A: If you guys are interested, by the way: just for the memory-usage aspect alone, it's possible to write gating benchmarks. We can write benchmarks that just measure the memory usage, or the allocations, and have a gate out of it. I actually had some scripts for this from almost a year ago, but I didn't create a gate out of them, so we can definitely resurrect them and try to create a gate with those.
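A memory gate of the kind described here can be sketched with testing.AllocsPerRun: measure allocations per operation and fail when a budget is exceeded. The gate helper and both example operations below are illustrative, not the scripts mentioned in the meeting.

```go
package main

import (
	"fmt"
	"testing"
)

var sink []byte // prevents the compiler from eliminating the work

// gate reports whether op stays within an allocation budget per call;
// a CI job could fail the build when this returns false.
func gate(budget float64, op func()) bool {
	return testing.AllocsPerRun(100, op) <= budget
}

func main() {
	// Allocates a fresh buffer per call: fails a zero-allocation gate.
	leaky := func() { sink = make([]byte, 1024) }

	// Reuses one buffer: passes a zero-allocation gate.
	buf := make([]byte, 1024)
	clean := func() {
		for i := range buf {
			buf[i] = 0
		}
	}

	fmt.Println(gate(0, leaky), gate(0, clean))
}
```

Unlike CPU timing, allocation counts are deterministic, which is why (as noted above) memory gates hold up even in virtualized CI environments.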
C: I think extending that would be great, and that's the sort of thing, since it doesn't touch any production code and is just tests, that we could even argue we want to put into 1.1. That's the sort of work we want to be doing, so it's encouraged. It would be in 1.1, and then it'll end up in whatever release the master branch is.
E: Not necessarily. I was just looking at this stress package yesterday for the very first time, and when I was going through the code I kept thinking some instrumentation would be nice. I'm not even too sure what's available right now.
C: ...what else: the number of updates applied. But we're not doing complex analysis on it at this point; we just kind of dump it out at the end. So: here's your target update rate, whether that's ten percent or one percent, and then we can see that, yeah, that's actually what the clients produced; that's consistent. You see the number of snapshots that the server generated versus the number of snapshots that the client applied, and whether all those updates were consistent, which should always be the case.
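The end-of-run accounting described here, snapshots generated by the server versus snapshots applied by each client, can be sketched as a simple stats check. The type and field names are invented for the sketch; clients may lawfully skip intermediate snapshots, so the check only requires that every client ends on the final one.

```go
package main

import "fmt"

// runStats summarizes one stress run.
type runStats struct {
	generated int   // number of the last snapshot the server produced
	applied   []int // last snapshot number each client applied
}

// consistent reports whether every client converged on the server's
// final snapshot, the invariant the stress test dumps out at the end.
func (s runStats) consistent() bool {
	for _, v := range s.applied {
		if v != s.generated {
			return false
		}
	}
	return true
}

func main() {
	ok := runStats{generated: 100, applied: []int{100, 100, 100}}
	lag := runStats{generated: 100, applied: []int{100, 97, 100}}
	fmt.Println(ok.consistent(), lag.consistent())
}
```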
A: Feel free to contribute, yes! One of the things we're having problems with, especially in the config group right now, is a lack of resources, so to speak, to get some of these things done. We want to get a lot of things done, and somebody actually stepped up to do the client libraries for us, which is great. So if this is a particular interest area for you, like getting the numbers, help is appreciated.
A: Being able to depend on MCP, I think, would be the common goal here. And if we can get that, then from that point on, beyond 1.1, we can look into enhancing things: enabling incremental by default, or the versioning model, essentially how we can do soft-versioning operations. We have some plans for Galley; I think we will need to coordinate on those.