From YouTube: Kubernetes SIG Scheduling Meetings 20170518
Description
No description was provided for this meeting.
C: I wanted to do a parliamentary-procedure type thing and say that, as part of our regular meetings, we should probably enumerate any test failures that back up the queue, just so we're good citizens. I don't know if I've seen any recently; I know that there were some, but is there anything backing up the queue, any reviews that are backed up, or test failures?
A: Guess not. Okay, I guess we'll get started, out of the hallway, folks. Rohit, you wanted to present; I see you're on here. So do you want to present?
B: Okay, so hi, this is Rohit here from Huawei. We have worked on some technology projects on Kubernetes, and I have been working on Kubernetes for the last year or so. I worked with Teddy and the team in the US, and we worked on this feature called resource oversubscription. I will just briefly introduce it. The feature is about how you manage resources between high-priority jobs, which we call latency-critical jobs, and best-effort jobs, which don't require any guarantee to run.
B: We took inspiration for this from a couple of papers that we will share in the slides ahead. There are some existing proposals in the community that try to address the same problem of how you share resources between these two different types of jobs, still achieving the SLOs of the latency-critical jobs while achieving more efficient utilization of the resources.
B: We saw a couple of problems in such community proposals. One such proposal is highlighted in the slide here; I will not go through it, but I will just discuss some points that we thought were not covered in it, like: how do we actually measure the performance of an application, and how do we determine how to evict best-effort jobs? How do you place a best-effort job without knowing its resource request amount? Currently we only care about primary jobs, which can request certain resources; for best-effort jobs, we don't care about things like a request. So one drawback of this design is that, as I said, the best-effort jobs can't specify any requests or limits.
B: So, as I said, we took inspiration from some papers. One of those was Heracles, from Christos Kozyrakis's team, which describes how to achieve sharing of resources like CPU, memory, or network bandwidth between different types of jobs, so as to improve the efficiency of utilization of those resources. This was one of the inspirations for our work. On the next slide I talk about Mesos Serenity, which is again an open-source project, one which tries to implement resource oversubscription using a QoS controller pipeline.
B: This QoS information is passed on to the slave, percolated back to the master, and then the frameworks that try to launch a particular job take into account these statistics: which resources are reliable and can be used by low-priority jobs, and which could be taken back, or revoked, if we observe that the latency of the high-priority job is going above the threshold. It tries to consider all these factors and then deploys both kinds of jobs onto the same slave nodes. So this is another inspiration that we took, and we tried to implement a similar mechanism in Kubernetes. This is the high-level system overview of the various interactions that happen between the components.
B: By a primary pod I mean a latency-critical job, like a database server or a web server, which has to run continuously without its SLO being affected. So assume that we have a pod P1 which is already running on the slave node, and we have a new component called the estimator, which tries to measure the amount of resources consumed by the different pods on this particular slave node.
B: The estimator reports back the reclaimable resources from pod P1. Say pod P1 requests 4 GB of memory; if it is not at its peak load, it might be consuming just 2 GB of that memory. The remaining 2 GB is still reclaimable and can be reused, so the estimator reports this kind of reclaimable resource back to our master, where it is used when we get a new job. We made some changes to the scheduler as well.
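A rough sketch of the estimator arithmetic described here (the function name and units are illustrative, not from the actual implementation): reclaimable capacity is simply the requested amount minus observed usage, floored at zero.

```python
def reclaimable(requested_gb: float, used_gb: float) -> float:
    """Reclaimable capacity = request minus current usage, never negative."""
    return max(requested_gb - used_gb, 0.0)

# Pod P1 requests 4 GB but is only using 2 GB at the moment,
# so 2 GB can be offered to best-effort pods.
print(reclaimable(4.0, 2.0))  # 2.0
```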
B: What the scheduler will do is try to find a particular node which has reclaimable resources for a secondary pod to run on, and regular resources for a regular pod to run on. Now assume that it schedules both of these pods onto the same node, assuming that we have sufficient reclaimable resources for one pod and regular resources for a different pod, P2. So now everything is running fine.
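A minimal sketch of that placement check, using hypothetical node fields (`free_regular`, `reclaimable`) rather than the real scheduler's data structures: a secondary pod fits against the node's reclaimable pool, a regular pod only against its regular free capacity.

```python
def fits(node: dict, pod: dict) -> bool:
    """A secondary pod may consume reclaimable capacity; a primary pod only regular capacity."""
    pool = node["reclaimable"] if pod["secondary"] else node["free_regular"]
    return pod["request"] <= pool

node = {"free_regular": 2.0, "reclaimable": 2.0}
print(fits(node, {"request": 1.5, "secondary": True}))   # True
print(fits(node, {"request": 3.0, "secondary": False}))  # False
```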
B: All the pods are running fine, and now we observe that pod P1's usage is going beyond the threshold. The usage could be related to memory, or network I/O bandwidth, or any other resource. For our project we just considered metrics like memory and network bandwidth for our prototype, but ideally it could be any resource: it could be L2 cache or L3 cache or any other shared resource.
B: Correct, yeah. Continuing ahead: now we find that pod P1's usage goes beyond its threshold, so we need to take some action. What could the actions be? We could free up some memory by explicitly killing a particular best-effort pod, or, if we want to take back network bandwidth, we could just pause a particular best-effort pod, because that would stop it doing network I/O. These are the two possible actions that we consider in our framework.
B: Correct, yeah. Overall, our algorithm is actually quite primitive, in that we maintain the last 10 samples from cAdvisor for a particular kind of metric, like memory or network bandwidth, and we try to predict, for the next interval that we are monitoring, what the rate of increase in that particular resource could be. We have a very primitive formula to calculate this, but of course it could be improved upon.
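The talk doesn't give the exact formula, so the following is only a plausible stand-in for that kind of primitive forecast: average the deltas over the sample window and extrapolate one interval ahead.

```python
def forecast_next(samples: list) -> float:
    """Predict the next sample by extrapolating the mean delta of the window."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    mean_delta = sum(deltas) / len(deltas)
    return samples[-1] + mean_delta

# Usage climbing by roughly 1 unit per interval:
print(forecast_next([1, 2, 3, 4, 5]))  # 6.0
```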
C: So there are two problems with the sampling sizes: you're doing a window across the cAdvisor samples, but the cAdvisor samples have a whole window too. So you could have bursts between those windows and you'd have no idea. I'm just trying to point out the catch-22 you're in, but I see what you're trying to do.
C: Yeah, the problem was that pod one was approaching a threshold, and we gave some of pod one's capacity away so we could run pod three. But pod one actually needed its original request, or some value within its original margin, and we needed to interrupt pod three and give that space back to pod one; and that sampling interval was the problem. So we'd basically be punishing the pod that had the guarantee.
E: Yeah, maybe. We have a somewhat similar question here. I guess for certain priorities, for example for a job which is latency-critical, you should never take resources away, even if your resource estimator believes that it uses only half of the resources it has asked for. So you'd never take resources away from certain types of workloads. We don't have the concept of priority yet in Kubernetes, but we are trying to add it, so probably, depending on the priority of the pod, sometimes you may or may not want to take resources away.
A: I'm not sure about the first part; not everybody seems to agree that you wouldn't want to take resources away from a high-priority pod. But I'm not understanding: why can't you just kill or throttle the opportunistic or low-priority one when you get into a situation where the high-priority one actually needs the resources?
E: You can; the problem is that, if your workload is latency-critical, you may not be able to do it quickly. If, for example, my job suddenly wants to burst on CPU or memory for, I don't know, a fraction of a second or a millisecond, you may not be able to kill the other jobs quickly enough and give the resources back to the workload.
A: But if you need more memory, won't the system handle it? In the worst case, we have a user-space out-of-resource killer, but even if that didn't exist, wouldn't the kernel OOM killer kill based on the OOM scores? And you could set those scores such that the high-priority pods would not be killed, or would be killed last.
G: If you look at the Heracles paper, what they did was try to maintain headroom across a number of hardware resource vectors, including things as low-level as voltage across the chip and stuff like that, and it's based heavily on specific tuning data from the legacy application they're protecting. I'd say it's not easy; you definitely need something faster than the kubelet sync loop, so you need to be able to act on the node itself.
B: No problem, thank you. I think we covered most of the things, so I'll just rush through the slides. This is the general overview of the resource estimator function: it collects the metrics from cAdvisor, there is a data smoothing and forecasting mechanism, and then it tries to update the node status with the reclaimable resources.
F: Also, I saw that in a previous diagram you do have a node-level way to forecast the resources, so you are taking into account the threshold number, the headroom which somebody just mentioned, as part of the smoothing and the forecasting. I'm assuming you're going to take care of the threshold for the latency-critical pod as well, though, isn't it? Because, let's say you've...
H: Then why not start with 2 GB? I mean, I am trying to say how it might be better to put this problem differently: as a pod's resource requirement increases, if there are other pods of higher priority, then instead of reclaiming resources, we should give it more resources, and because it's high priority, we should evict the other, lower-priority pods.
F: No, the thing is, the user specifically said, "I want to reserve four CPUs." The user doesn't know otherwise. So you find out, by constantly monitoring that application, that even though it reserved four CPUs, in the last 10 samples or 50 samples, or whatever the exponential smoothing shows, it is using only two CPUs, so we're going to take two CPUs away.
F: So this problem is resource estimation versus reservation. Either you know upfront that, even though the user is asking for four CPUs, based on this application type he may only need two; that's called resource estimation. So that's another approach, but that's not something we are doing.
H: Yeah, my question is: instead of initially assigning the upper bound, or whatever bound is better, why don't we approach it so that we start with something and then increase or decrease depending upon the requirements of the high-priority pods, taking away from the other, lower-priority pods as defaults?
F: That resource estimation thing is what the Quasar paper guys have done: instead of doing a resource reservation, you estimate the resources and you do the allocation and assignment. That's another approach, but that's something different from what you're suggesting, I think. But how would you know the lower bound?
H: Okay, so it's like we are only aware of the upper bound, because of the way we have it in kube: there are always requests and limits. So the request is like a lower bound that we make available for pods. Like I was suggesting, we build on the existing concept of request, and we start from there.
B: So in our implementation we have actually accounted for the requested resources themselves, and not the limits, because in the case of the limits, if you have the OOM killer or the eviction manager, it would anyway take care of killing pods if they go beyond their limits. So we just consider the requested resources, and we have some thresholds around those, not the limits.
H: The only thing I'm trying to say is that, if we start from the... I mean, I understand you're saying the user does not know about the minimum, but Kubernetes already has something called a request, and we are starting from there. So my point is, instead of reclaiming resources, we wouldn't have to care about reclaiming resources at all.
B: Yeah, so the corrective actions include killing a pod (Docker provides a way to kill a container by doing a docker stop), or, if we observe the network bandwidth going beyond our threshold for a primary pod, we try to freeze a particular secondary pod by doing a docker pause. And if we find that the latency-critical job is now well within control for its network bandwidth, we try to do a docker unpause and unfreeze the particular frozen container so that it can resume its normal operation.
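As a sketch of that network controller's decision logic (the function just returns action strings; in the real system these would map to docker pause / docker unpause on the container):

```python
def network_action(primary_bw: float, threshold: float, frozen: bool) -> str:
    """Freeze a secondary pod while the primary's bandwidth is over threshold;
    unfreeze it once the primary is back under control."""
    if primary_bw > threshold and not frozen:
        return "pause"
    if primary_bw <= threshold and frozen:
        return "unpause"
    return "noop"

print(network_action(120.0, 100.0, frozen=False))  # pause
print(network_action(80.0, 100.0, frozen=True))    # unpause
```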
F: Either way, on the restore time, there's a lot of research there. A paper came out where they did an active-memory approach: they don't bring back the whole memory, so the restore process is pretty fast. So there's a lot of work going on there as well, but this was kind of a starting point: you can do freeze and unfreeze, and then potentially you can do checkpoint-restart as well.
B: I'll just run through the rest of the slides. This is the overall framework. What we have is an initialization from a JSON file, basically a config file, which has some thresholds for the different types of resources, and there is also a flag where you can enable or disable this particular feature. You could also pass it as a flag to the kubelet when it starts up, but for simplicity...
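The exact schema isn't shown in the talk, so this is only an illustrative shape for such a config file (the field names are assumptions): per-resource thresholds plus a feature gate.

```python
import json

# Hypothetical config: per-resource thresholds plus an enable/disable flag.
raw = """
{
  "enabled": true,
  "thresholds": {"memory": 0.8, "network_io": 0.7}
}
"""
config = json.loads(raw)
print(config["enabled"], config["thresholds"]["memory"])  # True 0.8
```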
B: ...we have kept it in the JSON file. The first step is that we acquire metrics from cAdvisor, and then we actually run through a series of controllers, where each controller focuses on just one type of resource. So if you have a memory controller, it will try to find out if there are any existing secondary pods which can be killed...
B: ...if the LC job's memory is above the threshold, and it will simply build a list of actions of which pods to kill. In the case of the network I/O controller, it will tell which pods to freeze or unfreeze. We haven't built the shared-resource controller, which would account for CPU cache. And there is another controller which we built for SLA-based actions, where we try to measure the application performance latency of LC jobs and then take corrective actions.
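The controller pipeline described above might look roughly like this (a sketch; the controller names, state fields, and action tuples are illustrative): each controller receives the action list built so far and extends it based on its own threshold check.

```python
def memory_controller(state: dict, actions: list) -> list:
    # Kill a secondary pod if the LC job's memory exceeds its threshold.
    if state["lc_memory"] > state["memory_threshold"]:
        actions.append(("kill", state["victim"]))
    return actions

def network_controller(state: dict, actions: list) -> list:
    # Freeze a secondary pod if the LC job's bandwidth exceeds its threshold.
    if state["lc_net_bw"] > state["net_threshold"]:
        actions.append(("freeze", state["victim"]))
    return actions

def run_pipeline(state: dict, controllers: list) -> list:
    """Each controller takes the action list from the previous one and extends it."""
    actions = []
    for controller in controllers:
        actions = controller(state, actions)
    return actions

state = {"lc_memory": 3.5, "memory_threshold": 3.0,
         "lc_net_bw": 50.0, "net_threshold": 100.0, "victim": "be-pod-1"}
print(run_pipeline(state, [memory_controller, network_controller]))
# [('kill', 'be-pod-1')]
```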
C: There's a big lag for me asking questions live in the audio. So, just out of curiosity, has there been any thought about taking cAdvisor out of the mix? Because there's a ton of low-latency utilities for monitoring that could do this smarter, because you're almost running these as DaemonSets, right?
A: I had a quick question: I didn't see CPU. I saw CPU cache and bandwidth, but not CPU; is there a reason? And also, you mentioned networking. I don't think there is a way to reserve it; we don't have network bandwidth as a first-class resource in Kubernetes. So where were you getting the network requests for doing this with networking?
B: Yeah, so explicitly a user doesn't request any network bandwidth, but what we observe is, again over the last n samples, what the network I/O usage is for a particular pod, and based on that we take some actions. So it is internal and not exposed to the end user, but he can say that he would like to reuse some available network bandwidth if possible. So it's all data from cAdvisor that we rely upon.
B: As for CPU, we explicitly don't try to control it. It should be taken care of, as we discussed earlier in this meeting, by Kubernetes itself: if a particular pod is not using a particular CPU, that CPU should anyway be usable by the other pods.
B: Yeah, okay. The typical process of a controller would again be: take the action list from the previous controller, then check the thresholds for the particular type of resource, and then, if a threshold is exceeded, it would basically sort the secondary pods by descending order of usage, because there could be multiple secondary pods on a node.
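Victim selection as described can be sketched with assumed pod dicts: sort the secondary pods by usage, highest first, so the biggest consumer of the contended resource is reclaimed first.

```python
def victims(pods: list) -> list:
    """Secondary pods ordered by descending usage of the contended resource."""
    secondary = [p for p in pods if p["secondary"]]
    return sorted(secondary, key=lambda p: p["usage"], reverse=True)

pods = [
    {"name": "lc-1", "secondary": False, "usage": 3.0},
    {"name": "be-1", "secondary": True, "usage": 0.5},
    {"name": "be-2", "secondary": True, "usage": 1.5},
]
print([p["name"] for p in victims(pods)])  # ['be-2', 'be-1']
```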
B: I would just skip this table of changes, but just to note: we took this from a paper where they actually schedule short jobs, jobs which could execute in a very quick amount of time, a short span of time, and we could use some revocable resources to execute those kinds of jobs. I won't go into the details of the flowchart, given the time.
B: Just as an example: we would have this kind of a pod which requests revocable resources, and if we are able to successfully schedule it, we will annotate it as secondary in its annotations. So in our controllers we check whether a pod is a secondary pod or not and then take corrective actions.
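The annotation check could look like this (the annotation key and value are placeholders; the talk doesn't spell out the exact strings used):

```python
def is_secondary(pod: dict) -> bool:
    """A pod scheduled onto revocable resources carries a 'secondary' annotation."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    return annotations.get("scheduler/tier") == "secondary"

pod = {"metadata": {"annotations": {"scheduler/tier": "secondary"}}}
print(is_secondary(pod))               # True
print(is_secondary({"metadata": {}}))  # False
```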
A: Okay, thanks. Sorry, I thought at one point that the number of slides was much larger than what we saw, so I wasn't rushing you. I guess we have a little bit of time if people want to ask questions, if we just save ten minutes at the end for the other topics. If people had any questions about it, I think we have time for that. I did.
D: Well, this is how we do the experiment; I don't have the exact numbers with me right now. We do the experiment this way: we use the Google cluster data to create some static workloads of different types, then we start all those workloads and, for some latency-critical applications, we measure the latencies. Then we enable the resource oversubscription, and it allows us to...
D: It allows us to deploy more workload onto the existing cluster, and then we measure that extra: we see what percentage more workload we can deploy onto the cluster. That is basically what we got: we try to deploy more workload into a cluster without sacrificing the latency of the latency-critical jobs.
F: Not in the past, but we wanted to find out. I think Rohit mentioned this as well: there is already some PR in this area, which we think has some issues. So we wanted to find out where the energy is: is this something we should kick-start a project process for, or something where we should collaborate with the existing effort? That's what we need to find out, actually. Yes.
F: The whole thing we wanted to demonstrate: essentially, what we've done is implement the Heracles paper, though obviously we didn't implement all of it, and some of the pieces are missing. So that could be a starting point, or is there something we should work on with the existing issues? Great, thank you.
G: In terms of having some sort of externally provided isolation policy, that's an ongoing discussion. That topic came up in the resource-management workgroup face-to-face last week, and currently it's not on the agenda for anybody to work on. What is on the agenda is trying to differentiate what the different QoS classes mean and providing some more incentive to be a guaranteed pod, and there's going to be some work coming out of that around CPU management, at least.
F: The thing is, from our perspective, we already have the basic code of the implementation in place, so we don't mind; we would love to do that. We can start a PR or whatever, and then maybe start the process, and other people can start contributing towards that, if it makes sense, I wonder.
G: I don't want to stand in the way of anyone doing work, so I wouldn't block it; I don't have the authority to do so anyway. But what I'd add is that some folks in sig-node...
A: I think the docker pause thing they probably would not be happy about, but the rest of the stuff might be reasonable. It didn't sound like there was much change to the kubelet, but yeah, so maybe they should present this at the sig-node meeting, since all of this is really sig-node subject matter.
A: Does anybody have an opinion on that, or object to it? The advantages are that it's a little more convenient on the East Coast, because the meeting would end at 5:00 p.m. instead of 6:00 p.m., and also, although nobody from Europe seems to ever come to these meetings, it would be more feasible for them, because then it would end at 10:00 p.m. instead of 11:00 p.m., so maybe somebody from Europe would come, although nobody has ever really expressed interest.
A: That would be great, thanks. So it doesn't sound like anybody objects to moving the meeting an hour earlier, so I think we should provisionally say we will do that. I'll check again on the mailing list to make sure nobody objects, and then we can confirm it for next week. And we will be recording the meetings religiously.
A: We did last time, and it seemed to work, and I'm doing it this time, so folks who can't attend can watch the video, and I'll upload those to YouTube following whatever the procedure was that we're supposed to be using. Tim, we have two minutes left; do you want to say something about the test failures that you mentioned at the beginning? No?
C: Just that, as a point of parliamentary procedure, I think we should probably outline them every time we have a meeting. That way someone can go fix them, in case they weren't paying attention, because last time there were issues and they just kind of hung around for a long time. Okay.