From YouTube: OKD Working Group Meeting Oct 15 2019
Description
Full meeting recording from Oct 15 2019
Co-chairs: Diane Mueller and Christian Glombeck
Links to the meeting agenda and resources here:
https://github.com/openshift/community/projects/1
The next meeting will be Oct 29 2019; details here:
https://github.com/openshift/community/blob/master/README.md
Join the mailing list here: https://groups.google.com/forum/#!forum/okd-wg
A: Hello everybody, and welcome again to the OKD working group meeting. We'll give everybody a few minutes to join. The link to the meeting notes is in the chat, and I'm sharing my screen, so I'll let you see the notes as we go through and pull the agenda from a few things here. So we'll give everybody a few minutes and see.
A: Anyone can propose a topic and, as we can see, if you put an issue in the community repo we can pick it up. These were the proposed topics from last week. Charo reached out and said that he wasn't going to be able to come today. Sam is coming and is going to talk a little bit about cluster self-management with etcd today.
A: Going forward, if you have a topic you want to get on the agenda, just make an issue and tag it with the OKD working group proposed-topic label, and it will appear magically in the community project. We will set it up that way going forward, as well as post other meetings. It's been used successfully by a few other working groups and SIGs, and I'd like to try that for this, if that's okay with folks. Any feedback — any objection to trying this out?
A: If everybody could just self-mute, and then when you want to talk just unmute yourself, that would be great. So I'm going to let our fearless other co-chair, Christian, give an update, and I think the first thing you wanted to talk about was the OKD roadmap CFA.
B: Yes. I just came back from holidays, so I don't really have a huge update on the FCOS side. However, I sent out the OKD roadmap as a call for agreement. It's the roadmap we already agreed on preliminarily in the working group, and now it's just about making a formal agreement on it. That's on the community repo — I think it's PR number 55 or something — and I sent out an email to all the mailing lists about this as well.
B: It's the same roadmap that we all should already be familiar with, so not a huge update on that side either; just formalizing this. On the FCOS side, I'm still working on getting the CI up for builds of OKD on Fedora CoreOS, and for that I'm currently writing an enhancement proposal to get that all done in a very formal way as well, reviewed and accepted by the entire organization, so we can really get some buy-in from all the teams on this.
A: Not hearing any, so I put the link to the pull request in here, and as it says in there, if you can indicate your vote as to accepting or rejecting it, that would be great. Hopefully we can get this bit of work done in the not-too-distant future — the sooner the better. So, on to today's topics; let's see if I can actually use this community thing that I've set up.
F: It seemed like the discussions around this were kind of bifurcating a little. I don't think anybody's in disagreement — or at least I didn't hear anyone in disagreement — about CRC being great for development work, but I had seen some discussion here, and I guess also internally at Red Hat, about a true all-in-one deployment, kind of similar to what we had with OpenShift 3. So I'm just kind of curious: were we still thinking about trying to push in that direction, or do we just kind of follow —
B: Exactly. So yeah, I don't know who can take the question.
A: I mean, I tend to think we can talk to Charo about what's going on with the CodeReady thing, and he agreed to speak at the next working group meeting about that, so maybe bring it up there. I think — and this is my interpretation of all the smoke signals that I've seen — that it's going to come from using CodeReady Containers.
A: So I think that maybe what we could do is move on to Sam — and I'm going to kill your name... that's pretty good!
C: So my name is Sam Batschelet. I'm the etcd team lead at Red Hat, and I just wanted to take a little bit of time to go over the evolution of etcd in regards to cluster self-management. Basically, the things I'm going to hit on are the core functionality bits that are utilized in 4.1 and 4.2, to show the positives and negatives of those and then how we addressed them.
C: Moving forward — as some of you may know, a cluster-etcd-operator is slated for 4.3, and we're pushing hard to meet that. So that's the gist; I'll get started. The first thing I wanted to talk about is basically how etcd bootstraps today. Today we use SRV discovery to facilitate the etcd server knowing who its peers are, allowing it to cluster in that way.
C: So this is an internal mechanism in etcd server. As far as the bootstrap process goes, I'm assuming you are familiar with how that works, but I'll touch on some key points. Basically, we have bootkube, which is running on the bootstrap node, and in order for us to progress and to deploy the CVO, we are waiting for the etcd cluster to bootstrap — so there's a fair amount of time for the nodes to come up.
C: The MCO lays down configurations, etcd bootstraps, and then we'll progress so that the rest of the operators can continue. If you've ever tailed the journal log on the bootstrap node, you'll see the "waiting for etcd cluster" message. Basically, what we're doing now is an etcdctl call to check cluster health and endpoint health for all three members, and once those return success, we progress.
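As a rough illustration (not the actual installer code), the wait described above amounts to polling each member's status until all respond. Here is a minimal Go sketch using the 3.3/3.4-era clientv3 API; the endpoint URLs are placeholders, and the TLS setup a real cluster needs is omitted.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Placeholder member endpoints; TLS credentials omitted for brevity.
	endpoints := []string{
		"https://etcd-0.mycluster.io:2379",
		"https://etcd-1.mycluster.io:2379",
		"https://etcd-2.mycluster.io:2379",
	}

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// A member counts as healthy if a Status call against it succeeds;
	// bootstrap waits until all members report success before progressing.
	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		resp, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Printf("%s: unhealthy: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s: healthy at revision %d\n", ep, resp.Header.Revision)
	}
}
```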
C: So SRV is great, right? It allows us to know a lot about our surroundings from basically just a domain. So this is an example — mycluster.io. If we were to do an SRV lookup, it's going to return the endpoints for the cluster, and this is great: it works fantastically in clouds. Generally, when we're using Route 53, etc., it's very reliable. But there is a situation that can exist where, maybe for whatever reason, DNS is lagging and we don't have all the endpoints.
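To make that lookup concrete, here is a small Go sketch using the standard library. The mycluster.io domain is just the example from above, and the etcd-server-ssl/tcp service naming follows etcd's documented DNS discovery convention, not anything shown in the talk.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// etcd's DNS discovery convention queries SRV records named
	// _etcd-server-ssl._tcp.<domain> (or _etcd-server._tcp without TLS).
	_, records, err := net.LookupSRV("etcd-server-ssl", "tcp", "mycluster.io")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	// Each returned record names one peer endpoint of the cluster; a lagging
	// DNS server may return fewer records than the cluster actually has.
	for _, r := range records {
		fmt.Printf("peer: %s:%d\n", r.Target, r.Port)
	}
}
```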
C: This is a failure case that is, you know, unfortunate, and something that we wanted to address moving forward. So in the future, one way to move around this is that we no longer rely on SRV internally in etcd server for discovery of the cluster — we are actually going to scale the cluster from the bootstrap node. So one distinct difference between 4.1/4.2 and the proposed 4.3 would be that we start an etcd instance on the bootstrap node.
C: We were just speaking with the bare metal guys, and some other IPI-type installs. We're hopeful that this circumvents having to wait on the bootstrap node, because we can basically scale up as soon as the records are available, on a per-node basis. So if one node is a little slower, it could be the last one to come up. Instead of waiting at bootstrap, we wait later in the process, and hopefully — from our tests — that works out.
C: The second part I want to talk about is certs. Right now, etcd has its separate chain of trust for peer and server, and then also an additional separate chain for metrics, which allows us to isolate the KV store away from users directly. Basically, we only want the API server communicating directly with etcd.
C: Today, the etcd cert signer is stood up on the bootstrap node, and we're basically spoofing the certificate signing request v1beta1 endpoint on the bootstrap node before the API server is available. As we scale up during bootstrap, we're making those calls to the bootstrap node, and once that's complete, we tear it down — the API server comes up, and because it also has this same endpoint there would be a conflict; we're actually using the API server port as well. Just a couple of other things to hit on.
C: If we have a single member failure — and unfortunately we can see that; we just recently had a bug in etcd on the gRPC layer that could cause corruption on the data store — it could basically wipe out a single member on a reboot. Currently, it would be a disaster recovery process to return that member to the cluster in 4.1 and 4.2, which is not really elegant, and basically day-2 scaling would also be a DR operation.
C: We don't really support larger than three nodes currently, but if you wanted to, for some use case, scale to five, for example, it's not something we could do without manual intervention against the cluster today. Here's just a quick view of how that works — this is a visual of how, during bootstrap, the master node sends a CSR request to the etcd cert signer, again listening on that v1beta1 CSR endpoint, which then returns the certs for the peer, server, and metrics. And this works.
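As a sketch of the client half of that flow (the actual OpenShift code is not shown in the talk), generating the PEM-encoded CSR a master would submit to the signer might look like this in Go; the subject common name and DNS name are made-up placeholders, not the fields OpenShift actually uses.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"os"
)

func main() {
	// Generate a key pair for this etcd member.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Build the CSR; the subject and SAN below are illustrative placeholders.
	tmpl := x509.CertificateRequest{
		Subject:  pkix.Name{CommonName: "etcd-peer:etcd-0.mycluster.io"},
		DNSNames: []string{"etcd-0.mycluster.io"},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &tmpl, key)
	if err != nil {
		panic(err)
	}

	// The PEM blob below is what would be submitted to the cert signer,
	// which returns signed certs for the peer, server, and metrics chains.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
}
```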
C: So in the future, the operator handles signing, as far as cert generation goes. If there is a failure or restart of a node, IPs change, etc., we spin up those certs on the fly. Also, scale-up and scale-down are managed by the operator, and one of the coolest things that we've just recently finalized is single-member failure resolution.
C: So let's say, for example, we had a single member failure in a three-node cluster, so we were degraded to two. The operator sees that that pod is in a crash loop — the etcd member is crash-looping — and it will actually remove it from the etcd cluster and restart the static pod. Basically, we pull the spec from the MCO's config and we delete this pod.
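As a rough sketch of those mechanics (not the operator's actual implementation), removing the failed member so the regenerated static pod can rejoin might look like this with clientv3; the endpoint and member name are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

// removeMemberByName finds a cluster member by name and removes it, so the
// regenerated static pod can rejoin with a fresh data directory.
func removeMemberByName(cli *clientv3.Client, name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	resp, err := cli.MemberList(ctx)
	if err != nil {
		return err
	}
	for _, m := range resp.Members {
		if m.Name == name {
			_, err = cli.MemberRemove(ctx, m.ID)
			return err
		}
	}
	return fmt.Errorf("member %q not found", name)
}

func main() {
	// Placeholder endpoint; a real operator would use the cluster's own
	// URLs and TLS credentials.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd-0.mycluster.io:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	if err := removeMemberByName(cli, "etcd-1"); err != nil {
		fmt.Println("remove failed:", err)
	}
}
```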
C: That's pretty exciting. And then I also just wanted to hit on a few roadmap items for etcd itself upstream. Currently we are at 3.3.10, and there's been a lot of change come in since then. Basically, 3.3.14 and 3.3.15 — which was adopted upstream for kube — address a couple of big problems that we've had. One of them is the client balancer: we really had a horrible problem dealing with the etcd client balancer, and failover wasn't really working well.
C: We actually ended up having to do wildcard certs for the peers so that this would work, and it really just makes a mess of logs, etc. Also, because of the old gRPC version, there was that bug that I spoke of where we can basically lose an etcd member through a bug in gRPC. And then we had some really nice performance work happening on bbolt, which is our KV store for etcd.
C: So Alibaba actually has a fork of etcd with which they run their database at like over 40 gigs, right — etcd's hard limit right now is 8 gigs, but they're running 40, 50, 60K clusters, and to do that they've done some optimizations on the KV store and some other areas, using different backends. This was something that they submitted upstream and were really excited about, and it'll allow us in the future to support larger databases for etcd. That comes in 3.3.17, which I'm going to be rolling into 4.3.
C: We were pinning 3.3.15, but we're going to do 3.3.17, so we'll get all of these fixes in there. In 3.4 we have some exciting stuff coming in — this has not been adopted upstream yet, but it's what we're pushing for. So, fully concurrent reads: right now, if we have large reads against the store, we can basically end up taking locks against writes. So if we have a lot of action going on on the read end, reads can block writes.
C: That is obviously a problem under stress; it's something we've been trying to fix for a long time, and it is fixed in 3.4. And then we also implemented raft learners. I'm not going to get into all the raft learner details, but basically it's a non-voting member that does not count against quorum. It really helps the cluster in dealing with situations where we want to scale but don't want to cause the disruption that a member add would cause today. So that's just at a high level.
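For context, adding a learner and promoting it looks roughly like this with the 3.4 clientv3 API; the URLs are placeholders, and a real flow would wait for the learner to catch up with the leader before promoting it.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Placeholder endpoint; TLS credentials omitted for brevity.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd-0.mycluster.io:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Add the new node as a non-voting learner; it replicates the raft log
	// without counting against quorum.
	add, err := cli.MemberAddAsLearner(ctx, []string{"https://etcd-3.mycluster.io:2380"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("added learner %x\n", add.Member.ID)

	// Once the learner is in sync with the leader (a real flow would poll
	// and retry), promote it to a full voting member.
	if _, err := cli.MemberPromote(ctx, add.Member.ID); err != nil {
		panic(err)
	}
}
```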
C: 3.5 is unreleased, but it has raft learner failover mode, and this will be huge for the OpenShift etcd operator, because we can fail over to a learner. Basically, if we have a failed member, we can just bring up one of these learners that we can have running in the background, and we will automatically fail over to it — so losing quorum becomes a lot harder thing to achieve, and we can self-heal much quicker with less disruption, and we can prepare for that earlier in the process. So, some exciting stuff.
A: Awesome. Are there any questions for Sam, now that we have him here and are grateful for it?
G: [inaudible question]
C: Well, I mean, it's a good point. It's like an upstream/downstream cycles thing, as far as our ability to do all of this testing against kube. Unfortunately, at this point we really rely on kube itself to test these pairings. Obviously, as far as OpenShift is concerned, we've run a tremendous amount of cycles in the CI testing this — you know, when we bring in, for example, 1.16, then we're bringing it into 4.3.
C: It's just a cycles thing; it's physical manpower. I mean, we try our best to work with Google and Amazon — Joe Betz and Gyuho are maintainers, and those guys are both involved pretty heavily in pushing those along. You know, our etcd team just went from one — being myself — to three, so in the future we will try to be more involved in that. But it takes time.
A: Opening up for a second — I think I have people coming in from another meeting, and if people do, I'm just going to drop them and give them another room to enter, so apologies for that; I'll get that going in the background here. I didn't realize they were using my room for that. So keep going — and I had one thing, if there are no more questions for Sam. Are there more questions for him?
A: I'm looking quickly here, and I feel like Christian is the bottleneck for the FCOS work that we need for OKD, and I wanted to talk about how we can help him move forward a little bit more quickly. I know you were on PTO, so it's not that you weren't working — you just weren't working last week. But maybe you could talk about what you've requested.
B: So right now there are a few different places we need to work on to get OKD 4 out, and I was thinking it might be — because, yeah, I'm the only one working full-time on this. Essentially we have Steve Greene, who was working on it part-time; he did an internship on the MCO team and now he's back at uni, but still does work part-time for us. Yeah, other than that — what I was thinking was:
B: As we're not there yet, I haven't really thought about how we will approach this. One thing we could do is just not promote the RPMs into Fedora proper and still use them for our composes, which isn't really the prettiest of ways, but it might just work. In the longer term, we definitely want to find a way — or that's what I want to find — to continuously build RPMs from the upstream repos and have them automatically, or semi-automatically, pushed into the Fedora repos.
B: I poked them, but apparently that isn't that important anymore at this stage, so they sort of stopped working on adding a Koji build right now. It's very useful for doing RPM builds on Copr continuously, but if we want to do it in Koji, that's not really possible with Packit yet — they don't put a lot of focus on it right now, as far as I know.
H: You can't do that with a Copr build. I haven't actually used Packit myself, but it would be good if we take an action item there to gain some clarity around it, because my understanding is that the whole point of it is to be able to go quicker from upstream, to RPM build, to update, into Fedora — and hopefully have automated tests along the way, so that we remove manual steps.
G: Potentially, because when I met the team at Flock, and before that at Red Hat Summit, they were very much about pushing things into dist-git, getting it to automatically push and make a Bodhi update, have it run tests, and, if everything is all gravy, automatically get from Bodhi pushed out to release. So the whole point of it was to make it as uninvolved as possible to push new releases into Fedora proper. So I don't know what's going on here, yeah.
B: I just quickly put a link in the chat; I think that's the relevant issue here, upstream on the Packit service repo. So yeah, that's definitely one thing. And then also, once we have this, we'll need someone to sort of feel responsible for the RPMs and see to it that they always build successfully and everything. So that's why I'm thinking that might be —
B: It might be good to have a Fedora CoreOS liaison in the OKD working group as well. And then there are the installer team and the machine-config-operator team, which I would love to see liaison engineers from as well, because there's just a lot of work that will need to happen there in order to support Ignition spec version 3.
A: Alright — I don't know anybody on the Packit team to ask. Neil, though, if you can perhaps log an issue in the community repo, link that Packit issue into it, and track down somebody for the 29th — or just start an email thread and get them on the OKD working group list — that would probably be very helpful. I can —
A: Not hearing anyone yelling out — the only other curious thing that happened while we were between meetings was the announcement around CentOS Stream, and I don't think that has any impact on what we're doing here, but we'll see where that goes. Yeah, CentOS Stream — I can add the links to some of the announcements there.
A: Stream, yeah — not Steam; Stream. And we'll see if there are any additional resources being added to that to do any work around CentOS, but I don't think so.
B: I don't think we need to put any work into that right now. I have been told by some community members, though, that they would like to see OKD on CentOS Stream — essentially a build with CentOS Stream packages — which is an interesting idea, I think, and we should definitely look into that again in phase 2, after our initial release.
A: The people who have come up and mentioned that — that's exactly the message I've been giving them: that the FCOS work will give them a wonderful pattern and a good resource, and you and others, to figure out how to do that for CentOS, because there is still some interest in OKD 4 on CentOS. So I definitely don't want to ignore them.
B: And I think, once we have everything working on top of Fedora, and when Red Hat CoreOS and OCP have also moved to Ignition spec 3, it shouldn't really be hard to adapt that to the CentOS packages, because it's essentially all the same. So I think it makes much more sense to just do the groundwork on spec 3 and the RPMs right now, and then later on have a look at CentOS, yeah.
B: Well, cosa does that for OCP, for Red Hat CoreOS, right now, but it's a patched version of cosa, and we're definitely moving to spec 3 with OCP as well — I think by version 4.4; it's planned right now. So that's definitely coming, and yeah, I don't think the FCOS team would like to work on cosa to make it backwards-compatible with spec 2, right? I mean —
G: Yeah, but between the Ignition changes and all the other things that are going on — like, RHEL CoreOS right now has a weird snapshot of Ignition to support spec 2 with tiny bits of spec 3 add-ons, just enough to work, and that is not in a released version of cosa; that's patched for their use. It's not easy, feasible, or potentially maintainable to do it that way. So either we backport Ignition and spec 3 to CentOS and then start from there —
A: So I can give you all back 15 minutes of your time. I'm proposing that our next meeting is October 29th at the same time, and I will get that update, the notes, and the recording of this out to the mailing list tomorrow, when I land back at my home office and have some bandwidth. And if that's everything we need to talk about — Christian, anything else we need to bring up today?
G: In two weeks — I mean, I can tolerate missing one more stand-up. And then, when November comes around and we flip back to Eastern Standard Time, it's back to me having to show up during lunch for these things, which is fine. But once we flip back to daylight savings time in, what is it, May, I think, then it's going to be a little bit more troublesome unless I somehow reschedule everything again, and I'm not sure I'm going to be that lucky.
H: I think a lot of people are saying that sticking with UTC when the daylight savings switch happens is, maybe not best for most people, but less confusing. You know, that's what we do in the Fedora CoreOS group and what we did with Atomic Host. But it's not a solution for everybody.
A: Well, I'm going to let you all go, and I will post all of these notes, and I will listen to the recording so I can hear where I dropped off, and we will meet again in two weeks and hopefully keep moving this all forward. So thank you all for coming, and please do reach out if you need anything or have other topics.