From YouTube: Kubernetes Community Meeting 20151001
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
This week included demos of new tooling for deployment, kubectl edit and kubectl apply.
So here we can see it created this RC, and it named it with the deployment name and a hash, and it set the replicas, and soon we will see new pods come up against my specification. No, I'm not watching it; it's polling. We will change it to that.
So now we see it says that I have three updated replicas. So this was simple.
Now we see that it created a new RC for it, and it's set to 0 replicas right now, and the old one is actually... so now it will start scaling up the new RC and scaling down the old RC. The way it works is that there are two main parameters, maxSurge and maxUnavailable, which are one by default.
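For reference, a rough sketch of where those two knobs sit on a deployment spec of that era (extensions/v1beta1 syntax; names and image are placeholders):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1        # at most 1 pod above the desired count during the rollout
          maxUnavailable: 1  # at most 1 pod below the desired count during the rollout
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx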
Sorry... Joe?
Yes, so there are a lot of things for that.
So, I've got a question here. Are deployments designed to handle, like, multiple canaries? So if we want to just, you know, drop something, say an RC that's going to handle ten percent of traffic, and see how it goes, and then, you know, adjust it... or is that kind of use case really just transitioning from one RC to another?
So it doesn't support canary today, but we do want to support that. The next thing to be landed is a parameter so that it actually pauses after creating or updating, like, a percentage of those pods, and then you can do your thing, like eyeball it, verify a few things, and then continue the deployment. Yeah.
So there are really two different modes that people tend to use. One is what Nikhil described, where you think you want to roll it out but want some period of sort of testing it on a subset. The other is just having multiple release tracks indefinitely: like having one track that revs every time you get a green build, and then another track that is the bulk of your capacity, running a more stable release.
The transition plan for graduating APIs from experimental is a topic of discussion later today. I think we do actually have a proposal that we're gonna discuss and hash out a little bit; in particular, we're gonna discuss how it impacts GKE. But the proposal is definitely being discussed in an issue on GitHub, so I can dig up the number.
...or does something figure out that it hasn't changed, that the data hasn't changed, and stop it from doing the update? It's one thing if it just pushes a copy of the same thing, but it's another thing if, like, imagine someone else does the same thing. Or imagine I get a long way in, or I do lots and lots of stuff with it, and okay, I can't actually bring it back; so if I quit without saving, then...
In the future: OpenShift actually has an operation called export, which clears a bunch of fields. We're probably going to need to implement export; they put it in the client, we'll probably implement it in the server, and then use that to populate this. What happens if you change something you shouldn't change? Like, I don't know...
Hello? They're fine.
In my edit, I can make the uid be 'fubar', right? And I think that I've changed it, at some level, right? And it's going to go to the server, and the server's gonna say 'yep, good job', and it's not going to actually change anything. And so that's going to violate, like... it's the same problem. Just because... sorry.
If you leave it blank, at least you're not motivated to change it; so it's actually totally fine to leave things blank, because what this is actually doing is a patch, right? So, you know, the right thing to do would be to strip out everything that the user can't set and shouldn't care about.
One thing, and I think this was discussed in a separate bug: we should extend the YAML codec to generate comments based on the description field that is in the Go tag. It would not be that hard to do, because we already have that. Like, when you're doing this, it's useful to know what these things mean, right? And YAML supports comments, right? So we can grab that field out of the tag on the Go object and then just shove it in.
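A hypothetical sketch of what such commented output could look like, with the comment text drawn from the description tags on the Go API types (this is the proposal being described, not something the YAML codec did at the time; the description wording is illustrative):

    apiVersion: v1
    kind: ReplicationController
    spec:
      # Replicas is the number of desired replicas.
      replicas: 3
      # Selector is a label query over pods that should match the replicas count.
      selector:
        app: nginx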
I think that's a good idea. And, concretely, I would actually like to integrate edit with kubectl run and kubectl expose, so that you can say 'kubectl run ... edit' and it'll do all its gen... it's doing this whole generator thing, but then, instead of just shoving it at the server, it'll kick it up in an editor for you to tweak manually. We...
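A rough sketch of that workflow approximated with separate commands; the integrated 'run ... edit' flow is what's being proposed here, and --dry-run plus -o yaml are used only to illustrate the generator step:

    # Run the generator without creating anything, capturing the generated object...
    kubectl run nginx --image=nginx --dry-run -o yaml > nginx.yaml
    # ...hand-tune it in an editor...
    vi nginx.yaml
    # ...then submit it to the server.
    kubectl create -f nginx.yaml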
Anything else? Cool. Is there anything else on these? Jack has a topic. All right, Jack, you're up.
Yeah, I just wanted to bring up the topic of sort of reporting status from the SIGs, because we've been holding regular meetings on the scaling stuff, and I was just wondering if we want to start a pattern, or a schedule, of, you know, active SIGs giving updates.

That would be great. How about we do that? I don't know how long it's going to take you to set up, Jack.

Oh, five minutes. Okay.
Alright, so the scale SIG. I've got my notes up. So Quinton's going to be recording some stuff that'll be shared inside of a Google doc; we'll figure out if we can actually record those meetings and share that stuff more publicly. I think we've been using Hangouts still, but we switched to a Google-hosted Hangout, which has a higher limit.
So the Samsung guys, you know, and I know Bob's... I think Bob's on the line here... have been running large-scale tests at a hundred and a thousand nodes, and they've been seeing some performance issues that the Google folks haven't been seeing. And so a lot of the focus has been on sort of capturing the state of a config, and whether it conforms to some sort of definition of conformant or not, and so a lot of effort has been going into trying to verify that.
So you go up to a cluster and say: give me everything you know about this cluster, in some sort of, you know, 'stick it into a gist' form, or be able to diff it, so that you can figure things out. And this is sort of a half-step to getting to, like, one config file for the whole cluster as input, right? Being able to at least get, you know, all the command-line flags, all the things.
That's useful; it's going to be useful for debugging a cluster and being able to figure out if there are problems. And then the Samsung guys have been running density tests, and they've been looking at sort of the startup time for the density test to bring up 5, 10, or 30 pods per node, but at the hundred- and thousand-node sizes. The Samsung tests have been running on AWS, so that might be some of the difference.
I know that the Red Hat folks have been running some scale tests also and haven't seen some of the same issues here, so we're trying to get to the bottom of it. In v1.0.6 they're seeing about 10 pods a second scheduled. The theory is that you're not going to get any faster than that without moving the scheduler to be less...
You know, Daniel has some ideas there, but we want to get some other issues figured out before we start really tuning that; there might be some rate limits. And also, with head, we're seeing things be significantly slower. I don't have the numbers, but the pattern looks like the pods are getting submitted...
...fine, and once the scheduler binds them to a node, the kubelet is picking it up and applying it fine. But something's going on in the Samsung setup that isn't being seen with GCE at head, where it looks like the scheduler is working in batches and taking some big pauses in between, and so the graphs there look like they're stair-stepping. So you may see some chatter about stair-stepping, and that's what we're talking about there. There's been some discussion on sort of, like, is ten pods a second, you know, good enough or not.
It depends on sort of how fast we can get without making some significant changes to the scheduler. And then, yep, so that's sort of a summary of where some of these things are at. The idea here is that once we understand what's going on with the Samsung cluster, we can start...
...you know, tuning stuff for a large cluster, figuring out where the bugs are, and eventually, perhaps, you know, posting something with real numbers that we're confident in, you know, numbers that we're proud of, after we get some of these issues figured out. So, quick summary there. Hopefully you're ready, Jack. I'm...
My understanding is that the thing that Wojtek hit was kubelets, after they actually start a pod running, having some latency in reporting that back. The stuff the Samsung guys have seen is actually latency in terms of the scheduler taking a pod that's been submitted, scheduling it, and turning it into a bound pod. So it's a different part of the pipeline.
So now, to be clear: Samsung's also hitting issues as they go above 30 pods per node, with an eye toward getting to higher and higher, you know, density on a particular node; Red Hat's not hitting some of these same issues. And the failure mode there is one of seeing some random EOFs, and things seem to really fall over in a sort of bad way.
So at some point it makes sense to try and make sure that there's some sort of sane failure mode when you try and cram too many pods onto a single node. I think, you know, there's an indication that something gets overloaded, starts timing out, and sort of everything kind of falls apart. Well...
I'll just say, this is Bob: we've just been a little bit cautious about opening things like this, just because we want to make sure that we have all of our ducks in a row and have all the conformance tests working, that we haven't done something foolish along the way. So that's the only reason we...
Maybe we could just use some advice, just sort of expectation setting about whether we should be aggressive about opening these kinds of tickets, even if we're not, like, a hundred percent lined up on our internal reproducibility and so forth, or...
...going to take it seriously, and they're going to try and investigate it, and so that's why. At the same time, like, Dawn has strong experience here, right, and so she can help and say, like, 'have you tried looking at these four things?', and if that helps you get over a hurdle, that's worth doing, right, rather than having you relearn everything she already knows. Okay.
Thank you for that reminder. Yes, I'm going to show you something called diff-and-patch, otherwise known as apply. So far, all I've done is create a replication controller. What I'm going to do is apply a patch to it by doing a diff with a new config, and the whole idea here is that the user is keeping around a bunch of configuration files, and they want to be able to just edit those files and push the changes in those files into the cluster. They don't want to have to manually construct a patch, right?
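A minimal sketch of the flow being demoed (the file name and its contents are placeholders):

    # Create the replication controller from a local config file.
    kubectl create -f ./nginx-rc.yaml
    # Later: edit the same file locally, then push the change as a computed patch.
    kubectl apply -f ./nginx-rc.yaml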
I'll take that as a yes. Okay. One of the interesting things about this is that it has to know what the previous configuration was in order to come up with that diff. So here I've just created a replication controller, and if I go and fetch it, you'll notice that there's an annotation on it. That's this annotation right here.
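Fetching the object back shows something along these lines (annotation value abbreviated; the object name is a placeholder):

    kubectl get rc nginx -o yaml
    # metadata:
    #   annotations:
    #     kubectl.kubernetes.io/last-applied-configuration: |
    #       {"kind":"ReplicationController","apiVersion":"v1",...}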
What happens is, when you make a change to the configuration of a resource, whether that's via create, replace, edit, run, or expose, or any of those pathways, this annotation will be placed on the object, capturing its current user-specified config. This allows us to diff with what the user asked for the previous time, so we can determine what they've removed, what they've added, and what they've changed.
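For example, swapping one label for another in the local file yields a strategic-merge-style patch shaped roughly like this (a sketch; the label names are the demo's placeholders):

    # old file: labels: {role: master}    new file: labels: {testkey: testvalue}
    # computed patch: a null deletes the old key, and the new key is added
    {"metadata":{"labels":{"role":null,"testkey":"testvalue"}}}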
Well, this is the patch. You can see here that there's a role=master label that's been removed, right, and a new label that's been put in its place: testkey=testvalue. So we should see the removal of the existing role=master and the addition of the testkey=testvalue label in all three locations when we apply this new configuration. Any questions there? Okay, so what we'll do is go ahead and apply this, and the syntax for apply is just like the syntax for get: you do a kubectl
apply -f, like, from a file, or you could do stdin, and it simply takes the new configuration, diffs it with the existing one by pulling the object from the server, pulling out the annotation, and doing the diff, computing the patch, and then it sends that patch over the wire to update the object. So at this point we've now configured this replication controller with the new config, and so now, if we go and fetch it again, we'll see that its labels have been updated as expected, right, and the annotation has been updated.
Currently this uses patch, and currently patch does not have a resource-version-based precondition in the API server. OK, so we can get sporadic random failures, because behind the scenes the API server... I mean, yes, yeah. So anyway, the semantics... it doesn't fix it in the middle; well, if it just fails the precondition, that's also going to break this, right?
In general, if you have multiple people changing stuff with patch, yeah, it's problematic.
So we have an issue filed about figuring out what users are going to want with respect to conflict detection. Right, right. So my take is, if that happens, we should have a way of detecting it and conveying it somehow. Either, almost certainly, it's a screw-up, and the user wants the config to actually look like their file, right? On the other hand, if someone else made a change which conflicts with that, then that's most...
Yeah, so that's fairly low-hanging fruit from here, because we're already in a position where we can diff the existing version of the object with the patch being applied and detect conflicts. So, currently, the way this is implemented there is a force flag; it doesn't do anything, because it always forces, but dropping in the conflict detection should be a fairly straightforward operation from here.
So Jack actually went through and... also, let's say you specified it in your file and something goes and does a scale. I believe Jack made a change to update the annotation, but there's still the issue: you need to go and update it in the file too. Otherwise it's going to... sure. But if you don't specify it yourself in the first place, and...
That doesn't solve the problem either, because if you have things which are directly modifying the resources, then they have the same problem. So, fundamentally, the way this is designed: the user has to decide what scope, what things, they actually want to configure using this mechanism, and they specify those, and everything else is reserved to be updated by other systems or tools, right? So that is so...
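As a sketch of that division of ownership: if scale tooling or an autoscaler owns the replica count, you simply leave replicas out of the file you apply, and apply will never fight over that field (manifest abbreviated; names are placeholders):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      # no 'replicas' key here: the autoscaler / scale command owns that field
      selector:
        app: nginx
      template:
        # ...pod template as usual...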
You know, whether you want to use scale to scale it, or you want the autoscaler to scale it, or you want a vertical auto-sizer, or you want to have your own custom scheduler set the node field, or whatever: don't specify those in your config, and then, whether it's through apply or through, you know, something like Deployment Manager, whatever the mechanism, it's the same model. That's...
But yes, we need to... we have a section of the user guide that discusses how to do updates, and yes, right now it says: in this situation use replace, in this situation use patch, in this situation use rolling-update. But the reason we did all these demos together is because, together, they make a pretty compelling user story about how you can update everything declaratively.
B
Okay,
so
you
can
specify
your
deployment
using
the
deployment
in
api
and
then
you
can,
if
you
don't
want
to
keep
a
pile
of
files
and
get
or
whatever
for
your
configuration,
you
can
just
use
that
and
to
update
it
and
it
just
happens
and
it
does
the
rolling
update.
You
don't
need
to
think
about
doing
different
kinds
of
update.
You
can
always
just
you
set
it.
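A sketch of that declarative loop against the hypothetical deployment manifest from earlier: change only the desired state in the file and reapply, and the rolling update happens for you.

    # Bump the image tag in the pod template, e.g.
    #   image: nginx:1.9  ->  image: nginx:1.10
    # then push the new desired state:
    kubectl apply -f ./nginx-deployment.yaml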
I would imagine that, you know, if we imagine that there are three different ways that you can do this, we write a chapter that says: hey, if you want to manage your configuration, here are the three different ways that you can manage this configuration; if you're doing it with this style, you use these tools; if you're doing it with that style, use those tools; and here is how you convert from one to the other.
If you've decided that you want to upgrade, like, that very cookbook sort of 'what do I need to do to get started here?' Getting that documentation in would, I think, really help, and it might suggest better names for this, better ways to actually present this in the tools, that type of thing.
What I'm suggesting here, Brian, though, is that before stuff like this gets checked in, somebody actually goes through and thinks about this from the point of view of: what is the user experience around this stuff? Just, you know, throwing some stuff into kubectl with a couple of paragraphs of help inside of kubectl is not, you know, going to get Kubernetes where it needs to be in terms of user experience.
Agreed, agreed, regarding the documentation, Joe, and to Brian's point: you know, this should go into the user guide, in the chapter that's already there. I just want to also make a footnote, for those that don't know, that we've been working on an open-source version of GCP Deployment Manager, and we have that now running in Kubernetes. It's written in Go and Python, and it allows you to have parameterized templates and to define types in terms of templates, and it's pretty sweet.
I'd also just encourage... I think that we don't have room to experiment inside of kubectl right now, and I think that, just like... maybe, I know we've talked about this, but we should probably be able to have experimental commands, of course.
I agree with Joe that I don't think we should block building stuff on getting it right out of the box. I do think we should be able to put up fences that tell people when they're off in, you know, 'there might be dragons here' land. Yeah.
But there might be a few others; I can't think of them off the top of my head. But generally it's been where we have an intern that joins the project and we want them to go write something, and we say: give it an experimental prefix first, and then we would give it a closer look afterwards. But oftentimes it's a good place to just see something that's been an issue and try to prove it out, right?
Has anything been moved out of experimental? Right, there's nothing. So that is the reason to start with experimental, no? And...
Also, concretely, from a GA standpoint, and a general standpoint, we need to get the tab completions that we've done so much work on actually working. Well, I shouldn't say 'we' have done so much; Red Hat has done so much work. Because that will also help with people trying to explore.
One other concern... hey Brendan, this is Paul. I wanted to give a brief update on the volume security stuff that we've been doing. Okay, so: the driver for this work has been mostly making volumes work correctly for pods that are running containers that run as a non-zero, or non-root, uid, and we have three proposals for this.
N
The
first
one
is
about
aligning
the
fields
of
the
container
level
security
context
into
a
new
pod
security
context
that
holds
attributes
that
necessarily
apply
to
all
containers
in
a
pod
like
do,
I
want
to
use
the
host
Network
namespace
do
I
want
to
use
the
hose
IPC,
namespace
and
then
also
holding
container
level
attributes
that
we
want
to
apply
to
the
entire
or
to
all
containers
in
a
pod.
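A sketch of what that pod-level block might look like in a manifest (placement per the proposal being described, so exact names and locations may differ from what finally landed; the host-namespace booleans would sit alongside it):

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      securityContext:     # pod-level: applies to every container in the pod
        runAsUser: 1001    # run all containers as this non-root uid
      containers:
      - name: app
        image: nginx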
The second is about introducing a new fsGroup field to the pod security context. That will be the group that owns all of the volumes that support ownership management from the client side: so that's everything derived from EmptyDir, like secrets, downward API, git repo, etc., and then block-device filesystems when they're used in an exclusive mode (I think the access mode is actually called ReadWriteOnce). And the third is about generalizing support for SELinux, so that volumes that support it can be isolated from one another using SELinux.
N
So,
as
opposed
to
the
current
support
that
we
have
four
SQ
Linux,
which
is
basically
if
selinux
is
enforcing
on
a
node,
will
attempt
to
give
empty
beer
volumes
only
right
now,
a
usable
selinux
context
from
a
docker
container,
but
we
won't
isolate
containers
volumes
from
another.
This
third
proposal
is
about
adding
that
selinux
isolation
from
between
volumes
that
belong
to
different
pods.
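And a sketch of the second and third proposals' fields on that same pod-level block (again per the proposals, so names and placement may differ from what shipped):

    spec:
      securityContext:
        fsGroup: 2000        # group that owns the pod's ownership-managed volumes
        seLinuxOptions:      # SELinux context used to isolate this pod's volumes
          level: "s0:c123,c456"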
So, two of those, the first two, landed this week. Tim and Brian have spent a lot of time reviewing them, and I...
...thank you guys for that. The last remaining one is the SELinux one, which I'm hoping will be entering LGTM status shortly, and then myself and Sami on the Red Hat storage team are going to be working on implementing this. So I will put the numbers into the chat, because I've been told recently that discussing things by their PR number is not the best user experience, but 12823 is the first one.
Did you want to say... I came in a few minutes late, so if you covered this at the beginning, I apologize, but, and perhaps we don't have enough time: what's going on with 1.1 and blocking issues? Oh, by the way, I'll just parenthetically suggest that maybe a release SIG might be a good forum. Just a thought. Okay.
So I would say the status is what it was before: it's been cut, we're getting e2e green on it, and we're going to let it soak. There have been a few cherry-picks to it, and there will continue to be cherry-picks to it. There's not much status beyond that, I guess. We're still on track for mid-October.