Description
Now that the project is fragmented across almost 50 subgroups, it can be hard to see the bigger trends shaping the project. These are some trends we see that the project needs to adapt to in order to meet the needs of its users.
Presenter: Brian Grant, Google
A
The historical notes are in the repo. Yeah, so late 2014 was the first contributor summit. It was a small, tight group then, and it's more than ten times bigger now for sure, so it's amazing to see how this has grown and progressed.
A
We really need to improve the focus on code and release quality across the entire project. You know, everybody has their features they want to get in, and some of those features are really critical to things like usability (the bottom bullet there, reducing friction for users), but we're at a stage where improving the quality of what we have is more important than adding new capabilities to the system. So, you know, there are a bunch of topics on here, but basic things like test coverage and a clear test signal.
A
Every time we try to cut a release, there are failing tests; for example, the upgrade tests are notoriously pretty much always failing. We need to figure out how we can change that, so we can keep Kubernetes releasable at any point in time. There are discussions about LTS releases; there are discussions about maybe we need more frequent releases. Every release has a lot of patches. You know, I wouldn't use the dot-zero of any Kubernetes release in production. I think a lot of people know that, but we should think about: well, is that desirable?
A
You know, making sure that the patch releases that we do have only contain critical fixes. I looked at the patches on 1.11, and 1.11.1 had about 120 commits in it, right. So how can we reduce that to a smaller set of more targeted fixes? Compatibility: I think we broke compatibility every release for the past year.
A
We really need to improve that, to where we are not breaking our users. You know, scalability, reliability, observability: people are operating Kubernetes at a very large scale in production, and we need to keep that in mind. So one thing SIG Architecture has on its plate is to define a release quality bar, together with the other SIGs like SIG Release, SIG Testing, SIG Docs, and so on, so that everybody understands what we expect of the changes and the features that go into the system.
A
We have some processes that copy code out of the kubernetes/kubernetes repository (that's called staging) so it can be consumed more easily by other repos and other projects. But all this has kind of grown organically, and it's confusing and hard to understand. So we really need a team to own this problem and make sure that our vendoring practices are sane and secure. We're really worried about vulnerabilities sneaking into our code.
A
Having
documented
clear
process
for
other
projects
to
consume
parts
of
the
kubernetes
codebase
that
they
need
figure
out
how
we
can
make
all
of
our
code
repositories
have
consistent
quality
and
make
it
easy
to
compose
a
release
out
of
non-working
pieces
and
that's
going
to
be
a
complicated
problem.
So
it
really
needs
dedicated
focus
of
people
working
on
solving
it.
A
So last year at QCon I presented in a keynote different ways that Kubernetes could be used: it can be used as a container platform; it can be used to distribute configuration; it can be treated as a whole portable cloud platform. And a way to factor the Kubernetes APIs into different layers, according to what level of pluggability or compatibility is expected across Kubernetes distributions.
A
But
as
I've
looked
at
evolving,
the
system
I
now
have
developed
a
different
way
of
looking
at
it,
which
is
more
in
terms
of
how
users
are
using
the
platform
or
pieces
of
the
platform.
So
some
of
these
emerging
use
cases.
We
have
a
lot
of
folks
interested
in
hybrid
and
multi
cloud
and
use
cases.
Multi
cluster
multi
zone
multi-region
use
cases,
especially
with
respect
to
service
discovery
and
load
balancing,
if
you're
running
your
application
in
kubernetes
clusters
over
over
region
or
over
multiple
regions,
and
you
want
traffic
to
reach
the
nearest
deployment.
A
For example, how do you do that? I chatted with someone at the gathering last night who's doing exactly that, and Kubernetes wasn't really designed from the beginning with that in mind. But clearly lots of people want to do that with it; how can we enable that? People are using service mesh with Kubernetes. People are running serverless, or these functions, on Kubernetes.
A
What is the Kubernetes control plane? Increasingly I've been thinking of the Kubernetes control plane as a resource management platform. So what does that mean? The highest-level view I've so far come up with for how to describe the Kubernetes system is this. I actually found the diagram on the left in an interesting blog, but, you know, it treats Kubernetes effectively as an operational database, or just as a repository for a number of resources that match what I call the Kubernetes resource model.
A
So
what
parts
of
kubernetes
make
up
the
resource
management
platform?
So
principally
it
sees
api's
at
the
bottom.
So
there's
API
service,
which
is
used
for
registering
aggregated
API
servers,
custom
resource
definitions,
namespaces
than
their
their
authentication
and
authorization
hooks
that
you
need
to
control
access
to
these
resources
and
admission
control
hooks
for
validating
and
defaulting
values
in
those
resources,
then,
if
you
want
to
actually
run
the
control
plane
itself,
there
are
some
operational
primitives
that
may
come
in
handy,
such
as
secrets
and
config
maps
and
endpoints.
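A minimal sketch of what building on those primitives can look like: registering a new resource type with a CustomResourceDefinition. The group and kind here are invented for illustration, and the apiextensions v1beta1 API shown is roughly the one current at the time of this talk.

```yaml
# Hypothetical example: "Database" is not a real Kubernetes API.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1        # CRD versioning support was still being finished
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
```

Once applied, the API server stores and serves `databases` like any built-in resource, and a custom controller can reconcile them.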
A
This is just a subset of that whole picture of sixty APIs, but if you think about just these primitives, you can build a control plane for pretty much anything. There are also some non-resource APIs, like the discovery APIs, the OpenAPI endpoints, healthz, and some things that are not yet exposed as APIs, and controllers, like for garbage collection; they would comprise a resource management platform. So, you know, as the use of the Kubernetes control plane grows to these other use cases, we'll be looking at:
A
How can we factor out just these bits so they can be consumed more easily by these other projects, which may or may not be running in Kubernetes clusters themselves? And I'll talk about some other use cases for that in a bit. There are some things we need to do to make the resource management platform more consistent, more generic, and more dynamic for these use cases: things like finishing the CRD work around API versioning.
A
As just one example, a recent one that has come up: we've found that the discussions are somewhat undirected, because we don't actually have any principles for how policies should be expressed in the system. We had some early policy mechanisms that were developed, like LimitRange and ResourceQuota, and other policies were implemented as hard-coded admission controllers. But as we move to more pluggable policies, how do we want policy to be expressed across the system? So there are a lot of questions that come up about whether they should be built in, or extensions.
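As a sketch of what "pluggable policy" can look like mechanically, here is a dynamic admission webhook registration. All of the names and the backing service are made up for illustration, and the v1beta1 API reflects roughly the era of this talk.

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy.example.com
webhooks:
  - name: pods.policy.example.com
    rules:                       # which operations this policy intercepts
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:                   # the (hypothetical) service implementing the policy
        namespace: policy-system
        name: policy-webhook
        path: /validate
    failurePolicy: Fail          # reject requests if the policy endpoint is down
```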
A
What
does
this
mean
so
service
mesh,
I'm
thinking
of
systems
like
envoy
sto,
which
provides
a
control
plane
around
systems
like
envoy,
linker,
D,
and
so
on-
that
allow
really
facilitate
dynamic
traffic
management
between
different
services
and
these
service
meshes
are
evolving
fast.
They
provide
service
discovery,
load,
balancing
routing
observability,
so
you
can
see
what
how
much
traffic
is
going
where
often
identity,
so
that
you
actually
can
authenticate
services
that
are
talking
to
each
other
for
kind
of
application,
level
of
firewalls
and
other
kinds
of
policy
enforcement.
A
Kubernetes provides a lot of this too. We have, you know, services and ingress; we have custom metrics and the metrics pipeline; folks are working on pod identity. And understanding the primitives in Kubernetes and the primitives in service meshes, how they relate to one another, and which ones users should choose is creating some confusion amongst users and in the community.
A
Then
it's
clear
that
this
can't
just
be
a
single
cluster
notion
or
a
single
cluster
primitive.
So
we're
really
at
the
point
where
we
need
to
rethink
the
primitives
that
we
have
in
kubernetes,
and
this
you
might
think.
Well,
this
is
this
just
a
signet
we're
concern
the
reason
I
bring
it
up
here
is
because
it's
it's
a
major
part
of
kubernetes
kubernetes
has
always
been
about
managing
containers
and
services.
A
Service was one of the four APIs Kubernetes had at the very beginning, but it's also an example of the potential for a really major change coming to the system, and how do we, as a project, accommodate that kind of change? Endpoints in particular has a lot of issues. I'm not going through all of these, but Endpoints is basically at a point where it needs a reboot. It needs a redesign, in that it has some fundamental problems that can't be addressed in a backward-compatible way.
A
So this is something that, you know, whether we're pushing on scalability, or multi-cluster, or how to integrate service mesh, Endpoints is kind of at the center of all those discussions. Similarly, Service and Ingress: Service has a bunch of issues and quirks; like I said, it was one of the original four APIs, so it has some functionality that accreted organically over time.
A
Ingress
has
even
more
issues.
You
know
it's
l7
routing
is
something
that
everybody
needs,
but
we
don't
have
built-in
ingress
controllers.
It's
not
GA.
We
don't
really
have
a
plan
for
how
to
get
a
GA
quite
yet,
but
we
we
also
need
l7
load
balancing,
not
just
for
ingress,
but
internal
l7
load
balancing.
How
do
we
do
that?
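For context, a minimal Ingress of that era looked like the following (the host, path, and service names are illustrative); everything beyond such rules was left to whichever ingress controller the cluster operator installed.

```yaml
apiVersion: extensions/v1beta1   # Ingress was still pre-GA at the time
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app   # hypothetical Service
              servicePort: 80
```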
A
So
you
know,
as
we
think
about
these
issues
and
with
the
emergence
of
service
mesh,
you
know,
I
feel
it's
it's
time
to
take
a
step
back
and
really
think
about
how
we
want
to
evolve
the
system
beyond
just
sort
of
incrementally
or
and
organically.
But
you
know,
obviously
that's
a
big
challenge
as
significant
compatibility
considerations.
So
you
know
kubernetes.
This
is
actually
the
fifth
year
of
kubernetes
and
you
know
if
we
want
to
think
another
five
years.
Where
do
we
want
to
be.
A
So,
infrastructure,
abstraction
and
orchestration
is
one
of
these
use
cases
that
people
are
using
the
kubernetes
resource
model,
for
we
actually
have
some
principles
but
they're
not
documented
for
how
the
project
abstracts
infrastructure,
as
people
run
kubernetes
and
more
and
more
environments,
consistency
across
those
environments
and
workload
portability
are
increasingly
key
concerns.
That's
why
we
have
the
conformance
program,
for
example,
to
test
portability
of
kubernetes
and
its
workloads.
A
One
pattern
that
we
have
you
can
see
in
the
storage
API
is
I.
Think
is
one
of
the
better
examples.
It
also
exists
with
pods
and
nodes
and
new
abstractions
that
we're
building
like
runtime
class,
but
the
storage
abstractions
have
existed
for
quite
a
while.
The
application
concerns
are
expressed
in
a
resource
called
persistent
volume
claim
and
that
just
expresses
the
user
intent
of
the
resource
they
need,
for
example,
any
storage
with
this
many
bytes.
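A PersistentVolumeClaim capturing just that application intent might look like this (the name, size, and class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi        # "any storage with this many bytes"
  storageClassName: standard
```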
A
The infrastructure concerns are in a separate resource, PersistentVolume, which represents the existence of storage with a certain amount of capacity. And the provider concerns are in a third resource called StorageClass, which details the provider-specific attributes of that class of storage. So this is a pattern that seems to work pretty well to separate these three concerns, and it's something we need to document and apply more consistently across the infrastructure that we abstract.
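The other two sides of that separation, sketched with illustrative names (the GCE provisioner is just one example of a provider):

```yaml
# Provider concerns: provider-specific attributes live in the StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
# Infrastructure concerns: a PersistentVolume records that storage of a
# certain capacity actually exists.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-0001
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  gcePersistentDisk:
    pdName: disk-0001      # hypothetical disk name
```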
A
Another pattern is application-driven provisioning and configuration. Again using storage as the example: the user expresses that they have a persistent volume claim, for example in a StatefulSet. If you need an instance of a volume per pod in that StatefulSet, you can express that, and it stamps out the persistent volume claims. Now, if volumes don't exist to satisfy those claims, they get provisioned automatically, so the user doesn't have to do that.
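That stamping-out is the volumeClaimTemplates field of a StatefulSet; a sketch with invented names:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: example/db:1.0      # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:              # one PVC stamped out per pod (data-db-0, data-db-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```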
A
So that's the general pattern: if we think about users trying to deploy their applications onto Kubernetes, they don't want to have to worry about the infrastructure; they want that to be managed automatically for them. This is the pattern that exists in the system that we should be following. And, you know, finally, something that's emerged more recently (we knew from very early on that this would come up) is how to handle topology, as people run Kubernetes, as I said, in more environments, or multi-zone and multi-region.
A
The scheduler has had some mechanisms to deal with topology for a while; spreading across multiple zones, again, is the typical case. Those are just modeled as labels on nodes, and on the infrastructure more generally, in the system. That has also been added to volumes: if you're provisioning a volume, on many providers the volume storage is constrained to a single zone of that infrastructure provider.
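The mechanics behind this: nodes carry zone labels, and delayed volume binding lets provisioning wait until the scheduler has picked a zone. A sketch, using the label keys of that era and one example provisioner:

```yaml
# Nodes expose topology as labels, e.g.:
#   failure-domain.beta.kubernetes.io/zone: us-central1-a
#   failure-domain.beta.kubernetes.io/region: us-central1
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer  # provision only after the pod is
                                         # scheduled, in the pod's zone
```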
A
So
you
need
to
provision
the
storage
in
the
same
zone
that
the
pod
will
schedule
in,
for
example,
so
that
support
exists
in
the
system.
Now
similar
literally,
the
support
is
getting
added
to
workload
controllers
so
that
they
understand
when
they
are
disrupting
pods,
doing
a
rolling
update,
for
example,
and
to
services,
so
they
can
route
requests,
preferably,
for
example,
within
the
the
same
zone
or
topological,
whatever
the
topological
unit
is
so
increasingly,
you
know,
as
these
capabilities
get
added
to
the
system
again,
we
want
to
model
them
in
a
consistent
way.
A
So
the
infrastructure
orchestration
is
at
this
point,
moving
to
the
next
level,
instead
of
just
modeling
the
infrastructure
that
the
applications
will
schedule
onto
people
are
now
using
the
kubernetes
control
plane
to
manage
the
infrastructure
itself.
With
things
like
the
cluster
API
effort
and
sig
cluster
lifecycle,
there
are
a
number
of
other
projects
in
the
ecosystem
that
are
doing
similar
things,
and
this
is
a
case
where
the
control
plane
may
not
even
have
a
cluster
or
need
a
cluster.
It
just
needs
to
you
model.
A
The
clusters
that
it's
managing-
and
this
is
this
kind
of
automated
management-
is
one
of
the
things
that's
enabling
users
to
not
care
about
the
infrastructure.
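At the time, the Cluster API effort modeled machines roughly like the following. Treat this as an illustrative sketch only: the API group was alpha then and its schema has changed substantially since.

```yaml
apiVersion: cluster.k8s.io/v1alpha1   # alpha-era Cluster API group
kind: Machine
metadata:
  name: worker-0
spec:
  versions:
    kubelet: v1.13.1
  providerSpec:                       # provider-specific machine details
    value:
      machineType: n1-standard-2      # hypothetical provider field
```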
So, for a long time there have been Kubernetes distributions in the ecosystem that make nodes, and the infrastructure more generally, just an administrative concern, especially in multi-tenant clusters, where the typical application operator or application developer can't access them.
A
For example, the node API: if you do kubectl get nodes, it just reports "not allowed". So that is one way to prevent users from having to care about the nodes: they can't even see the nodes. The node scaling and auto-provisioning I mentioned is another thing that SIG Autoscaling is working on, so that users don't need to think about the infrastructure. And finally, you know, a topic that is definitely going to be discussed in 2019 is making Kubernetes adapt itself to other container platforms as backends, if you will, to execute pods.
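Hiding nodes from application teams is ordinary RBAC. For example, binding a team only to the built-in edit ClusterRole within their namespace gives broad namespaced access while cluster-scoped resources like nodes stay forbidden (the namespace and group names here are invented):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-devs
  namespace: team-a          # access is limited to this namespace
subjects:
  - kind: Group
    name: team-a-developers  # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role: no access to cluster-scoped nodes
  apiGroup: rbac.authorization.k8s.io
```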
A
So all of these have similar goals. How can we make the model in Kubernetes consistent and understandable, whichever of these models is being used? I think that's an interesting question we need to answer. And, you know, beyond just nodes and volumes, we're seeing more and more, again, people using Kubernetes to manage all the things, all the infrastructure, including external DNS, certificates, and other infrastructure resources.
A
So
you
know,
as
this
ecosystem
grows
documenting
and
establishing
these
clear
principles
about
how
these
things
should
be
done,
will
help
it
all
fit
together
well
and
make
it
more
understandable
to
users.
So
this
was
a
quick
sort
of
whirlwind
tour
of
some
things
that
I
see
affecting
multiple
SIG's
in
the
projects.
A
The
discussions
will
be
happening
in
Sigma
architecture
and
in
many
other
SIG's,
like
cig
networking
is
to
cluster
life
cycle.
You
know,
as
I
said,
I
will
share
this
share
this
deck
and
try
to
get
start
more
documenting
some
of
these
principles,
for
example
as
part
of
the
sig
architecture
effort.
If
you
want
to
help
with
that,
please
join
us
inside
architecture.
I
guess
I
have
time
for
questions.
Do
there
any
questions?
Bob.
B
There is a feature branch experiment underway. So, between all the branches, and the repos, and the repos vendored in, and the repos vendored out, if you will: where do we actually want to go with all this, and how do we manage it, and then how do we create quality, releasable artifacts out of all that?
D
So, like, if you care about dep or Go modules, or why do we have these crazy staging repos: you probably have opinions or complaints about it, but what we need you to do is turn those opinions and complaints into constructive criticism, in the form of continued effort to improve the problem.
D
So, while I have the microphone, I'll ask my other question and hand it off to Brian, which is: I think a lot of these sound like really big tasks. If you are interested in contributing to these, I feel like there has been some concern over goalposts moving, or it being unclear how you contribute. I can certainly say, as a member of the steering committee, that it's not, like, very clear how to do this. So, what would your suggestion be, Brian?
A
Great question. So I'll pick on Dims as a good example. In the project, the way you usually start with something like this is to dig into the problem, understand it, and be able to explain it to other people, and then you need to find a set of people you can explain it to. It's really easy to get on the agenda of SIG Architecture, which has a set of technical leads and people who have been involved in the project for a long time, across many SIGs of the project.
A
So
you
get
a
pretty
broad
cross-section
of
people
usually,
and
you
can
bounce
ideas
off
there
once
you
have
a
clear
idea
of
what
you
want
to
do,
you
can
create
a
kept,
which
is
a
kubernetes
enhancement
proposal
and
the
caps
have
just
been
moved
to
the
enhancements,
repo
I
believe
and
we're
trying
to
really
establish
caps.
As
the
way
proposals
happen
in
the
project,
we've
had
a
cargo
cult
proposal
project
in
the
community
repo
for
a
long
time.
A
It originally was in the kubernetes repository, and we're trying to formalize that and make the process more clear, and what information is expected more clear, by consolidating on the KEP process. You can also build a prototype to inform the proposal, if it doesn't take too much effort, because you might have to make considerable changes based on the feedback. The reason I mentioned Dims is because Dims recently had a proposal to introduce a new logging library across the entire system, klog, right, and this is a pretty pervasive change.
A
Obviously
a
lot
of
people
had
opinions
about
it.
It
took
some
discussion,
you
know,
but
if
you
stick
to
it-
and
you
make
your
case
and
you
actually,
you
know,
implement
and
carry
it
through
and
implement
all
the
tests
that
are
needed
and
so
on.
Then
people
will
be
more
than
happy
to
trust
you
to
do
it
right.
You
demonstrate
and
understanding
the
problem
demonstrate
the
consequences
of
the
solution.
You
demonstrate
that
you're
willing
to
iterate
on
it
and
resolve
the
issues
with
it.
That's
what
we're
really
looking
for.
A
So, you know, just reach out to SIG Architecture. If you're not sure who the right people are to talk to, the SIG Architecture mailing list should work well. Not all of us can monitor Slack all the time, but you can try there as well, and you can ask to get on the agenda for a meeting and we can discuss it there. We have meetings every week on Thursdays at 11:00 Pacific.
B
Has there been any thought to doing some sort of, like, "I'm not a contributor yet, but I want to be" sort of rotating workshop, that people could go to and take, like, a day, and just sit with some mentors and get walked through some of this technical action, and some of the new contributor stuff, like, you know, "here's the hard parts of getting your first line of code in"? Because I think in some way that would make things a lot easier, and kind of the graduation from that would be, okay.
E
Yes, that is being worked on; I actually need more help. I've been trying to work out this concept of the one-on-one hour for quite some time now. Unfortunately, our community is so large that I can't work on 99 things, but I do welcome your help with that. The one-on-one hour is going to save us time with mentoring and mentors' time, and this one-on-one hour is sort of an ambassador level into the project. You can pair-program.
A
I believe there's also an API machinery walkthrough at the conference, yep, walking through the code, and there have been a few of those done and posted to YouTube as well. Yeah, we'll try to find more SIGs interested in doing that. Let's say, for the architecture-related projects specifically, they're not so friendly to new contributors, because they are more complicated and cross-cutting, and take a lot more time; you know, it may take a year or more for some of these changes to be implemented.
C
There is a lot of copy-paste in the Kubernetes mono-repo, and I think that partially it could be related to the limitations of the Go language, but also I have noticed that sometimes there is an inconsistency between, like, using plurals versus singulars, and sometimes it could be confusing and not easy to refactor. Is there any plan to, like, improve that part of the Kubernetes?
A
"Plan" is a strong word. So, I think that would be a great kind of thing for a new contributor to tackle.
Honestly, if you encounter friction in the codebase, or you find a certain library, for example, that you think could be deduplicated or made more consistent with some other part of the codebase: cleaning that stuff up mechanically will help you work through the development process and the CI and all that, without, you know, actually changing the semantics of what happens, and it will build karma and trust with the reviewers and the rest of the community.
A
So, you know, I think that aligns with this slide: making our code more understandable will make it easier to make the code higher quality, and more stable, and more reliable, more correct. So those kinds of changes are generally beneficial; so, yeah, go for it. Okay! Well, thanks a lot.