From YouTube: Community Meeting, October 26, 2021
Description
No description was provided for this meeting.
A
Hello and welcome to the kcp community meeting, October 26, 2021. We have some items on the agenda, I think many carried over from last week. The discussion last week was very good, thorough, and wide-ranging, but I forgot to mention a very exciting topic near the end of the agenda, which is: welcome! Andy Goldstein is here and has joined our kcp team at Red Hat. I assume you've prepared a speech, perhaps in song form; I'm not sure, you said it would be a song.
B
And dance, yes. Thanks, Jason. Nice to meet everybody that I haven't met before; I'm ncdc on GitHub. I've been involved in OpenShift and Kubernetes for a while now, at Red Hat and Heptio and VMware. I've worked on Kubernetes proper, on Velero for backup and recovery, and on Cluster API. So I've kind of been around lots of different avenues, and kcp is really exciting to me. I'm super happy to be back at Red Hat working on this. So please reach out if you want to chat, and I'm looking forward to getting to know this community.
A
Great, it's good to have you. It's nice to have more people who know where all the bodies are buried, to help us bury some more; that was sort of morbid, I'm sorry! So last week we had a lot of discussion about the two-phase transformation and syncing concept, and I'm not sure if we ever finished it or just ran out of time. That being said, I think David is not here (David's on PTO this week), and I think Clayton is not here yet.
A
Last week we also had some impromptu discussions about sharding, and we've actually had some conversations even earlier today about sharding. I don't know if we want to...

I feel like we need to document this better, but I don't know that it makes sense to document it now, while it's still relatively in flux, or it feels sort of in flux. I don't know if you feel the same, Steve, about the fluxitude of current sharding ideas, but I feel like this is going to be a really important aspect. If kcp is a minimal API server, it doesn't even matter for the multi-cluster stuff: if it's going to be a performant, scalable, resilient API server for anything, sharding is going to play a big part of that. So, yeah, I don't know if we want to go over some of the recent discussions, current blockers, the stuff we're thinking about.
C
What's the point of a shard? I think there's a mostly finished doc that tries to put all of the conversation that happened, answering the "why," into words, and that helped us narrow down to some specific client flows and use cases that require a sharded API, or require a sharded API to be exposed under one endpoint versus being able to do some sort of client-side stuff.
C
I don't know that we're... I mean, I think we should just make that document public. Yeah, okay, Clayton is about to do that. And then the other one is, I was digging into: okay, now we have a minimal endpoint, what does this actually look like to use, and what are the implications? That's very much still in flux, but my biggest takeaways right now are...
A
The real core underlying issue is that, well, our current approach for making sharding work is to exploit the opacity of resource version strings: assuming clients don't try to derive any useful information from them, we can put whatever we want in there and have our shards derive important routing information from it. I believe that resource versions are already supposed to be considered opaque.
D
They were opaque, and then, given a large enough user base, someone will not read the docs, because who reads docs as a developer, and also people needed things that the auto-incrementing integer provided. There was no formal solution for that in Kube, and so we did the classic "well, this is probably okay, but do what we say, not what we do." And so now, at this point, the risk is that we don't understand the scope of what it means in the client space.
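(As an aside, the anti-pattern being discussed looks roughly like the sketch below. This is illustrative only, not code from kcp or Kubernetes, and the helper name is made up.)

```go
package main

import (
	"fmt"
	"strconv"
)

// Anti-pattern sketch: deriving ordering from resourceVersion.
// Kubernetes documents resourceVersion as an opaque string, but code like this
// quietly assumes it is a monotonically increasing integer, which is exactly
// the assumption a sharded (or differently backed) control plane breaks.
func looksNewer(a, b string) bool {
	ai, aErr := strconv.ParseUint(a, 10, 64)
	bi, bErr := strconv.ParseUint(b, 10, 64)
	if aErr != nil || bErr != nil {
		// Opaque versions are not orderable; equality is the only safe comparison.
		return false
	}
	return ai > bi
}

func main() {
	fmt.Println(looksNewer("1002", "998"))      // appears to work against a single etcd today
	fmt.Println(looksNewer("shard-b/52", "17")) // silently wrong once versions stop being plain integers
}
```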
D
Some of the implications of this are, call it the consistent list-watch: you need these properties. We've come up with a few examples in Kube where we need things that are similar to it. We're still just exploring, and roughly, sharding implies that if you have multiple things, you need to build a consistent history. We're exploring that right now. The discussion that prompted it was that we're basically weighing two options, which are to try to do hard sharding, or to just make it somebody else's problem.

Like, you could say that if you replace etcd with Postgres, you lose all the etcd operational characteristics but you gain a scale characteristic; but then you still need a sharding or geographic mechanism. We've kind of set it as an axiom that we need a geographic mechanism, so either we're implementing it or we're delegating it, and delegating has constraints. So yeah, it's one of the client semantics we need to deal with for sharding. Getting it all written down is a very useful exercise, because we haven't really tackled it in Kube, and as far as I know there are only a few people who've touched on it in various dimensions. This is us actually trying to write it all down in one spot: this is what sharding Kube would look like, whether it's...
A
So I had assumed, and it sounds like I am correct in my assumption, that there is some document somewhere, a public Kubernetes document, that says resource version is supposed to be considered opaque. In practice...
D
We didn't actually define what that meant. Basically, Daniel Smith and I and a couple of other folks in the early days were just like, let's bound it so that we can keep going. I think this came up very early on, in the first six months of Kube: hey, let's put some boundaries in place in case we want to shard and break the problem up. Eventually we did do different back-end storage for events, because at the time events were a high write rate.

Then eventually we added extending APIs: we added both CRDs, which use the same etcd, and aggregated API servers, which we started with the assumption that people were going to use different etcds. But operationally that turned out to be such a giant disaster for a cluster operator that, in practice, all the aggregated APIs I'd ever want to support use the underlying etcd of the API server. So that doesn't help you. Right, and then we basically said, hey, everybody should just control the write rate, so we added priority and fairness. And so here we are, seven years later, and people do exploit the property in a few cases. There are a few problems in Kube that are unsolved today, like restoring from an etcd backup: controllers do not automatically detect and re-synchronize when they experience a state reversion.
There's no easy way to do that, except detecting from a changed object that, oh, this object went back on generation, because generation is an integer; but generation doesn't cover the full object, and then with resource version, if you can't compare a resource version, you have no idea. So again, people have tried to think through workarounds. I feel like this is all stuff that should show up in the docs, like the guarantees that controllers and clients expect, but we haven't done that, and we probably have a work item that should go under there, which is: canvass the ecosystem. Someone should go spend time digging through the ecosystem to see who abuses this.

There was an effort about two years ago to make them opaque, which we stopped, because we were already hitting problems in our own code base, and we were making them opaque without really understanding what properties we would need. So that's kind of the current state of resource version.
It is the currency, yeah. And so I went and reviewed some of those, because I needed to physically change it, and some of them were actually wrong, in the sense that we were trying to do something that might be better solved a different way. One of them was comparing against a cache; the kubelet has a cache.
D
When you make a change, there was a theory early on that we would use write-through caches in controllers (and the kubelet is a controller), where we'd make the change, record it in our own cache, and then wait to see it. But you can't do that without being able to compare an incoming event to an object you just changed. So there are a few places using it in the kubelet that I'm aware of, which may be different.
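(A minimal sketch of the write-through-cache comparison being described, using made-up types rather than real client-go ones: equality of resource versions is a safe check, ordering is not.)

```go
package main

import "fmt"

// After issuing an update, the controller remembers the resourceVersion the
// server returned. Equality against that remembered value answers "is this
// watch event just the echo of my own write?"; ordering comparisons ("is this
// newer than what I wrote?") are unsafe, because resourceVersion is opaque.
type object struct {
	Name            string
	ResourceVersion string
}

type writeThroughCache struct {
	lastWrittenRV map[string]string // object name -> resourceVersion from our last write
}

func (c *writeThroughCache) recordWrite(o object) {
	c.lastWrittenRV[o.Name] = o.ResourceVersion
}

func (c *writeThroughCache) isEchoOfOwnWrite(o object) bool {
	return c.lastWrittenRV[o.Name] == o.ResourceVersion
}

func main() {
	c := &writeThroughCache{lastWrittenRV: map[string]string{}}
	c.recordWrite(object{Name: "pod-a", ResourceVersion: "41"})                       // what the server returned from our update
	fmt.Println(c.isEchoOfOwnWrite(object{Name: "pod-a", ResourceVersion: "41"}))     // true: our own write echoed back
	fmt.Println(c.isEchoOfOwnWrite(object{Name: "pod-a", ResourceVersion: "57"}))     // false: someone else changed it
}
```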
D
And we did change the informer to compare zero as a string, because we defined zero on the request side as having a specific meaning. So we actually formalized it, and we said if you pass the string zero, then the end result is that we ignore all caches, and then we violate one of the consistency guarantees that the kubelet actually needs. So that's the bug I referenced, where, right...
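(For context on the "string zero" being discussed: as documented today, resourceVersion "0" on a list or get means "any version is acceptable," so the server may answer from its watch cache rather than doing a quorum read, which is the read-after-write gap described above. A minimal client-go sketch, assuming an in-cluster config and the default namespace:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running inside a cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// ResourceVersion "0" is the one value with formalized request-side
	// semantics: "any version is fine", so the response may come from the
	// API server's cache and can be arbitrarily stale.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
		ResourceVersion: "0",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("possibly-stale pod count:", len(pods.Items))
}
```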
C
I think it's much less in clients; I guess they're probably using it for generation. But I do think it's interesting that you mentioned that it was supposed to be opaque, or sorry, comparable within a namespace.
D
We knew that we would eventually want to consider sharding, and so we were leaving ourselves room to maneuver, but then we didn't actually do anything to defend ourselves, and in the meantime we abused it elsewhere because it was convenient. And we didn't do a KEP that said, let's clarify this, because everybody was like, KEPs are hard, we'll just fix this. But I...
C
That's the interesting part of that statement, because also, to be clear, the documentation as written today doesn't make that distinction, right? Does it?

No, but I think that's basically the conversation we were having yesterday, which is: if we don't enforce some sort of total order between events across different shards, or, I guess, moving back one step, the client that's going to benefit the most from a sharded list-watch is the type of client that's trying to do some sort of aggregate computation across multiple data
that's, you know, in multiple cities, and I think we brought up quota as one of these examples. So in any writer context, without having an ordering between items that are in different etcds...

D
Most people don't actually test their controllers with real latency, and so the moment they actually get to a point where they have real latency, they find subtle race conditions that just show up. Until you get to about 100 or 150 milliseconds of latency, most people's controllers don't; it's uncommon to hit that, but when it does, every controller I have ever seen that's trying to do anything...
C
Well, and I guess where I was going is this idea of resource versions being utilized within a namespace. We have an analogous implication for shards: you really can't expect any sort of comparison or ordering between shards, or really between workspaces, because workspaces can move around.

And so I think that's a generalization of that namespace concept, and I think it's probably worth doing an investigation of how many controllers today really fall apart when you don't have a consistent ordering between sort of independent or unrelated events, or whether most people are building controllers that have slowly but eventually convergent behavior, like quota. Because if they're doing that, then it doesn't really matter that we're only providing a partial order.
D
Quota is one that I wanted to set aside for a second, because the way we implemented quota in Kube was, well, at the time etcd didn't have transactions, and so we chose to build an on-top system that may not be the best design for quota, because it was a lot of work to get to the point where it kind of worked, and it's not a generalizable system for true database quota.

Every true quota system usually gets closer to the data store, because there are certain guarantees that you can't provide the way it's implemented today, or it's much, much harder. So that's one I wanted to separate out: quota is usually kind of a fundamental property of a database, as is ingress control or admission control, in the sense that most databases use it. Admission control is all about making sure that you have hard boundaries on how many resources you consume, so that it's not trivially easy to blow up a server. Kube didn't have a lot of extension in the early days, so it was all the use cases brought by Kube; now we're getting to the point where priority and fairness is real, so it's worth going back and reassessing it. So, setting aside quota, I suspect there aren't many total-ordering cases, but I agree a canvass would be useful. And even in some basic exploration, most ordering problems basically boil down to: it's better when you give up on ordering as a client, don't try to order events, and just converge.
A
So stepping back, fundamentally we will need to walk back all these clients that have been deriving semantic information from the resource version, because we are going to completely break them when we make them opaque again, or when we make them useful to us but opaque to end users, right? So...
D
If you say that we want most controllers and most clients to work unmodified, we do have the escape hatch, which is: is it less than five percent of controllers that are like this? Are we still within our 95% error budget for client compatibility? I suspect, just based on what we know today, that if it's not required in client-go, and our existing controllers work, and we don't believe that property is any worse than existing Kube, like restoring from a backup or whatever, then I would probably say we're still 95% compatible.

What would be our recommendation for them when they have a problem? We're trying to anticipate the problems that they'll have that we may need to solve, and the implications, because today everybody gets total ordering, so you don't know who's dependent on it. The worry would be that more things are relying on total ordering and we will break them. People who are using resource version to compare have explicitly gone the extra step to call out why they're doing it.
D
It is, and, well, it's easy, but it's hard to get generation right, because it involves comparison of the object in a semantic way on spec. Arguably, and this is another thing in Kube, if we had really thought about it, we should have just added a spec index or, sorry, an overall object index. We were leveraging what was there; we kind of don't have one, and we didn't really need a status generation or the overall generation.
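(The generation check being alluded to is the usual observedGeneration pattern; a minimal sketch with a hypothetical Widget type, not any real API:)

```go
package main

import "fmt"

// Hypothetical resource for illustration: metadata.generation is bumped by the
// API server on spec changes only, and the controller records the generation it
// last acted on in status.observedGeneration.
type Widget struct {
	Generation         int64 // from metadata.generation
	ObservedGeneration int64 // from status.observedGeneration
}

// needsReconcile is the "easy, but easy to get wrong" comparison from the
// discussion: it only notices spec changes, not metadata or status changes,
// which is why generation alone can't stand in for resource version.
func needsReconcile(w Widget) bool {
	return w.Generation != w.ObservedGeneration
}

func main() {
	fmt.Println(needsReconcile(Widget{Generation: 3, ObservedGeneration: 2})) // true: spec changed since last reconcile
	fmt.Println(needsReconcile(Widget{Generation: 3, ObservedGeneration: 3})) // false: already handled
}
```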
D
There are some other problems there. When you compare an object, the property we were going for with Kube objects is that if you write them and there's no change, you don't actually execute the database change. Subtly putting things into the object that change when they're changed, you have to reason through that. So I think a CRD author absolutely could do it; they would just probably hit a lot of the same problems. There is an argument, and I was advancing this with Dan Smith, that maybe we should have an overall generation.

If I don't own a field, or someone else has taken ownership of it, it's kind of like a blind put, and again, there are a lot of subtleties in why blind put doesn't work and all that. But I think it's possible a CRD author could do it today. Would we have a recommendation for them? It's possible. I don't have an anticipation, because all the places we used it in Kube were subtle; the write-through cache is the most obvious example of needing to know. And again, people may have implemented this and not realized that they have an inconsistency, right? This is why distributed systems are hard; it's really hard to reason about. You thought threads were hard, or you thought...
D
...but that's functionally no different from being correct. So, you know, accepting the probability that things are broken makes the world a lot easier, and we're basically trading on that in a lot of use cases. So the doc has a couple of these notes. It was extremely useful to have the discussion, and I'll get that doc out as soon as possible. It doesn't go into all the implications we're talking about here; we should get those documented as well. So this is kind of our "how do you write controllers" summarization: what are the problems we believe controller authors face today that aren't covered by Kube, that we can improve in Kube, and then, if we were to shard it, which problems and properties do we need? We're trying to understand requirements and not break anything, or determine whether it's just a gap in Kube, versus the alternative of: we don't try to do sharding; we rely on a data store that scales better than etcd, which is a completely different set of problems.

It was useful to have the discussion, because we were presenting this as an exploration, and it's weighing at least two alternatives right now: don't try to fundamentally solve sharding, just make that someone else's problem, and deal with the consequences of that, which may be just as expensive, because that's emulating everything that a controller would need. How do we test that we've emulated everything a controller would need, globally?
A
It sounds like we are going to be talking to a lot of people, but whether it's presenting it halfway through the exploration, with the two ideas we're juggling, or at the end of the exploration, summarizing that somewhere for some external audience would be super helpful. Probably more helpful if we do it in terms of "here's where our thinking currently is, we haven't decided," because then we can still get feedback.

D
And not even that, but we're trying to do a survey of how controllers are succeeding. Yeah.
A
Yeah. Andy, you have your hand raised.
B
Thanks. I also think that a lot of these problems are probably not faced by some or all of the controller developers out there, because, at least with Cluster API, for example, we just said: we know there's going to be some scale limit; a single cluster that's managing other clusters and machines can only manage so many. We don't know what that number is, because we haven't done the tests.

But basically, if you hit a limit, just go create another management cluster, and then you do the sharding manually, in your head or on a wiki page or something. So you say: if you need this management cluster, you go over here; if you need that management cluster, you go over there. And I think what we're talking about is that we don't want people to have to do that.
D
You're talking about how a lot of the mindset for controllers carries over, but it is crossing into a new domain that's novel in scale and in the desire to present something more approachable. Because in Kube we had no idea whether the controller pattern would actually resonate with those problems. Kube benefits from the fact that a lot of those problems are closely related: you've got a bunch of crap machines, you've got a bunch of crap cloud APIs that were eventually consistent.

We may keep some of that, but we may actually start caring more about some of the other properties: scale, and whether someone can actually break the problem up into tractable things. Because again, Andy, I think your point is that sharding is the problem of breaking things up into co-located failure domains. We've kind of said we're trying to help people: if we're going to bring everything together,
we know that you'll want to break it apart. Can we bring all the people together, but then let them break the problem up? That's, in a nutshell, a hard problem, but it might be worth solving at this particular time, because we are all just doing it via the wiki-and-bash implementation approach and GitOps, right, a GitOps system.
A
Yeah, I think you're definitely right to call out that the scale we're talking about is not the scale that most people ever hit, or are hitting currently, and that usually the solution is one more band-aid. And then, about the time you have 15 band-aids and 25 clusters in your wiki, and bash scripts to generate bash scripts, you're like: this is terrible. I wish I had built this the right way from the start. It seems like what we're doing is trying to give an idea of how to build it right from the start, even though most users will not need it now. Hopefully.
D
So, Jason, that just reminds us that we need to call out the set of canvassing we need to be doing. There's one for controller problems: how people are using controllers, searching for the constraints that existing Kube controllers assume, or examples of problems with consistency that are just hard to reason about, so that controllers in general get more effective. The other one is: we have a hypothesis that everybody's got 15 band-aids. It's backed by a lot of observational experience.

We want to make sure that we have a good system for talking with those consumers and assessing it. I think the project was intended to be a point of discussion where we could go find people, and we've had a few inbound. We probably need to call out an explicit task, which is that we need to do a lot more outbound communication with high-scale, multi-cluster users along a number of dimensions: what are the layers that you're building, what are the persistent problems you're hitting, do any of these solutions appeal to you?
A
...the resource version comparison function, you know, amnesty for one day while we fix that. Because I think fundamentally we're going to need to strengthen the language in the docs and even purposefully break the functionality early, before people come to kcp. Like, if there was some flag in regular Kubernetes to make resource versions non-deterministic and random and break that, it would flush out a lot of these use cases really quickly: oh, my controller broke, because I needed that.

Well, now we've learned that you needed that; come to us and we'll help you fix it, so that you can step up into kcp when you need that scale.

I don't think the exploration is done, but I think the time to present it to folks and get their feedback is before it's done, right? I don't want to present our solution to the problem and then find out there are 20 other problems with it.
D
If you've built 12 band-aids, you may not be ready to pull any of them off; if you're at 20 band-aids, you're probably like, man, where's all this blood coming from? So what we're looking for is people in the spectrum from five to 20 band-aids, to look at how much the problem overlaps and verify that, as well as going and finding some people who are getting to 14 band-aids who'd be invested in switching out some of those band-aids, maybe even looking for areas where we could do one or two band-aid removals. The people at 20 are the ones most incentivized to change, but they may have the most requirements, because they may have to solve that problem now. So we are trying to...
A
That can overlap. Steve's comment was: go to SIG API Machinery and present the idea of opting into a more opaque RV. I was going to propose that Steve do that exact thing, but instead he beat me to it in the chat. So I will.
To do this so that we can magically do some things, I think... I haven't committed to writing a KEP yet, but I think we could say: resource versions were always supposed to be opaque. Unfortunately, they are not, and people use that; some things inside of Kubernetes use this, we are ourselves bad, but other controllers came to rely on that behavior too.

It prevents sharding from being possible, or at least sharding the way we'd like to do it. In order to assess the scope of this problem, to assess how many people are depending on resource versions being transparent, being non-opaque, and maybe just as the first step of a migration path away from meaningful RVs, we could have a flag that you can enable on your cluster to make them completely opaque, completely random, completely non-comparable.
D
So my only hesitation there is: is that a good enough reason to take up the time of SIG API Machinery with a KEP, or do we frame the KEP as "opaque RVs would be helpful in these scenarios, but we're still early in the discovery process"? I guess the question is what makes it the most important problem. Writing a KEP is effectively taking someone's attention, so SIG API Machinery's default answer to new KEPs like this would be: can you convince at least a couple of people that it's worth them paying attention?
D
My worry would be: do we need to couple the other parts of the problem to it, which are, you know, time-travel detection or an approach for backup, which we may or may not. I guess framing and drafting a KEP would be great, but then I think it'd be: is the "why" worth the effort?
D
And if we go down this path, do we understand the problem well enough that we wouldn't back ourselves into a corner? If we do opacity without comparability, do we need to know whether comparability matters? Just drafting the KEP and saying, hey, we would like opacity, we think we might want comparability, here are five examples of controllers that need comparability and here are the problems: that may not be fully required to start the KEP, but it might be required to move the KEP to implementable or something. Yeah, it's fine to start the process. It's just...

I feel like that process of going and figuring out whether anyone needs comparability was kind of what we were talking about with canvassing the community. Starting a KEP and finding participants is one way. Starting a draft KEP as a Google doc and trying to find people, going into SIG API Machinery, going to the controller communities, which is kind of what you were saying before, may be just as effective.
D
It impacts Kine, but I don't know how serious Kine is for anybody except k3s right now, so every k3s user who's implicitly using Kine is probably okay; I haven't seen serious large-scale use. So asking the Kine community, going and searching through the Kine community for people using it (Alibaba, I think, had some large-scale stuff), someone needs to do an assessment of Kine ultimately, for the other branch of the strategy tree for sharding. At just a quick glance, it's being used and there are still some gotchas; I'm not... they're not...
A
Yeah, so it sounds like a real, actual KEP is probably heavier weight than we need, at least for now, but a draft KEP will at least get people's attention and get people's feedback, which is really a survey
D
of the community, yeah. We're looking for people hitting these problems, yep. And I am not trying to say don't go talk to SIG API Machinery; I don't know if Stefan's on the call. I'm always in SIG API Machinery, and it's a whole bunch of people who have been in this for seven years and they're just tired, and everything's hard, so we're barely able to get... I haven't gone and gotten the limit stuff to GA.

It took us like two years to get the table formatter to GA, and we implemented and delivered it. So it definitely operates on glacial time spans, because there are a lot of Kube problems; any problem really has to be urgent, or someone has to really commit to it long term. I think we're kind of saying we're committing long term to getting this movement in place, but we need to provide the motivating force and be like, hey, don't panic, we're looking for people who are going through these problems.
A
I think it will be a difficult balance, because I also don't want to show up in two years with a fully formed design saying "this is all the stuff we're going to do to you," but I don't...
D
We've been over-communicating a few of these ideas with Daniel, but it's been, you know, we've brought this up a couple of times over the last couple of years; Jordan and I have chatted about resource version and guarantees. We're all just busy people dancing around each other's slices of attention. Steve, I think, is great at starting to pull it together, and going and fetching the water and chopping the wood is always appreciated.
D
As far as I know, Google is deploying with events stored in a separate etcd. OpenShift has never done that, because the operational complexity was too high; we just said fix the write rate of events, so Derek went and did a lot of the event throttling with priority and fairness, so we've got some controls on write rate. But yeah, Joe Betz has definitely been dealing with that; Joe has always kind of had that concern that etcd as a store does not have an effective admission mechanism.

One of the trade-offs between option one and option two is: if we need an effective quota and admission thing, there are systems in modern databases for that, and it takes a good couple of years to develop; it's probably 5 to 10 person-years of work to develop a good admission and quota system in a database. etcd doesn't have it, and I don't see it happening soon. That's another factor.
D
The bigger one is: how are large-scale deployers hitting those band-aid problems? The smaller one is the controller patterns and problems. Andy and Steve are going to chase that: Andy's going to gather input from Twitter, and Steve will pull together some docs, gather input, and work with, or at least mention it to, SIG API Machinery, and take it to the community.
A
Yeah, I mean, the problem of corralling API Machinery's collective attention, corralling and managing it and not oversubscribing it, paired with the idea that some folks are already just doing this in Kine...
D
And we're trying to say we want to solve sharding and failure domains: the store is a failure domain, backup incrementality is a failure domain. Is that important enough? I've got a few of those copied into the doc; some of the questions were like, what is it that's important enough to make someone say, yeah, of course I want to run a control plane like this? The broader one is: a control plane can fail, so then you add more control planes.

Is there something in between, where you have kind of a broad set of control planes that's generic, and then you have more specific, exact clusters, and then do you have even other types of control plane problems that are relevant? Or is there value in just trying to get to two layers and saying: you've got clusters and a higher level, and that's it, and if you use this it simplifies all your other problems? Part of that is having to have a shardable data store.
A
Yeah, and even aside from talking to other groups, which we can do now, soon, anytime (on Twitter you can respond to Andy's tweet right now), I think this is also a really interesting topic to propose a talk on for KubeCon next year in Valencia, which leads me to my other point: the CFP for that is open now and closes December 15th.
A
So if you, meaning you Steve, or you Andy, or you Clayton, or anyone else who's following this, are interested in presenting it, then I think this would be a really good thing to propose, because it has two goals. One is "look at how smart we are, look at all the stuff we've thought of," and the other is to be a lightning rod for "this will break me in a way that you haven't thought of." So I feel like having a public declaration of this conversation will be useful, both for advancing the state of the art in what we all think about this, and also for getting people to throw pies, and in throwing their pies, tell us what we're doing wrong in a useful way.

I don't know if that is at all interesting or enticing to you, Steve, or Andy, or anybody, or me, maybe I'll do it, but I think it's an interesting topic and I think we should share it. Yeah, call for questions. I...
D
I would also add a third one. Maybe this is another thought, but I don't know who's done it: there's kind of a general problem, and as I was going through this I was asking, what are all the design patterns for controllers? We have some record of how to write good controllers. Are we doing enough? Can we review with the people who are kind of driving that ecosystem?

You know, early on, Brian and Brendan and Dan and Tim, I think, did the paper that laid out the controller pattern in Kubernetes. It's been six, seven years, and it really deserves a rehash. So there's maybe a broader question: if anybody's done this, or is doing something adjacent to it, can we go and assess where they are, and if not, maybe we should seed an effort in one of the groups. Are we doing a good enough job of boiling controller design patterns down into actionable code, documentation, and experience? Because even qualifying that helps us say, well, which of the problems are just harder to solve.
A
Yeah, I know that when I started writing controllers, there was an absolute lack of documentation of best practices. There were some talks and some blog posts and some little nuggets buried out there, but it's a lot of lore and cargo-culting, and copying some snippet that you see from some other place. It is not a well-defined practice. I think we want...
D
...controllers to be the Rails of distributed systems, for the purposes of managing the crap we have. If that's...

A
...not achievable, we should know. There's a lot to do to make it as good as that.
D
And we have the experience now; we're seven years in. Rails came about six years into, you know, the web: small consulting teams building data-driven enterprise apps, what we would have called the two-tier or three-tier app at the time, and being able to stamp them out. That's kind of the value of the band-aids discussion: if we get enough of the band-aids all at once, we can potentially help medium to large-sized organizations do this, and then we could provide integrators ways of doing it, because we're trying to get at that: hey, it should just be easier to build and deploy and orchestrate all this other crap. Infrastructure as code has had its chance; configuration management has had its chance.

How many need to do that for cloud functions? How many need to do that for on-premise environments, et cetera? The motivation should be: is controllers the pattern, or is controllers not the pattern, or is it part of a larger pattern? Is it controllers plus ad hoc automation plus bash scripts plus a bunch of config files and GitOps? We shouldn't discount the possibility that it's not the framework for building these distributed-systems things that makes integration easier.

So that's a separate thread, but it's a subthread.
A
Yeah, sorry, just going back through the chat. Yeah, Steve, as Steve said, I'm not sure what end-state finished product would be necessary to actually create a cohesive, interesting presentation. I don't think we have to present "here is the final, finished solution that we came up with." Rather than the journey of "experienced a problem, thought about it, solved it," I think it's "experiencing a problem, thinking about it, here's one possibility, question mark." You know, it doesn't...

I definitely don't want to present it as "we have solved the problem of distributed applications," because, I mean, unless we have by December, in which case that'd be great, let's do that instead. But just advancing the state of the art: here's the problem we're having, here's how we're thinking about it, here's the problem we think you're having (this isn't just us, we think you are having this too), and we can help you with it. All we have to do is pry semantically useful resource versions out of your cold, dead hands, and make controller writing easier, and shard stuff, you know, easy stuff like that. But yeah, I would find that a compelling talk to listen to, and maybe even a compelling talk to give. So, yeah.
A
I don't know if there's anything else anybody is burning to talk about; we've taken up most of the time again with this discussion, but I think it was a useful and good one. I'm going to take an action item to volunteer Steve and myself to go to SIG API Machinery and talk about this problem, whether that's in terms of a draft KEP or in terms of "here's the thing we're thinking about, help us think about it better."

I think it's becoming something we can start to present; a month ago this wasn't something we could even talk about coherently. Whether or not we're talking about it coherently now, you know, TBD, but yeah. So, with the last few minutes, if anybody has anything to discuss... If nothing, I'm more than happy to take five minutes before my next thing.
D
Is there anybody who joined the call who has a topic they didn't feel comfortable putting on the list but wants to bring it up now? One of the things I was wondering myself, Jason, is that we kind of run through the topics, and it's a lot of very specific topics. Maybe one of the things we should do is, if people join, find out what they're interested in.

If you feel like you don't want to put it on the list, we might want to create a space, a few minutes before or after, where we go through open topics, probably before. So if anybody on the call has a topic they'd like to bring up, please do, and if not, please comment on the issue and we'll...
E
I noticed that David is not here, but I talked to David last week about his proposal from last week around the transformations and the security boundaries of the syncer, and I had a proposal that I wanted to... I went on PTO, so I didn't get a chance to write it up, but I wanted to bring it up if we have time now. One thing that I was thinking about is adding the ability to store diffs. So you would have your primary object, and you'd have a set of diffs for the underlying
resource, and then when the cluster syncer asks for that resource, it would get the primary resource plus the diff and be able to apply it, so you would get the security boundary that you were talking about. But it seems like you maybe have thought of that, Clayton.
D
This was actually proposed very, very early on in Kube, when we were talking about what eventually became server-side apply: you have the concept that you have a schema, and then different perspectives, different personas, have a set of changes that they want to apply to that schema, to a given object that may belong to different people. So early on we actually discussed something similar, probably not exactly identical: when you create a Kube object, could you also go and create diffs alongside it that then get merged together? We did explore some of those trade-offs, so we could probably follow up and discuss some of what did or didn't work there. One of the most obvious issues was that it was just really complex, and we didn't know whether we wanted that complexity.

And this was my comment, I don't know if I made it to David last time: all clever ideas are worth exploring, but some of the best ideas in Kube were the "this is so stupid that there's no chance it would actually work" ones, which ended up working because you didn't need a second concept; it really reduced the problem space. That was the other factor, and that's how we kind of weighted those: we were all worried about multiple people editing the same object, but in practice it didn't matter.
D
Server-side apply eventually came up with field ownership, which was a subset of the problem, and field ownership was enough to make server-side apply handle it, although we're six years in and only a few clients have adopted server-side apply; controllers are just now starting to think about it. So it's kind of an example of: it was a good idea, but it didn't pass the "too much novel complexity" test.
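(For reference, the field-ownership mechanism described here is what a client opts into today with an apply patch and a field manager. A minimal client-go sketch, assuming an in-cluster config; the namespace, object name, and manager string are placeholders.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running inside a cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Server-side apply: the patch carries only the fields this persona cares
	// about, and FieldManager records who owns them, so two controllers
	// applying disjoint fields to the same ConfigMap don't stomp on each other.
	patch := []byte(`{
	  "apiVersion": "v1",
	  "kind": "ConfigMap",
	  "metadata": {"name": "example-config"},
	  "data": {"owned-key": "owned-value"}
	}`)
	_, err = client.CoreV1().ConfigMaps("default").Patch(
		context.TODO(),
		"example-config",
		types.ApplyPatchType,
		patch,
		metav1.PatchOptions{FieldManager: "example-syncer"},
	)
	if err != nil {
		panic(err)
	}
}
```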
D
So that would be another factor we should think about. We shouldn't be afraid of generating those ideas. We can chat offline or have some discussion and come back to it.
D
The biggest Kube lesson was that, in theory, separating parts of the data out in an object so that every field could have multiplicity, or even David's suggestion of having two objects, a private and a public version, the advantage of that is, well, that's aiming up here; and what really worked was the dumber thing: it's just an object, we have some basic spec and status, and that works for like 99% of problems. The syncer might have special stuff; this is the first time we're talking about that. Kube didn't really go after things that all objects had, except for object metadata.

Object metadata is the place where we put things that all objects should have. Certainly Knative, in the duck-typing stuff, and the traits discussion, all basically boiled down to:
is that a general pattern that we can be okay with, or do we actually need to go up a level? And honestly, annotations might be enough. Some of the things we discussed in Kube: annotations were enough for apply for the first five years, and yeah, sure, they had problems, but they worked. "Don't be afraid to abuse annotations until you really know what the general problem looks like" is kind of one of the lessons I took from early Kube.
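(A concrete instance of that lesson, as a sketch rather than anything from the meeting: kubectl apply stored its merge baseline in the last-applied-configuration annotation for years, and a controller can stash private state the same way. The annotation key below is hypothetical.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical annotation key; the real-world precedent is
// kubectl.kubernetes.io/last-applied-configuration, which kubectl apply used
// as its merge baseline long before server-side apply existed.
const lastSyncedKey = "example.kcp.dev/last-synced-state"

// stashState uses annotations as a scratch pad: the "don't be afraid to abuse
// annotations until you know the general problem" lesson above.
func stashState(meta *metav1.ObjectMeta, state string) {
	if meta.Annotations == nil {
		meta.Annotations = map[string]string{}
	}
	meta.Annotations[lastSyncedKey] = state
}

func main() {
	meta := metav1.ObjectMeta{Name: "example"}
	stashState(&meta, `{"replicas":3}`)
	fmt.Println(meta.Annotations[lastSyncedKey])
}
```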