From YouTube: Kubernetes SIG Apps 20180205
A
Welcome to the February 5th, 2018 Kubernetes SIG Apps meeting. My name is Matt Farina. To get everybody going, I'll drop the agenda into chat here; that's the document for our agenda and notes. The one announcement we have, again as a reminder: the Helm Summit is coming up in just a couple of weeks. If you want to go, you can book your tickets there and get hotel information.
B
Cool, can you see my screen? Sure can? Okay, great. So thanks, everyone, for coming. This talk is going to be pretty quick, so I'm going to jump right in. We think data operations like backup, protection, and migration have traditionally been performed at the infrastructure level, for example by taking individual volume-level snapshots. This infrastructure-level approach, we think, is wrong for stateful applications in the cloud native world. It doesn't allow developers to reason about application-level semantics when performing data operations.
B
Instead, we strongly believe that these operations should be performed at the application level, since it is simpler, more flexible and, most importantly, preserves application semantics. An application-first approach leverages battle-tested tools that understand and handle application-specific details. It also supports more complex, distributed applications, as well as several other use cases. So with that, today I'm presenting Kanister, an open source framework we've developed based on user feedback that allows developers and operators to codify application-level data operations and make those operations easy to perform.
B
We hope that this will resonate with some of the other ongoing discussions in this SIG, and I look forward to interacting further around this. So, first, a little bit about me and why: I've been in the cloud storage space for some time. My first job was at a startup called Maginatics, where we worked on a cloud-based file system that was acquired by EMC. After the acquisition I went to Dropbox, where I was on the core team for the databases-as-a-service platform, doing many data operations; Dropbox is one of the top five customers of MySQL.
B
So there are several use cases that are only possible when working at the application level. When you're infrastructure agnostic, you can easily move data between data centers or even between different cloud providers. If you want to move data from production for testing, at the application level you're able to perform in-flight transformations of your data, such as masking PII or downsampling to reduce your overall test set size.
B
And finally, many complex applications require specific tools to perform data protection. You can think of big distributed applications: to have point-in-time consistency, you need to have a semantic understanding of the data. So Kanister tries to address all of these use cases. We follow the operator pattern, which means there's a controller and CRDs, or custom resource definitions. There are two types of CRDs in Kanister. One is a Blueprint, which encodes specific instructions for performing operations. The other is an ActionSet.
B
The ActionSet collects all the inputs to an operation, and once it's created, the controller will go and perform those operations. So here I'm going to walk through what that looks like. Let's say you have a Kubernetes cluster set up with some kind of application deployed, the Kanister controller running, and the Blueprint already installed. If I want to perform an operation, I can just use kubectl to create an ActionSet. Because it's a CRD, it looks just like any other Kubernetes API object.
B
In this case, we're going to run KubeExec, which is a Kanister function similar to kubectl exec. This means that we can run any operation inside containers that are currently running. Alternatively, Kanister has a function called KubeTask, which will let us create a new pod and then perform operations like filtering, that kind of thing.
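The two Kanister functions just described, KubeExec and KubeTask, could be sketched in a Blueprint phase list roughly like this. This is only an illustrative sketch based on the description above; the field names, pod, container, image, and scripts are assumptions, not verbatim Kanister syntax.

```yaml
# Hypothetical sketch of Kanister phases -- all names are illustrative.
phases:
  # KubeExec: run a command in an existing, running container,
  # similar to `kubectl exec`.
  - func: KubeExec
    name: takeBackup
    args:
      namespace: default
      pod: mongo-0
      container: mongo-sidecar
      command: ["bash", "-c", "consistent-backup.sh"]
  # KubeTask: spin up a new pod to do auxiliary work,
  # e.g. filtering or transforming the data.
  - func: KubeTask
    name: filterData
    args:
      image: example.org/filter-tool:latest   # hypothetical image
      command: ["filter", "--mask-pii"]
```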
B
So finally, when this action is done, the Kanister controller will go update the ActionSet with the resulting artifacts and the status of the job. So that's kind of the top-level diagram, but what I think is really useful is a real demo, so I'm going to switch over. This is pre-recorded, but these were live actions run, so let's get started. Let's see what's in our cluster right now: we have the Kanister controller running and we have MongoDB installed. So, to make this more interesting...
B
Okay, so now, one thing that Kanister allows is configuration through ConfigMaps and Secrets. In this case we're going to specify a ConfigMap that has the bucket where we want to push our data. Here we're just going to push it to a bucket I pre-created, and we'll create that ConfigMap. Cool. Now I'm going to add some secrets, which I won't show here, but these give us access to that bucket. And finally, I'm going to create a Blueprint, which will give us instructions on how to perform these operations.
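A minimal sketch of what such a ConfigMap might look like; the name, namespace, key, and bucket value are made up for illustration, not taken from the demo.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kanister-bucket-config   # hypothetical name
  namespace: default
data:
  # Bucket where the backup artifacts will be pushed; value is illustrative.
  bucket: my-precreated-backup-bucket
```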
B
So the long link there is where we actually pushed the data; this is an artifact. We can also see that there are several phases... well, there was one phase that executed in this case: we took the consistent backup, and that was completed. So in addition to storing the output inside the ActionSet, we can also check the logs of the controller, which contain the full output from whatever operation we performed.
B
There is a lot here, but we can see in the middle that we've completed an upload and we've pushed the tarball of the backup to that bucket. Of course, that's only good if you can recover the data. So first, let's destroy our data. Instead of simulating a crash, I'm just going to drop everything from the database.
B
And now I'm going to use kanctl, a Kanister tool which allows me to chain ActionSets. In this case I'm going to chain off my previous backup ActionSet, and we perform the restore action. What this does is create a new ActionSet with restore, and it passes all the output artifacts from the previous ActionSet into this new restore ActionSet. Cool. So now we can check the status on that, and we can see that the final phase has completed. Just to make sure I'm not lying...
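Conceptually, the chained restore ActionSet that gets created might look something like the sketch below. This is not exact Kanister schema: the API group, blueprint name, artifact key, and path are all invented for illustration; the point is only that the backup's output artifacts become the restore's inputs.

```yaml
apiVersion: cr.kanister.io/v1alpha1   # assumed API group
kind: ActionSet
metadata:
  name: restore-from-backup           # hypothetical generated name
spec:
  actions:
    - name: restore
      blueprint: mongo-blueprint      # hypothetical Blueprint name
      # Output artifacts from the earlier backup ActionSet are passed
      # in as inputs to the restore action.
      artifacts:
        backupLocation:
          keyValue:
            path: s3://my-backup-bucket/backup.tar.gz   # illustrative
```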
B
We can go and check what's inside of the database. So we do a find, and we can see that we've successfully recovered our precious data about Roy's diner; we know it's Hawaiian cuisine. So that's the demo. Now I just wanted to show a little more detail on what a Blueprint and an ActionSet look like. This is a Blueprint. It looks just like a native Kubernetes API object, because it's a CRD. A Blueprint contains a list of actions, the operations you can perform.
B
In this case, we have one action, called backup. Any action can have one or more phases; in this case we have one phase, which is just run through KubeExec. The first three parameters to KubeExec are where to run: the namespace, pod, and container. The remaining parameters are the command we want to run; in this case, we're using bash to invoke a consistent backup. The corresponding ActionSet is also a CRD, so again it looks just like a native Kubernetes API object.
B
In this case, you can specify sets of actions that will be performed in parallel. Here we're going to do a backup: we're going to reference the Blueprint we created before, and we're going to perform it on a StatefulSet; the current options are StatefulSets or Deployments. We recognize that there are higher-level groupings that this SIG is working on, specifically Applications, and we want to work with the community to figure out how to incorporate that into Kanister.
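A corresponding ActionSet, as described, might look roughly like this. Again a sketch only: the API group and all names are placeholders standing in for whatever was used in the demo.

```yaml
apiVersion: cr.kanister.io/v1alpha1   # assumed API group
kind: ActionSet
metadata:
  name: mongo-backup-1                # hypothetical name
spec:
  # Each entry in this list can be performed in parallel.
  actions:
    - name: backup
      blueprint: mongo-blueprint      # hypothetical Blueprint name
      object:
        # Current options are StatefulSets or Deployments.
        kind: StatefulSet
        name: mongo
        namespace: default
```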
B
So with that, this is the end of the talk. We'd love to get your feedback on Kanister. If you have feature requests, or if you want to contribute, these are all the links you need: we have GitHub and Slack. Feel free to reach out to me personally; there's my email and my LinkedIn. With that, thank you. Any questions?
C
B
That's a great question. I think one of the key points of running things at the application level is that you have to integrate closely with both the application and application-specific tools. So in the example I ran, we ran the backup directly in a sidecar; the reason for that was that we actually wanted direct access to the Mongo volumes, and then we used specific tools.
C
B
So that's not part of Kanister itself, but you could imagine just using the Kubernetes API to go and create these things. If you have your own scheduling mechanism, you could create a cron which would go in and run these ActionSets, but that is kind of outside the purview of Kanister. That would be a good feature, for sure.
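For instance, the cron idea above could be sketched with a stock Kubernetes CronJob that periodically creates a backup ActionSet. This is a sketch only, not something that ships with Kanister: the image, the mounted manifest path, and the required RBAC (a service account allowed to create ActionSets) are all assumptions.

```yaml
apiVersion: batch/v1beta1    # CronJob API version at the time of this meeting
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"      # every night at 2am
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          # Assumes a service account with permission to create ActionSets.
          containers:
            - name: create-actionset
              image: example.org/kubectl:latest   # hypothetical image
              command: ["kubectl", "create", "-f",
                        "/manifests/backup-actionset.yaml"]
```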
E
In brief, all we wanted to assess was whether there is interest from this SIG in machine learning workloads, and potentially in participating in a working group, to make sure that we're building the right primitives for Kubernetes to help run these workloads well. There's a lot of interest in ML in other SIGs and other communities, particularly SIG Big Data. There is a new community opening up called Kubeflow, which is kind of a set of tools to help you build machine learning pipelines on top of Kubernetes.
E
It includes things like Airflow, TensorFlow, and Spark. There's an OpenShift working group that is focused on machine learning on OpenShift. We're not really trying to focus on any of those things; we're trying to focus on making sure that we're building the right things inside of core, and as extensions, in order to support machine learning on Kubernetes.
E
Okay, so really that's all we wanted to discuss, just to see if there was interest from this SIG, and it seems like there is, at least one or two people. So we'll follow up and take an action item, whether we do a working group or however we decide to get together, because a lot of the concerns for machine learning are cross-cutting: they involve SIG Node, they involve SIG Network.
D
I have a question, though: how does this differ from the work that's going on with efforts like Spark on Kubernetes? You clearly mentioned the Big Data SIG. Machine learning is such an overloaded term in some ways; what specifically, what kind of specific details, would this new organization be attempting to address?
E
Well, okay, so for Spark in particular: Kubernetes came from a place where we started with serving workloads. We've made some headway into addressing stateful workloads and data processing workloads, but if you look at the core batch mechanisms like Job, they're really not great for high-performance data processing. Spark is kind of where we made an inroad there. Now you have MLlib for Spark, which is definitely something that would be of interest for linear regression, for instance, or batch inference, but there are also other types of machine learning.
E
TensorFlow, for instance, is generally deep neural nets, deep learning, and that is its own set of challenges. With Spark we're trying to address things like data locality, making sure that you can co-locate the Spark executors next to the data if you're running HDFS, but it's a different set of challenges, because those workloads are still primarily I/O bound. When you get into machine learning, particularly training...
E
...now you start having CPU-intensive workloads, and potentially memory-intensive workloads as well, so the set of concerns might include things like cache interference, NUMA awareness, these types of challenges. And we're not trying to be opinionated. Kubeflow is somewhat more opinionated: it's "here's a set of tools that we think are valuable, that the community thinks are valuable, that we're going to invest in to help you get started." They're not really saying this is the only way you should do it, or being so opinionated to the point...
E
...that they're not listening to users' needs or lacking empathy, but there's a little bit more opinionation than the level of concern we're trying to address. If you want to use Caffe, we want that to work well for you. If you want to use PySpark, we want that to work well for you. If you want to use TensorFlow, we want that to work well for you. So we're trying to make sure we're building the right things, and trying to make sure that we're addressing the concerns of our actual users.
D
Okay, so if I understand you right, then this is pushing beyond the initial thrust of Kubernetes, which is running, like you said, jobs or workloads that are based around processes and whatnot that look more or less like standard OS-type stuff, and you really want to address the specifics that are coming out of analytics and machine learning workloads. Right? Cool.
E
But with ML we're trying to really focus on ML and not so much on data processing workflows. Data processing is generally needed, though: for a typical ML practitioner, staging their data, sanitizing their data, ingesting their data are all steps that they're going to have to execute in a machine learning pipeline to do any type of training, let alone prediction. We think we have the right things for prediction serving, though really we've only tested it on TensorFlow.
E
At this point there are some other people out there using other forms of prediction serving, but serving is still basically a stateless serving workload, so we think we handle that well. The training components we would definitely want to have a discussion in the community around, and we want it to be a discussion that's agnostic to whether you're running it on AWS or Google Cloud or OpenShift, and agnostic to whether you're running TensorFlow or Caffe or Spark ML with MLlib.
D
E
I mean, there are definitely challenges with data processing; data gravity specifically is one thing that's a large challenge in the space. And then hybrid deployments, where you might have data that spans multiple regions: trying to schedule your workloads well there, or maybe even migrating data from one region to another, or migrating from an on-prem deployment to the cloud. These are all challenges that are kind of in the data processing space, but are also a large challenge for machine learning practitioners.
D
E
We don't want to have a SIG, because, at least initially, we don't want to have a SIG unless it becomes something where people want a type of organization that lasts forever. We just want to bring people together for a temporary working group with the mandate of assessing the current suitability of Kubernetes for machine learning: finding where the friction points are, where the pain points are, where the limitations are, and at least coming up with a path forward to address them.
E
We want SIG Node to participate because, again, we're running potentially CPU-bound workloads, and ones that may benefit from isolation, for CPU or memory, so knowing about the kubelet and interacting with the kubelet is important. Also, I think the hardware accelerators rise in SIG Node for the most part. And then SIG Network, again, because network and storage bandwidth can be a huge limiter for distributed machine learning. So getting all of those people together in a working group to have a conversation is kind of the overarching goal.
A
It might just be worth noting that you're probably not going to see that many new SIGs come up. In fact, there's been talk about consolidating SIGs because there are just so many; there are maybe thirty of them now. So look at sub-projects of existing SIGs, the way we have several sub-projects (many of the other SIGs don't). There might be multiple SIGs with cross-cutting concerns, and then working groups for when you want SIGs to cross with each other.
A
SIGs tend to be extremely long-running, like years and years and years, whereas a working group might come together to solve a problem; SIGs will end up owning the code at the end of that, and then the working group can dissolve because it's not needed anymore. It's kind of there to solve an issue that's there today, but maybe not there tomorrow.
E
The other thing to note is that a working group or a SIG like, say, Big Data is a good example: SIG Big Data came around for a while, and then, when there was no further work to do, it was kind of put on hiatus, and then last year they brought it back for the Spark work. That does happen.
E
With working groups, you can bring one back, but usually a working group has a fixed goal and a fixed outcome that it is trying to achieve, and once it's achieved, you would just wind it down. But there's no reason to say you couldn't reopen the working group if the need arises later down the line.
A
All right, thanks. Is there anybody else, any last questions? If you do have interest in this, feel free to also reach out offline, or if you know anybody who does, please reach out to them and let them know as we get into this, because the more input, the better something's made, the better it's tested, and the better we're able to look into it.
A
What is the scope of a SIG, and how does one SIG's scope compare to another SIG's scope? Adnan and I are happy to go off and start doing some of this, but it shouldn't be done in a vacuum, right? Is there anybody here who's interested in helping us work on it? It's probably a couple of weeks before we get started, with the Helm Summit and some of the other stuff coming up that we're busy with.
A
Okay. I wasn't even going to start on any of this until after the dev summit, because I've got to come up with a presentation to talk about there, along with a whole bunch of other stuff, and Adnan's busy with some face-to-face stuff; we're just kind of busy. So I was thinking that we'd probably start in early March, after the dev summit. Would you be interested in helping then? "Yeah, early March would be great." Okay. Is there anybody else interested?
A
Probably not. We're just going to have to craft a document that says, here are some of the different processes we have, and here are some of the ones we think we need, and so we're looking for folks to help craft those processes. I'll be honest: most people probably don't know how SIG leads become SIG leads, and we should probably talk about that.
A
Maybe document it, and maybe change it. I would really like input on how we should change that over time, because there are probably a lot better ways. The way we started is just that Michelle and I decided, hey, there's no place for this, can we start something? And because we were interested in it, we became the SIG leads. Well, things have changed a lot in the last couple of years, and so we should look at how we can do better, and I want input from folks.
A
So we'll get input and craft something up, and then we can bring it before everyone and probably debate it, discuss it, clean it up, and then probably just do a pull request to the community repo with what that is. I'm hoping it's just a few hours of everybody's time to talk about what we think it should be and craft some language. This isn't something crazy formal, and many of us are busy, so we don't want it to be, but just to make sure that the i's are dotted and the t's are crossed.
A
So think about what you'd want if you were doing this in a SIG charter: what are the things that you think should be in there? What have you learned? Matt Fisher, you're already first in line; thank you for adding yourself to the notes. Just think about what we should have in there, and even if you're not interested, if there are particular processes that you think should be covered by the charter...
A
...let us know and share them. SIGs are not just a few people doing stuff; it's everybody who shows up and is involved. So if you're here, you're involved. This is one of those things where your two cents are important, so please speak up. It's your SIG.
D
A
No, it's being worked on. It'll initially come out of the steering committee, probably, but as I understand it, their current goal is that they are more interested in wanting this stuff documented than in telling everybody how to do it. Different SIGs will have different ways they do things, and they understand that. They're just asking for folks to maybe get a jump on it, and SIG Architecture is going to do that, and SIG Contributor Experience is going to do that, and hopefully, when we start in two or three weeks, we can grab some of their stuff as good starting points.
A
So there's that. Then the next thing that I'll throw onto this is that they're also talking about coming up with a process for SIG-related repos. Right now, you've got the kubernetes organization, and the things under kubernetes are production-grade, or grandfathered in, or big-deal things. You've got things like Minikube and Helm and some stuff that's been broken out of kubernetes that's starting to go there, but it's things that are not experiments or toys, or where you try something new out. And so they're...
A
...looking at some new processes to come in, and some documents have been floating around. I don't actually know where the link is right now, or the current state of it, because it's been rewritten by the steering committee, to look at: all right, there are these things that are part of kubernetes, and then there are these maybe experiments and stuff that SIGs do. They've talked about different SIGs having their own orgs; they've talked about having one org that all the SIGs' repos go in, with automation to get stuff in there and manage it.
A
A lot of it has to do with manageability. Oh, by the way, whoever is doing the notes and cleaning things up: thank you. A lot of this is about the manageability of that stuff. And then they're looking at things that are maybe kubernetes-related; they're not owned by the kubernetes organization or anything, but they're related, and I don't know exactly how they're going to get into the differences.
A
If it's under the kubernetes umbrella, it obviously has to have a CLA and a number of other things on it, but there is an opportunity for this SIG to say: hey, there are some projects we would like to do as a SIG that are related to kubernetes, experiments and whatnot, that we want to do; what kinds of things would we possibly want to go in here? There's a big push to say most stuff should be done out in the ecosystem: do it on your own.
A
It doesn't need to be related to kubernetes, and then, if it's worthwhile and lots of people are using it, maybe it's something that can be folded in, if that makes sense. And so there's kind of this gray area of what makes sense to bring in: something that's an enabler to others, something related to kubernetes but not really an ecosystem project. What would those look like?
A
What should we talk about bringing in and maybe doing in one of these repos? I'm not suggesting anything at this point, and the process is definitely not outlined yet. I think SIG Testing is testing a little bit of this as they iterate on it, but they're working on that, and so we should probably keep in our minds: what are the kinds of things that might make more sense to be brought in because they enable others?
A
One of the things that was brought up to me last week is that there's some stuff that they've been doing at Google, I think it's in another Google Cloud Platform org right now, that's about making it easier for people to write their own controllers. I don't know about you, but going in and writing a controller isn't necessarily an easy thing, right?
E
What Kubebuilder does is just do all of that for you, because ultimately all of the controllers do pretty much the same thing. So with Kubebuilder, you run init first and it sets the project structure up, you put your types in, and then you run the code gen, and all you have to do is fill in the reconciliation loop function; all the other stuff is just done for you.
E
So that's one. Another one that might be a fit, and I think Anthony would be better to speak on it than me, is Metacontroller, which is being developed. If you have a very simple controller and it's just doing basic transformations, it's a way that you don't really have to do the code gen at all; it pretty much handles it for you, and your entire controller is basically a webhook.
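To give a feel for that model, a Metacontroller definition is itself just an API object that points at your webhook. The sketch below is based on my understanding of the early Metacontroller API and may not match it exactly; the resource names, group, and webhook URL are hypothetical.

```yaml
apiVersion: metacontroller.k8s.io/v1alpha1   # assumed API group
kind: CompositeController
metadata:
  name: mything-controller                   # hypothetical name
spec:
  parentResource:
    apiVersion: example.org/v1               # hypothetical CRD group
    resource: mythings
  childResources:
    - apiVersion: v1
      resource: configmaps
  hooks:
    # The entire controller logic lives behind this webhook: it receives
    # the parent object and returns the desired children.
    sync:
      webhook:
        url: http://mything-webhook.default.svc/sync   # hypothetical service
```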
E
So those were two of the kinds of things that we were discussing that might be appropriate, if there's interest in the SIG, because they're really made to lower the barrier to entry for tool developers, and for people who want to generate their own controllers and build things that are going to be part of the kubernetes ecosystem. One of my kind of personal experiences, coming from a Mesos background...
E
...is that writing a Mesos framework could be on the order of 30 to a hundred thousand lines of code, and we don't want that to be the case for kubernetes. If it requires that much code and most of it's generated, that's one thing, but we don't want people to have to continuously develop the same code again and again and again if there's a known best practice that we can auto-generate.
A
Yeah, that makes sense. So we don't have a process for a repo yet, and it's coming, so I just wanted to seed folks' minds: think about the difference between what's out in the ecosystem and whether there's stuff we should have in here, maybe one or both of these or more things, what kinds of things, and how we should do that as a SIG. Just to kind of seed your thinking on it, for when that process does come out, hopefully in the next few weeks.