From YouTube: Kubernetes SIG Apps 20170213
A
Today we have an agenda, and I will go ahead and share it in chat here; this should link you directly to today's. If anyone has additional items to talk about, or if one of our demos dropped off, if you have a demo that you would like to do or are prepared to do, or something you want to share off the cuff, we do have a little bit of time, so feel free to pop in and do that. That's the additional items.
A
So if you're responsible for an area like charts or StatefulSets or those kinds of things, could you please fill in your details? I just dropped that link into chat; it's the second one. In chat they look like they kind of run together, but if you could put your name there, that will help Michelle and me and the PMs keep track, notify folks, know who needs to report on things, and stuff of that nature.
A
The second thing in the announcements I'd like to do is congratulate Helm. It has officially graduated from the Kubernetes incubator, so yay, a round of applause for graduation. I know it came out at the beginning of last week, almost a week ago, but this is our first meeting since then, and so Helm is an official Kubernetes non-incubator project. Congratulations, Matt, and everyone else who works on Helm. And then the third announcement is: this is the last day to complete the SIG survey.
A
So there is another link that I just dropped into chat, which is also in the meeting minutes. If you could take just a few minutes and fill that out, it would help us improve the SIG. With that, we'd like to get on to a few of the Kubernetes core stand-ups, just to let us know what's going on and how things are going. The first one we have listed here is StatefulSets. Do we have anyone who can talk about StatefulSets today?
B
If we don't, I can jump in; I can give a quick high-level summary, with nobody else here who's been making changes, and move on. Am I doing that? There have been a couple of fixes going into StatefulSets to address problems people found during the 1.5 release, addressing things like deadlocking when pods fail in a weird way; that went in just a little while ago. There are also additional tests being added, and there's been some discussion around adding a burst mode as an alpha flag.
B
That would allow them to create all pods at launch, versus going through the serial startup flow. The discussion is happening in comments on an issue right now; I'll drop it into the chat afterwards, as well as taking more feedback through use cases. I know that Erik and the folks on his team have been gathering input as well.
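The serial versus burst startup behavior being discussed can be sketched as follows. This is a hedged illustration, not anything shown in the meeting: at the time, burst creation was only a proposed alpha flag, and in later Kubernetes releases it shipped as the `podManagementPolicy` field used here; all names in the sketch are illustrative.

```python
# Sketch of the two StatefulSet startup modes discussed above.
# NOTE: at the time of this meeting, burst creation was only a proposed
# alpha flag; in later Kubernetes releases it shipped as the
# `podManagementPolicy` field shown here. Names are illustrative.

def statefulset_manifest(name, replicas, parallel=False):
    """Build a minimal StatefulSet manifest as a plain dict."""
    spec = {
        "serviceName": name,
        "replicas": replicas,
        # "OrderedReady" starts pods one at a time, waiting for each to be
        # Running and Ready; "Parallel" creates all pods at launch.
        "podManagementPolicy": "Parallel" if parallel else "OrderedReady",
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": spec,
    }

serial = statefulset_manifest("db", 3)
burst = statefulset_manifest("db", 3, parallel=True)
print(serial["spec"]["podManagementPolicy"])  # OrderedReady
print(burst["spec"]["podManagementPolicy"])   # Parallel
```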
C
Hi, this is Clayton. A quick question: if I want to contribute, or to look through all the use cases and add more use cases from our side, what would be the best place to look at that? A document? Like a running document or something?
B
A lot of this is being tracked. There was a document for improvements to StatefulSets, and I have no idea where it is now; I haven't looked at it in a couple of weeks. Generally, the way a lot of these are being handled is that people are assigned specific issues, which makes a single document difficult, so it's mostly opening issues. I do not have a document handy for the current set of things that people wanted to do next, mostly because it's being handled as issues at this point.
C
Go ahead. Yeah, I have an issue; I'm not sure if I can discuss it here, but I can give a high-level idea about it. Say I have to configure an application which is sharded, something that has multiple shards; MySQL could have shards, for example. For those types of applications, if I scale down, the shards need to be properly shut down. So how is that sort of use case being handled currently, today?
B
It's not handled using StatefulSets today, although it has been discussed. I've seen people who have written Redis controllers that directly program Redis; they're not actually using StatefulSets today either for that kind of ongoing evolution. There are a couple of things StatefulSets support today, and controllers could eventually be built around StatefulSets for that, but for StatefulSets themselves the answer is no.
F
Just a heads up: we have some design work in progress at the moment. It's nearing a point where we might be able to put it up for broader review, around federating StatefulSets across multiple clusters. If anyone's interested, you can reach out to myself and we can get you involved in the review, and hopefully in the next week or two we'll have something for the broader community. Thanks.
E
Yeah, so this is Alex. A small specific update: there's an ongoing discussion about changing the way that we calculate the hashes which are used in the names for ReplicaSets. Probably this week I'm going to send out a proposal for that; I'm not sure whether it will land in 1.6, though. Okay.
I
Alright, can everyone see that? I can see it. Cool, let's get started. This is going to be an overview of storage classes and dynamic provisioning. This is a feature that was introduced a few releases ago, and it's headed into GA in 1.6. So let's get started here. I'm going to start off with a quick recap of the basics: how to specify a volume in your workload and how to use it. Then I'll introduce PVs and PVCs to kind of set the context for where dynamic provisioning fits in. So here are the basics.
I
The classic example is using a GCE PD, right? You've got a ReplicaSet definition, and inside the ReplicaSet spec you define, as part of the pod template, what volume you want to use and how you want that volume to be mounted. In this particular case, we're explicitly specifying a GCE persistent disk by name, and we're mounting it into a location called /data inside the containers inside the pod. The magic here is that Kubernetes takes care of mounting and attaching and everything for you.
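The pod template being described can be sketched roughly like this. The exact disk name from the talk isn't recoverable from the transcript, so "my-disk" and the image name are illustrative; the field names follow the Kubernetes v1 pod spec.

```python
# Sketch of the ReplicaSet pod template described above: a volume that
# names a specific GCE persistent disk, mounted at /data in the container.
# The disk name "my-disk" is illustrative.

pod_template = {
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx",
            # Mount the named volume into the container at /data.
            "volumeMounts": [{"name": "data", "mountPath": "/data"}],
        }],
        # Explicitly reference a pre-existing GCE PD. This is the
        # non-portable part: it only works on a cluster running on GCE.
        "volumes": [{
            "name": "data",
            "gcePersistentDisk": {"pdName": "my-disk", "fsType": "ext4"},
        }],
    },
}

vol = pod_template["spec"]["volumes"][0]
print(vol["gcePersistentDisk"]["pdName"])  # my-disk
```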
I
But there's a problem here, and I love to show off this picture, because it really makes clear in my head what Kubernetes does. To me, what Kubernetes does is act as an abstraction layer above what used to be a bunch of different implementations. Kubernetes takes all that gunk underneath away and just gives you a nice, consistent interface to deploy your applications against. And so, in order to do that:
I
What you want, ideally, is for the API that you expose, the configuration that you deploy against Kubernetes, to be portable. But if you noticed, in that last step we were specifying GCE persistent disks in our ReplicaSet. That meant that if we took that ReplicaSet definition and dropped it on a Kubernetes cluster running on Amazon, for example, it wouldn't work anymore, because there's no GCE PD available there. To sort of address that issue, what we did was introduce:
I
The concept of PVs and PVCs, which you guys are already familiar with. What this did was decouple the consumption of storage from the actual implementation of storage. You have a PV object, which represents some piece of storage that's available for users on the cluster to use; it is created by the cluster administrator ahead of time.
I
So after I create this, my request is going to get bound to the available 100-gig disk PV. And now, in my ReplicaSet, in my workload definition, what I can do is specify the persistent volume claim instead of the exact disk that I want to use. The beauty of this is that the user-facing configs now are portable: I can go ahead and drop this config on Amazon or some other cluster which doesn't have GCE PDs, and it would still work, as long as there's some storage available.
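The decoupling being described can be sketched as follows. The claim name and requested size are illustrative; the point is that the workload names a claim, not a disk, which is what makes the user-facing config portable.

```python
# Sketch of the PV/PVC decoupling described above: the workload references
# a PersistentVolumeClaim by name instead of a specific disk, so the same
# config works on any cluster that can satisfy the claim. Names and sizes
# are illustrative.

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# The pod template now names the claim, not the disk; this part carries no
# provider-specific detail.
pod_volumes = [{
    "name": "data",
    "persistentVolumeClaim": {"claimName": pvc["metadata"]["name"]},
}]

print(pod_volumes[0]["persistentVolumeClaim"]["claimName"])  # data-claim
```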
I
But there was still a little crusty step in there, which was having to manually provision your disks ahead of time. That kind of sucks, especially now that we're in this cloud-native world, where you can dynamically, automatically provision storage on demand on GCE, on Amazon, etc.
I
You can call out to the cloud API and say, give me a disk of this size, and have it within a matter of seconds, and so it would be really neat to have Kubernetes integrated with that. That was kind of the key motivation in bringing this feature to light. So dynamic provisioning and storage classes allow storage to be created on demand.
I
It eliminates the need for the storage administrator, the cluster administrator, to pre-provision that storage; it'll get provisioned automatically when the user requests the storage. Another nice thing is the way that this feature was designed. What you'll notice is that lots of different storage systems provide a bunch of different knobs and parameters that you can tweak that are very specific to their storage provider.
I
And what we didn't want to do was try to get into the business of enumerating every single possible knob that every single storage provider could support in the Kubernetes API, because that was a game that we would never win. So the way that we designed it is to have an opaque pass-through: volume plugins can expose an arbitrary set of parameters that they support, and then cluster administrators can set those parameters as an opaque blob that gets passed through to the plugins on dynamic provisioning.
I
That way, Kubernetes doesn't have to be aware of those. So again, this feature was introduced in Kubernetes 1.2, promoted to beta in 1.4, and is planned for GA in 1.6. Let's jump into what it actually looks like. To start with, what you do is create a storage class definition. The storage class is basically a way for a cluster administrator to say: let me enable dynamic provisioning, here are the different types of storage I want available, and here is what I want to happen under the covers.
I
So in this case, as a cluster administrator, I want to expose two types of storage: slow storage and fast storage. These are arbitrary labels; I can call them whatever I want, and in this case this should be descriptive enough, so I'm going to call them slow and fast. They both point to the GCE persistent disk volume plug-in, and they're going to pass in the parameter "type": the slow one is going to result in a standard spinning disk, and the fast one is going to result in an SSD.
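The two classes being described might look roughly like this. This is a sketch, not the speaker's actual config: the provisioner name and the "type" parameter values (`pd-standard`, `pd-ssd`) are the documented GCE PD ones, and the beta API version matches the pre-GA era being discussed.

```python
# Sketch of the slow/fast storage classes described above. Note that the
# "parameters" blob is opaque to Kubernetes and passed straight through to
# the named volume plugin.

def storage_class(name, disk_type):
    return {
        "apiVersion": "storage.k8s.io/v1beta1",  # beta before 1.6 GA
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": "kubernetes.io/gce-pd",
        # Opaque, plugin-specific parameters.
        "parameters": {"type": disk_type},
    }

slow = storage_class("slow", "pd-standard")  # standard spinning disk
fast = storage_class("fast", "pd-ssd")       # SSD

print(slow["parameters"]["type"], fast["parameters"]["type"])
```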
I
And so, as a user of the cluster, now when I go to request my storage, I again create a PVC, a persistent volume claim, but this time I have an additional annotation in my claim, the beta annotation volume.beta.kubernetes.io/storage-class, with which I can specify the storage class that I want to use for dynamic provisioning. When that annotation is present and a persistent volume claim object is created, Kubernetes triggers the dynamic provisioning flow, and it will find that storage class.
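A claim of the kind being described can be sketched like this. The annotation key is the real beta key used before the `storageClassName` field existed; the claim name and requested size are illustrative.

```python
# Sketch of the PVC described above: the beta storage-class annotation
# selects the "fast" class and triggers dynamic provisioning when the
# claim is created. Name and size are illustrative.

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "fast-claim",
        "annotations": {
            # Pre-GA annotation key; later replaced by spec.storageClassName.
            "volume.beta.kubernetes.io/storage-class": "fast",
        },
    },
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

anns = pvc["metadata"]["annotations"]
print(anns["volume.beta.kubernetes.io/storage-class"])  # fast
```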
I
It looks at what volume plugin the class points to, calls out to that volume plugin's create command, and passes through those parameters. Once the volume is created, it'll create a persistent volume object automatically to represent that new piece of storage and bind it back to the persistent volume claim. Once that binding happens, the user can go ahead and, just as they would with a standard PVC, stick it in their ReplicaSet or other definition as a persistent volume claim.
I
So that pretty much wraps it up; that's dynamic provisioning and storage classes. As for what's next for the storage SIG, what we're working on for this quarter and a good part of the year: local storage keeps jumping up and has been asked for a lot, and it's something we're finally getting to design this quarter, hopefully starting to implement in subsequent quarters. Out-of-tree volume drivers are something that's been asked for a lot, and there's a major effort underway to make that happen.
I
Aligned with other orchestration frameworks, I believe. Containerized mounts were something of a challenge for a while, because what we realized is that operating systems differ a lot, and what may be available on one may not necessarily be available on another. So, for example, when we revved the underlying base image on GCE/GKE, it no longer contained the GlusterFS mounting tools. We realized this is something that can happen a lot.
I
So we need a standard way to fix this, and having those tools containerized and available to Kubernetes kind of solves that problem. That's something we're going to continue to drive forward. Snapshotting is something that we're starting to look at; there are some new volume plugins coming down the line, and metrics work, and of course storage classes going to GA. We'd love for more folks to get involved with the storage special interest group.
I
So flex volumes were a way for us to give an escape hatch to folks who wanted to do out-of-tree volume plugins. Right now, basically, all volume plugins must have their code checked into the Kubernetes core, and they are compiled and built into the primary binaries. This is something a lot of folks, and a lot of storage vendors, have been complaining about for a long time, so our quick-and-dirty solution for that was to create a volume plugin called flex, which is an exec-based model.
I
It doesn't expose attach or detach, and it doesn't expose create and delete, which are the dynamic provisioning commands. Moving forward, what the dynamic provisioning folks decided to do was come up with a different way to expose dynamic provisioning out of tree, rather than going through flex volumes. I could find a link later and post it in, but it's out-of-tree dynamic provisioning; I'll find a link and post it. Basically, what it allows you to do, through an annotation, is specify another provisioner.
I
My understanding is it's another controller that sits there and just listens for PVC requests with a certain annotation; when they exist, it will go ahead and trigger the dynamic provisioning process for that particular plugin. As for moving flex forward: this quarter we're going to add attach and detach for 1.6. Beyond that, I'm not sure whether we're going to expose create or delete in it; flex in its current form is probably not what is going to be our ultimate out-of-tree solution.
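The external-provisioner pattern being described can be sketched as a toy loop. This is an illustration only: the in-memory list stands in for a watch on the API server, and the class name is hypothetical; a real controller would use the Kubernetes API and the actual provisioner contract.

```python
# Toy sketch of the out-of-tree provisioner pattern described above: a
# controller watches for PVCs whose storage class it owns and provisions a
# PV for each. The in-memory "watch" and class name are illustrative.

MY_CLASS = "my-external-storage"  # hypothetical class this controller owns

def reconcile(pvcs):
    """Return a PV for every claim that names our storage class."""
    pvs = []
    for pvc in pvcs:
        anns = pvc.get("metadata", {}).get("annotations", {})
        if anns.get("volume.beta.kubernetes.io/storage-class") != MY_CLASS:
            continue  # some other provisioner's claim; ignore it
        # "Provision" a volume and bind it back to the claim.
        pvs.append({
            "kind": "PersistentVolume",
            "metadata": {"name": "pv-for-" + pvc["metadata"]["name"]},
            "spec": {"claimRef": {"name": pvc["metadata"]["name"]}},
        })
    return pvs

claims = [
    {"metadata": {"name": "a", "annotations":
        {"volume.beta.kubernetes.io/storage-class": MY_CLASS}}},
    {"metadata": {"name": "b", "annotations": {}}},
]
print([pv["metadata"]["name"] for pv in reconcile(claims)])  # ['pv-for-a']
```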
F
You know, in the cloud, all virtual machines are implicitly connectable to all that storage. Do you guys have any thoughts on supporting those kinds of environments where there are multiple storage pools, if you like, and multiple storage networks, which may be connected to different pools from different nodes? In essence, there's an impact on scheduling there, obviously.
I
Yeah, so I think the baby steps towards that are going to be the local storage discussions, the designs that are currently underway. What we've realized is that a lot of folks, especially on-prem, have very beefy machines with a lot of storage attached locally to the machine, and they want to have some sort of way to favor where their workloads end up. You know, if you have a workload that's using storage on node A:
I
It should end up landing on node A; there needs to be some sort of data gravity. So local storage, the design that is underway right now, kind of works towards that. But I think what you're asking for is a little bit different, which is: you have a larger cluster to which you can deploy compute anywhere, CPU and memory are available equally everywhere, but storage is not necessarily reachable across the entire cluster; you may have it available in some parts of the cluster. Is that right?
The.
F
Exactly, and I mean the sort of canonical example is a data center: it might have, you know, a bunch of racks, and each rack might have rack-local storage, and there's implicitly a SAN in each rack which connects all of the nodes in the rack to that storage. And then you might have some external storage systems, an EMC box or a NetApp box or whatever, or multiple of them, and then each one of those has a SAN associated with it.
F
You know, maybe more than one SAN, and those SANs are not connected to all the nodes; you may have either a subset of nodes in each rack attached to each SAN, or something like that. I'm sort of waving my hands a bit here, but the model is actually not as complicated as it sounds. The bottom line is you have more than one network, and more than one logical pool of storage associated with each network, and not all nodes are associated with all of the networks.
I
Interestingly enough, I think Eric's point on this was: you can't have your scheduler optimize for every given parameter; there needs to be some parameter which is a given. So in the Google DCs, I think, for example, the network is treated as essentially unlimited in quantity, and that makes life a lot easier for the scheduler. Things get harder if you constrain that as well, which is the case when your backbone is not strong enough to, you know, have network storage running over it.
I
I'm trying to remember, but I believe the way the algorithm is written is that it will try to look at the storage class annotation on PV objects, and if there's one that already exists and is available, it will match it. Don't quote me on that; try it out. But I believe that was what we were originally intending the design to be.
I
Right, yeah, so the last few weeks have been really great for us. We were the first project to graduate from the incubator to a full Kubernetes project. It was fun; it was quite an experience, honestly. I'm trying to work on a blog post to sort of explain how the project grew through that process and why I think it's been a really good process for us. But this week our attention has turned to the 2.2.0 release of Helm; I've just started working on some of the release notes.
I
The official release date is going to be tomorrow, happy Valentine's Day. As I started working on the release notes, I was really struck by the fact that this release has marked a very, very interesting trend in the community: for many releases the core maintainers have been contributing the vast majority of the PRs, but things are really starting to turn, and we've been really excited to see lots and lots of PRs coming in from the community. I don't have a total count yet, but that's just really exciting for us.
The community really has been growing at a pace with what we would hope to see in a successful open-source project, and that's been really encouraging to us. So tomorrow, look out there for the 2.2.0 release. After that, you know, we'll switch focus for a couple of weeks to making sure that we don't need another patch release, and then we'll be off on the 2.3 roadmap.
I
As for features in 2.2.0 right now, there are a whole bunch, actually. The one that we were considering sort of our headline feature is the one that Michelle has been working on for quite a while, which is the introduction of a framework for running chart verification tests inside of your cluster. But there have been several other big PRs that have made their way in, including improvements for operators. That's all I got, thanks.
M
Thanks, Matt. So, charts repo update: we've been trying out a different way to actually get PRs reviewed, so about a week ago we published a review process using a different workflow; I'll go ahead and post the link. Basically what we do is take a bunch of PRs, focus on them for a week, and then do a stand-up between all the chart maintainers. So far it's been working out.
N
So, if you guys can hear me: I'm Charlie, one of the maintainers of kompose (everyone pronounces it differently). Anyway, we just did a release 17 days ago. It was one of the biggest; kompose saw something like 196 commits since the last release, so we're pretty proud of that. We knocked out a ton of bugs and cleaned up everything, including CLI parameters, adding validation for arguments, and we switched from urfave/cli:
N
to Cobra, spf13's Cobra, and that really helps in terms of development and getting a lot of new features put in. We added bash completion support, which is great, and switched to outputting YAML instead of JSON by default. So now let me quickly share my screen; hoping this works, let me go fullscreen, one sec.
N
Can you all see my screen? Okay, perfect. So I just want to give a quick rundown of the latest developments for kompose. If you don't know what kompose is: it's basically a conversion tool for converting your docker-compose file to Kubernetes resources. So we take a compose file, here an express master, a slave, and the guestbook demo, and with it you have all your files. I'm not going to go too much into this topic; you just run kompose up and it puts it into Kubernetes.
N
There it goes; it's starting to come up. Anyway, so that's kompose in a nutshell. We made YAML output the default, and I like that Helm has come out of the Kubernetes incubator, because kompose can convert to Helm charts as well. We also added OpenShift support, mainly to support those objects; I know that's more on our side, but I'm not going to go into that. And then future development is really knocking out bugs and getting to a 1.0 release, getting everything stable at the moment.
N
We do have a few bugs in terms of provisioning volumes and PVCs. You have to have a persistent volume set up before you deploy your PVC, so we need to basically improve the user experience there. That's it for me; that's really highlighting all the features that we have so far.
L
Can I start? Yes, please go ahead. Okay, cool. Hi, I'm from the AppController team. So basically, in the last two weeks, the main accomplishment was finally getting end-to-end functional tests working on Travis, so we'll hopefully let those guard merges and have a more developer-friendly experience for anyone who wants to contribute.
L
We also merged service account support for AppController, like, today, I think, and I think that is the last of the basic native Kubernetes objects that we needed to support. So in this regard the AppController is more or less complete if you need any native Kubernetes objects supported in AppController.
L
You can find the reference implementation and the current things we're working on at the GitHub link. What's on the agenda for us is writing a spec, and we plan to present that, I hope, at the Helm dev syncs and SIG Apps; that is going to be a group effort. So if you're interested in contributing towards that, you can post on the mailing list or send an email.
O
We also just demoed last week at SIG Apps: Antoine presented our reference implementation and its Helm integration, and if you missed that, I'm pretty sure the recording will be available shortly. We've continued work internally at CoreOS on a beta API implementation for quay.io implementing this API, so that people will be able to push and pull charts, like a docker push, to quay.io, and then:
A
Right, thank you. For anybody wondering: yes, we did have the demo last week, and if you have the notes up, I will paste another link in here to last week's episode. The recording is already available, so you can jump over to YouTube and watch it; I just dropped the link in the chat for anybody who wants to go look at that. And so the last item we have before open discussion is Deis Workflow. Do we have anybody on from Deis Workflow to give us an update?
P
I could say something. You guys hear me? Sure can, yeah. We don't have a lot to report. I guess something that Matt Fisher did a few months ago finally came to fruition, which is we upgraded the registry proxy over in the kube add-ons to be a little more battle-tested, and we're using that now. Not much else to report. We do have something we want to demo for you guys, but it'll probably take another week or two, so nothing to say about that yet.
A
So you're looking to get a slot. We have open slots coming up, not next week, because we're taking it off for a holiday, but on February 27th, or our next open slot after that, which is actually March 6th. You think you'll have something by then? Certainly; either of those days works. So I'll put you down for March 6th on the agenda. Will you be giving:
A
So it is now on March 6th as the second demo, so be prepared, or if you're not going to be, let me know and I'll set some reminders for you before it comes up. Alright, so thanks for the Deis Workflow update. Now we've got about 15 minutes left: do we have any open discussion topics, anything we want to talk about? And by the way, next week, I'm sorry.
I
There are people who are doing multi-tenant stuff, and it is possible, but it takes some very careful configuration, with multiple Tiller instances running. You can feel free to ask about that in the Slack room, and the people who have been working on it, I'm sure, will jump in. In fact, if anybody in this room now is one of those, please feel free to jump in. But that's going to be one of our major foci for the next 2.3 and 2.4 releases.
I
To get to one of the things we learned from going through the incubator process: there has to be a link from your README to your roadmap, and that will get updated this week as we solidify what's going to go into the 2.3 release. So yeah, it's a good time to ask and a good time to jump in, if you'd actually like to weigh in on things and give your own input on what you're looking for and what your needs are. So.