From YouTube: Kubernetes SIG Apps 20180416
A: All right, to kick this off I will go ahead and share the meeting minutes and agenda into the chat here. If you go to this document, you'll see we've got planned out (or space for planning) a few more weeks, so scroll down until you see the highlighted green April 16th; that's an easy way to find it, since we highlight the current one. We have a couple of quick announcements to get going. First, there's the Kubernetes Application Survey that the App Def working group is putting on; the deadline is currently this coming Thursday.
A: So if you haven't taken it, and you deal with apps, or you know people who would be great to give us their two cents, please share it, please take it, please feel free to post it in every venue, because this data, which will be publicly available to everybody afterwards, can help us all; the richer the data set, the more useful information we have to pore through. The other thing is that the 1.11 feature freeze is April 24th, coming up here in eight days, so we wanted to give everybody notice: if there's a feature you need to deal with, the feature freeze for 1.11 is coming up shortly. And so, with that, we have a demo of Skaffold. Matt, are you on to give us the demo?
D: So we released this tool called Skaffold about a month ago. It's open source. Can everyone see my screen? Yeah. So right now it's under the GoogleCloudPlatform organization on GitHub. Skaffold is a cool tool that lets you develop locally, iterate on your development, and build and push images to a cluster, either one that's running locally or one that you've configured and provisioned in the cloud. And so we developed it.
D: The way you get this working is you make a skaffold.yaml file; mine looks like this. You configure the build and the deploy stanzas. My build is building two images, one for my worker and one for my web, and then it's deploying (here's my deployment manifest), and it's actually running against my local Minikube. You can run Skaffold in either `skaffold dev` mode or `skaffold run` mode.
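A minimal skaffold.yaml along the lines described might look like the sketch below. The image names, workspace directories, and manifest glob are hypothetical stand-ins, not the ones from the demo, and the schema shown is the early (v1alpha2-era) layout:

```yaml
# Hypothetical sketch of a skaffold.yaml: a build stanza producing two
# images and a deploy stanza pointing kubectl at local manifests.
apiVersion: skaffold/v1alpha2
kind: Config
build:
  artifacts:
  - imageName: example/web        # hypothetical image name
    workspace: ./web              # directory containing its Dockerfile
  - imageName: example/worker
    workspace: ./worker
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                  # deployment manifests to apply
```

With a file like this in the repo root, `skaffold dev` watches the source and rebuilds/redeploys on change, while `skaffold run` does a single build-push-deploy pass.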
D: I've got the code up in Visual Studio Code right now, so while this is booting up, we're gonna fix a bug in my code: right now it says "cloud turtles" and we want this to say "cloud cats". Maybe I should have run this first; it takes a little bit for the first build attempt under `skaffold dev`.
D: So if you look at the repo, we have some of the features listed out.
C: Yeah, I was just looking at them.
D: Okay, here we go. One of the cool things about Skaffold is that there's no server-side component; it's all running locally on your machine. Like I said, we detect changes in source code and then automatically can build, push, or deploy your images. We handle tag management, so you don't have to worry about tags anymore.
D: This is one of the things in my colleague's project that we're gonna be looking at: he was doing everything with some bash scripts and some ugly templating with the tags, and Skaffold basically replaces all of that; it makes it much cleaner. And we support existing tooling and workflows, so we didn't want to be prescriptive about it.
C: One nice feature to note is that in the latest Skaffold release we have support for non-Docker builds. We're still not generating Dockerfiles for you; you still supply the Dockerfiles. But we also have a Bazel builder, which can build images using the docker rules in the Bazel repo, and we're hoping to extend to more kinds of non-Docker builders, like img and all these other tools that help you build images without a Docker daemon.
H: So basically, you mentioned the CI/CD process: how do you integrate the flow? I don't get the flow if I want to have it in my build pipeline. I get the use case when you're developing and the image is built automatically, but what is the flow when I'm building in a CI/CD system?
C: You have a GitHub trigger that triggers Google Cloud Builder, and then somewhere in your pipeline you run a `skaffold run` step, which just takes your repo, builds the image, pushes it, and deploys it to your cluster. After that, you can have steps that, say, check whether the deployments are up, or do whatever kind of extra steps you need. So we envision Skaffold as being a step in a Jenkins or a Spinnaker or a Google Cloud Builder pipeline; it's not nearly the whole story.
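A pipeline along these lines might be sketched as the following Google Cloud Builder config. The builder image, deployment name, and follow-up step are assumptions for illustration, not details stated in the meeting:

```yaml
# Hypothetical cloudbuild.yaml: run Skaffold as one step of a larger
# CI/CD pipeline, then follow it with a verification step.
steps:
- name: gcr.io/k8s-skaffold/skaffold     # assumed Skaffold builder image
  args: ['skaffold', 'run']              # build, push, and deploy the repo
- name: gcr.io/cloud-builders/kubectl    # example "check the deployment" step
  args: ['rollout', 'status', 'deployment/my-app']  # hypothetical deployment name
```

The point is that Skaffold occupies one step; the trigger, verification, and promotion steps around it still belong to the surrounding CI system.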
H: So my thing is: this is a post-deploy testing method, rather than testing before you actually deploy to the cluster. In this use case you deploy to the cluster and then you can only do testing in staging, or wherever, and then you do that. But is there a kind of in-build testing, before you actually deploy, which is also automated? That's actually my question, or my point.
H: What I would expect, for example, is that before I actually deploy, there's an intermediate step that says: okay, the application is running correctly. If you look at container-structure-test, which is also from Google Cloud Platform, there's some interesting testing happening in there. So what I would expect to be raised was something interim, in between, after building and before deploying, that runs these tests to make sure the image actually contains what I want it to contain, including commands, environment variables, resources.
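The kind of image check being described is what container-structure-test configures with a YAML file, roughly like the sketch below. The binary path, file path, and environment variable are made-up examples:

```yaml
# Hypothetical container-structure-test config: verify the built image
# contains the expected command, files, and environment variables.
schemaVersion: 2.0.0
commandTests:
- name: app binary runs
  command: /usr/local/bin/my-app     # hypothetical binary path
  args: ["--version"]
fileExistenceTests:
- name: config file present
  path: /etc/my-app/config.yaml      # hypothetical path
  shouldExist: true
metadataTest:
  env:
  - key: APP_ENV                     # hypothetical variable
    value: production
```

A test like this runs against the image after build, before any deploy, which is exactly the "interim step" being asked about.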
C: We're actually near the team that publishes that repo, so integration there would be pretty easy. But I do see what you mean: structural tests, and kind of functional tests, for your Docker images. Does that belong in something like Skaffold? Possibly. If we were to add a kind of intermediate step in between build and deploy, it would probably have to be pluggable, just like build and deploy; we wouldn't really want to force you, and you could plug your own in there.
C: Yeah, we've been a little hesitant about kind of the escape hatch of, you know, just a task runner ("I need these tasks"), because at that point it kind of opens up the floodgates to a lot of things you can do, which is good and bad, sometimes. But yeah: having somewhat of an opinion about what the API looks like, while not saying exactly what the implementation is.
D: You'll see this is the public one; there's no Skaffold manifest in this one, and I think that might be because the credentials are different from the private repo I made for the demo. Okay, looks like we are finally up, so now Skaffold is telling us that it's watching for changes. So if I go and run `minikube service list`, which is the only way I know how to do it, that tells me where this is running on my local machine, I think.
D: We're good. We would love to hear how people are using this, and we would love to see issues and feedback on it; that's something we're excited about. We only released it a month ago, and it's been picked up quite a few times already, so we do think we're filling a gap for developers, helping them learn how to write in Kubernetes land, especially for local development, making that a lot easier. So yeah, I'd love to hear from people.
A: So I do have a question. The project you started off with had Kubernetes configuration files in it. If I were a developer showing up with my own codebase, and I said, alright, now I want to get it ready for Kubernetes: what do I need to provide as a starting point, kind of that day-zero configuration, to get going using Skaffold when I'm doing something with Minikube? How would I do that?
C: Deployments and stuff like that, we don't actually do anything there, and it's not really on our roadmap to generate any of these deployments or, you know, Dockerfiles, because, at least for me, it's a very difficult problem, and I don't know if I have a great answer for how you do that.
A: So here's an idea: if you don't solve it with automation, documentation is a great way to start. Saying, you know, if you're approaching this, here's where you can get started creating those Kubernetes files. Then, when somebody does approach it, they're not asking "where do I go and what do I do?" If Skaffold is their entry point, they could say: oh, here's where I go to learn what I need to do to create those files and how to lay them out. That's just a thought.
D: The project I got this running with was actually a Django project. I haven't developed in a while, but when I did, I was writing Django apps, prior to Google, and that was my hello-world example for myself. It was pretty easy just to go from, you know, Django, to getting Skaffold up, to pushing to a cluster.
H: Sorry for asking so much. So the question is: you've said that there's no server-side component, and, if I'm getting it correct, you use kubectl to actually run, or deploy, your app. But you also have support for Helm, which basically runs with Tiller, and Tiller is a server-side component, and I'm not sure how there's no server-side component in that equation.
C: We're saying that there's no Skaffold server-side component: we're not storing any state on the cluster, and the Skaffold binary itself is not stateful in terms of configs or anything like that. So if you're using a deployment tool that does have an on-cluster component, then sure, you need that on-cluster component. But we're planning to support a lot of tools that don't, such as kubectl, and there are some other deployment tools that don't have that kind of on-cluster state.
A: And I think we're starting to see a shift of tools away from having that in-cluster state, moving more towards CRDs or nothing at all. You'll see that with Helm 3 development and stuff like that. Some of the folks I know who were trying it out ran into the complexity, so I can understand the desire not to have that; I think it's a good thing.
A: All right then; if you do have more questions, please continue to ask them in the chat here. Thank you for the Skaffold demo. I love seeing this inner-loop problem being handled; this is fantastic. Thank you. So our discussion today actually shifts towards some of the things that we were already talking about in the Skaffold discussion, and one of those happens to be the first item on the agenda: image building. I know folks have talked about other ways of building images.
A: Many of the cloud providers are now offering a service that says: give us your stuff and we'll build the image for you; an API-driven, service-based approach. So we wanted to have a little bit of a discussion on image building. It's pretty free-flowing; we've got a number of people here who are interested in the topic. There's local image building versus server-side image building, and there are different tools and services.
L: Sure, I've got something I can kick the discussion off with. I'm purely curious whether other people... so there's one context that I've heard as a very common use case with image building, and that is injecting environment secrets into the image build. For example, if you're building a container image that happens to need secrets at build time, in order to fetch dependencies from, say, a private service or anything like that, I'm curious...
L: ...are other teams doing that with some kind of bash script around `docker build`, using a `docker run` to inject the secrets and a `docker commit`, kind of the old workflow? Or are they using an alternative workflow tool? I'm just curious about the differences between the two.
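For reference, the old run/inject/commit pattern being alluded to looks roughly like the sketch below; it requires a Docker daemon, and the image names, secret file, and build script are hypothetical:

```sh
# Hypothetical sketch of the run/inject/commit workflow: start a
# container from a builder image, inject a build-time secret, run the
# build inside it, scrub the secret, then snapshot the result.
docker run --name build -d example/builder sleep infinity
docker cp ./netrc build:/root/.netrc              # inject build-time secret
docker exec build /workspace/fetch-and-build.sh   # hypothetical build script
docker exec build rm /root/.netrc                 # scrub the secret again
docker commit build example/app:latest            # snapshot as the app image
docker rm -f build
```

The appeal is that the secret never lands in a Dockerfile layer; the downside is exactly the bespoke scripting being questioned here.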
B: S2I is using the two scripts, assemble and run. Assemble is responsible for doing whatever you care to do with your build, while S2I is actually running the Docker image, which usually contains the entire tooling needed to build your specific application, and the run script is then responsible for running the app.
L: And, if I recall, the OpenShift build experience is more of a tooling approach, not using a Dockerfile, but that's around their own custom workflow. Like buildah: I know that they do a `buildah mount`, and then you actually mount in a filesystem and do the injection into the root filesystem that way.
B: I can't speak about buildah, because I haven't played with buildah myself, but what we have in OpenShift is: either you do the regular Docker build, where you provide the repo or the Dockerfile itself, or you do S2I. And S2I comes in multiple flavors, the simplest being that you just point OpenShift at the sources, and the build process will pick up your application and build it, because by default it's provided with the two build scripts, which of course you can override if you want.
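As a rough illustration, that simplest flavor (pointing OpenShift at the sources) is configured with a BuildConfig along these lines; the repo URL, builder image, and names are made up:

```yaml
# Hypothetical OpenShift BuildConfig using the Source (S2I) strategy:
# point at a git repo, pick a builder image, and name the output image.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:8                            # builder with the default assemble/run scripts
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

Overriding the default assemble/run scripts is then a matter of shipping your own `.s2i/bin` scripts in the repo.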
M: Another answer, to follow up on source-to-image and Dockerfiles: there is a Dockerfile that's used to produce the base layers of this source-to-image workflow, but usually we split up the responsibility. Maintaining that Dockerfile, and the associated patching around it, is usually architects or security teams, rather than your Node.js developers. So there's a separation of concerns around who gets access to the Dockerfile and who's responsible for application security.
C: Yeah. So we're just starting off these discussions of developer tooling in SIG Apps, and we're starting off a little slow, trying to figure out what topics would be good for the people here at SIG Apps to talk about, and what kinds of things people are interested in.
C: But one thing that topic brings up a little bit is that there seems to be this gap between, you know, the user stories of the operator and the developer: who's responsible for the builds, who's responsible for the deployments, what configuration is available to the developer. For something like Helm, is the values file the API that the developer sees, while the operators are actually maintaining the charts?
A: Yeah, and some of this is really going to depend on organizations, too, right? If you take operators and developers as different roles: in some small, close teams, especially in a startup, it may be the same people doing both roles, and in other organizations, you know, there are companies who will have a thousand people working on a single app; it can be huge. In that organization you might only have a handful of people doing the ops...
A: ...while many of the other people are working on, say, the feature development, and so it's really split there. So I think it depends on the company. Those are totally important roles, and this kind of mishmash, the way so many of us do things differently, is something I think it would be useful to take into account.
N: ...kind of the adoption level of Kubernetes, because I think the discussion we just had is talking about a user who wants to run their build in the cluster, and I think that's like one percent of people, really. Most people run their builds in a CI system and make it work so that it produces a Docker image, yes, instead of just an artifact like it used to before, and that build, you know, maybe is containerized, maybe isn't containerized.
N: Maybe they build a binary separately and then add it within a Dockerfile, on top of whatever base image they use; you know, that's sort of the picture for most people, really. What we just discussed was more like a mature organization. I mean, each one ends up with their own picture, in terms of where they are in their Kubernetes adoption.
C: Yeah, I guess I agree that there are a lot of different personas here, and each persona, and each different size of organization, is going to need, you know, very specific tooling tailored to their size. But I guess there are just questions to think about, like: should we be building our...
C: Should we be building our tooling to be multi-tenant? Should we be building our on-cluster components with, you know, RBAC and security in mind? Just because people don't necessarily want to do on-cluster builds today, what's stopping us from making that kind of the blessed path, even for developer workflows? I don't think that those are necessarily wrong things to look at, right?
I: So if you build the base image separately and you don't need to install packages, you can run all of this as a non-privileged user, like non-root in a container, and you don't need the Docker socket, because, yes, you start a container from the base image, run your scripts, and then commit it and stop. So this is already working in-cluster, and it's secure, but it has limitations, for sure; some builds it just can't do.
M: I know Jess Frazelle's img is also non-privileged, so that's another interesting tool to look into for non-privileged builds that could potentially be on-cluster. I think, with our OpenShift base image layers that are used with S2I, a lot of that build of the Dockerfile is done before we get it into the cluster, to isolate that risk and have it done by admins, or whoever's responsible for security.
C: I have a question: when would you need to start thinking about this? Let's say I have, you know, a thing that can build Docker images for me today. Should everybody be going out and figuring out how to build them in-cluster, or is there some condition that causes you to raise the red flag and start looking into it?
N: I don't think there's any reason for you to worry about that; I think you're already in a good place if you have that. This is something that we're discussing from the perspective of the projects that the SIG produces and is planning, anything related to this SIG's scope. So from the user's perspective, I don't think there's a big worry about that, if you're happy with what you already have set up.
A: I think that figuring out ways to build tools, or to build images, is important, because there are a lot of people who are building CI processes and CI tooling, right? I mean, how many companies create their own CI processes and tooling, starting with Jenkins as a foundation, or Concourse, or Drone, or Travis, or something else, and then they go and layer in their own processes? You know, DevOps people are devs, and so a lot of us...
L: Good point. At the same time, as a contrarian point to that, there are also developer tools out there that people are building; for example, obviously the Skaffold team, ourselves on the Draft team, and other teams out there as well. We're building CI/CD systems and all that. So, to answer the original question of: I currently use `docker build` and `docker push`, or whatever else, I have my own process in place to build and push images; should I be concerned, or should I be invested in this kind of discussion?
L: I think if you already have CI/CD tooling in place, you don't really necessarily need to move; there's no major benefit to moving over to in-cluster systems. However, it is a really good discussion to have if you're building a tool around CI/CD, or a tool around development processes that has that inner loop of building images, and I think that's where the discussion is partially going right now, and I think it is useful. We did do that in-cluster building step before, with Draft, and I found...
L: ...there were quite a few limitations, actually. A couple of things were a little bit of a red flag for me about moving forward with in-cluster Docker image building for Draft. One was that there's kind of a discussion about whether we want to support alternative container runtimes in Kubernetes; for example, there's CRI-O being built out.
L: It might not be such a great idea to build an end-to-end CI/CD tool, or something that does Docker image builds, using Docker, as a hedge against the case where the container runtime inside Kubernetes ever changes. I don't imagine it will; I have no idea if it will; but it was one concern for me when we were originally building the project out.
A: I had two ideas on this that just came up. One is that where you build your stuff matters a little bit if you're doing local dev. Say I'm on a plane, disconnected, and I want to do this loop thing; it's kind of the example we use, especially for those of us who have to travel from time to time. When I'm totally closed off, using a software-as-a-service doesn't work. Now, say I'm using Minikube, and I'm using Draft or Skaffold: where does my image build happen?
A: Should it happen in Minikube, which is a single-node cluster, or should it happen on my local machine, and then how does this end up working out? And to add to that, the thing I'm starting to wonder here is, if you're dealing with services and indirection: is this the kind of thing that should be exposed as a service via Service Catalog? Then you could have many different implementations behind it, but your end user doesn't have to care, and you can have something in Minikube.
A: If you move to a cluster in, you know, Google, it works there. If I'm running in Microsoft, it can use whatever build service ends up there. If I'm doing it on premises, I could have my own service. Something like Service Catalog gives you that indirection, so you, the end consumer, don't have to worry about knowing everybody's API. Just a couple of thoughts.
I: So my idea here is: do we even need an API? Because, with the builder, you could actually run a container, run buildah or something like that inside it, build the image, and push it somewhere. Maybe we don't even need to assume that it's Docker or anything. Maybe, as we move forward, if we build something that is daemonless and can run in a container (user namespaces are coming), maybe it should be done just there, and no API.
C: One thing that I think we should be targeting, with all of our developer tools, is the interfaces that we've created, not the hard-coded implementations of them. So we shouldn't be building tooling around Docker; we should be trying to target the CRI when possible, the CSI when possible, and CNI, all that sort of stuff, because I think, going forward...
C: ...people want to get some feedback on their tools, and this is something that we did early on with Minikube, by putting in CRI-O and rkt and other runtimes besides Docker, so that users had an easier way of trying these things out. I think that's something we should be thinking about across the spectrum, now that we have a lot more of these interfaces. As a developer, why do I care?
C: Go ahead. I almost see this as taking that away from the developer, or the user: today the user needs to do their Docker builds kind of out-of-band, and then provide the image, and do the authentication and the push all by themselves. If we could start to obscure some of those details, I think that might be a better way forward. I mean, the nice thing is that you don't have Docker twice.
C: You don't have Docker in your cluster and then Docker in, you know, a SaaS or another CI system. If we can start to take away those dependencies, and have the cluster be the only dependency for these tools, I think we could make the developer workflow a lot easier.
A: I was gonna say something along the same lines: if I don't have to install and maintain Docker in my local work environment (especially if I'm in an enterprise, and a lot of them still lock down a lot of machines, so I've got to deal with that), then I just have the one toolchain that needs to be updated, rather than duplication. That could make things simpler.
A: I mean, it could be the difference between: I install Minikube, versus Minikube plus Docker, and now my Docker and the version of Docker in Minikube are slightly different versions, and I've got to maintain updates to all of these. It's just one less piece in the complexity puzzle that I have to install and maintain on my system.
C: ...your code, and then a framework-y thing that handles updates. So I'm wondering how much people care what's inside one of those frameworks, or whether that's too restrictive: to have something that orchestrates your dev workflow and handles it for you, if it's the wrong abstraction there.
F: I think, independent of the local developer flow, where someone is iterating on their application and rebuilding it, there would also generally be some kind of CI system somewhere that is doing builds, and those builds are a workload like any other workload, which is what I was trying to post in the chat. We are increasingly seeing people run those CI workloads, or try to run them, on Kubernetes...
F: ...for the same reasons they want to run any workload on Kubernetes: because it's easier than setting up VMs or doing something else. And doing builds in those systems right now is kind of painful, because either they need to poke a hole and access the local Docker socket, or they may want to create a pool of dedicated nodes for those builds if they are running them on Kubernetes.
F: So I think there is a need for some kind of tool for building images that doesn't require any special privileges to actually do the build steps, for the common kinds of builds that people are doing. That's something that is very relevant to SIG Apps, in that it's just a workload for Kubernetes, and it's kind of orthogonal to the developer workflow. But it's related, in that the same tools that would be used for on-cluster CI might also be useful in the developer workflow.
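One shape such an unprivileged, on-cluster build can take is a plain pod running a daemonless builder; kaniko is one example of that category. As a hypothetical sketch (the registry, image names, and paths are all made up):

```yaml
# Hypothetical pod running a daemonless image build as an ordinary
# Kubernetes workload: no Docker socket, no privileged securityContext.
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-build
spec:
  restartPolicy: Never
  containers:
  - name: build
    image: gcr.io/kaniko-project/executor:latest   # assumed builder image
    args:
    - --dockerfile=/workspace/Dockerfile
    - --context=dir:///workspace                   # build context mounted below
    - --destination=registry.example.com/team/app:latest  # hypothetical registry
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace
    emptyDir: {}        # in practice, populated from git or an init container
```

Because the build is just a pod, the CI system can schedule it like any other workload, which is the point being made above.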
A: Yeah, that's a really good point: the resources for doing it. There are a lot of people who will have a dev environment out in the cloud, so they're not using Minikube; they might be using Skaffold or Draft or something like that for their closed loop, and this actually pushes the stuff out to the cloud. But their pipe is big enough that it's pretty quick, so they're even able to do it there.
C: And as users have beefier clusters that they're paying for, you know, they'll want to reuse those same clusters for development, and there will also be a lot of things that you just can't do locally, like some of the machine-learning workflows. So in Skaffold we chose the kubectl context as the endpoint that we're targeting, so that it isn't tied to just something like Minikube, or GKE, or anything like that.
C: That actually brings up a point related to the comment that Brian made about there being more resources available in the cluster: we've talked to some people, and they have trouble running their whole stack on the laptop, no matter how big it is. Maybe that's just shared services that they need to reach out to. So something that I think we should also look at, as we go deeper into the dev tooling, is how you split things up...
C: ...you know, what each individual developer's responsibilities are, or what things they run, and what tooling you have available in order to slice and dice those things. How do you have a manifest, or whatever, that says which things you should deploy here and where everything else lives? That can get pretty complex and bespoke pretty quickly.
A: All right, I've got to jump in here with a time check. We've got about eight minutes left, and we actually had one more topic to do. This has been a great conversation, and we'll pick up with a whole lot more of the developer stuff again. Next week we're gonna be talking about the Application CRD; we'll have Kubebuilder in for a demo, and some stuff like that.
A: If you go to the agenda, there's a place where you'll see Telepresence is there first. If anybody else wants to throw their project in here, we do like to give everybody an opportunity to share updates on what's going on with your projects, so if you want to add something in there real quick, you can. And so, with that, I really wanted to give Telepresence a moment to give us an update on what's going on.
J: That sort of thing can sometimes be challenging, and so we are experimenting with an actual persistent, long-lived proxy that runs on the cluster. That way you can have auto-reconnects, and you can have a much faster client start time, because right now, when you start up Telepresence, it does a deployment of the proxy each time you start the Telepresence process, which obviously takes a bunch of time, and then it sets up all the stuff locally.
J: If you just have the proxy always running, then you can just reconnect to the proxy whenever you get disconnected. So we think this will actually help solve problems where, if you get disconnected, Telepresence doesn't clean up after itself. It will also improve performance, and it will allow us to support reconnect, which is a popular request for folks who are on Caltrain and have dead zones when they're using Telepresence on the train. So that's the general update.
L: Most of that has been done as a bash script, so it obviously doesn't work on Windows without Windows Subsystem for Linux installed and being run inside that system. So what we want to do is add PowerShell support in there, so people can go and start using Draft, and all these other development tools, on more than just Linux and macOS, which has been kind of the status quo for a large number of the tools being built here.
O: To add something on Draft as well: I've been working on task support. My background is in Rails, and to work with a Rails project in Kubernetes, I'm going to need to do some things before actually deploying my project, like setting up some dependencies, and then, once I have my application containers running, I may want to run things like rake commands to set up my database schema and seed data and such. Task support is for doing those kinds of tasks.
L: All right; sorry, Adnan. On the Helm upgrade support, I'm not quite sure about Kubeapps: is the repository updating and all that stuff done out-of-band, outside of Kubeapps, or, when the user goes to the browser, are they able to update the repositories and what the index is pointing to for Kubeapps, so they can do upgrades and whatnot?
P: So Kubeapps has a bunch of controls. Whenever you add a new repository, it sets up a cron job to go and pull from that repository every hour by default, but you can also manually refresh it in the UI. Then, based on whatever versions it has picked up, you can go and upgrade, once you've installed a chart.
L: That's pretty slick.
A: Okay, then I have one more thing. Somebody poked me in a side channel about KubeCon EU, which is coming up in a couple of weeks, and one of the conversations they were looking to have is a dev-tools one. I didn't know whether this should be a BoF, or, Ken, if you're there (you've been organizing the SIG Apps sessions there), whether we have time for that kind of topic in there.
A: Wait, can you hear me? I can hear you now, Ken. Okay. So, do you want to organize a discussion around dev tools? Yeah, somebody asked about that in a side channel, about dev tools, and I didn't know whether that would be a BoF that folks should get together on, or whether there's time in the SIG Apps sessions we already have.
G: I was thinking about doing that inside of the SIG Apps session, in the deep dive, if we want to have a discussion there. I mean, we have an hour and, as far as I can tell, an hour and a half for the deep dive, so I was planning on doing something similar to what we did last year: just a brief year-in-review overview, and then get a feel for what people would like to talk about and what they'd like to discuss. I mean, the deep dive is the community's meeting.
A: The next one is April 23rd, which is where we're looking at Kubebuilder, the Application CRD, things like that, and then the following meeting won't be until May 14th, because the week of April 30th is KubeCon, and the Monday right after KubeCon folks are usually not ready to have conversations about stuff, so we push it off and give that extra Monday after KubeCon as a day off. So the next one will be May 14th, where we dig into more of those developer topics again.