From YouTube: Kubernetes SIG Apps 20180618
A
Welcome to the June 18, 2018 Kubernetes SIG Apps meeting. My name is Matt Farina and I'll be chairing this meeting today. The meeting minutes, the notes, the agenda, and everything are in this Google Doc. I just pasted it into chat, so folks, go ahead and head on over, and you can see what we're going to talk about today. I just have one announcement: if folks have something they want to demo, we do have the regular weekly demos.
If people have things they want to demo, please reach out and let us know. It can be something to do with controllers and app management. It can be something to do with tools that build on top of that to help you operate things; for example, we recently had one in that space. It can also be developer tools, like the one we're going to have today.
B
Okay. For the folks that used to develop monolithic applications, like I did back in the day: essentially, we had an IDE where we had everything. We had the code, we had the logs, we had the metrics, and when there was a problem with the application, what we used to do was simply start debugging, and the IDE was the one place of truth. It was the place where we had everything. Now, transitioning to microservice architectures, we have multiple services, sometimes tens or hundreds of services.
B
So, essentially, what we did was start building tools to emulate that sort of debugging for your applications. The first thing we started doing in order to debug an application was writing print statements in our application. If there is anyone that does that, or does not do that, please raise your hand in the Zoom chat. But essentially, what we do is put a print in our application.
B
We build the application, we deploy it, and then, with the information that we got from the print statement, we continue developing and debugging the application.
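As a concrete illustration of that print-statement loop (the function and message here are made up for illustration, not taken from the demo), the workflow in Go looks something like this:

```go
package main

import (
	"fmt"
	"os"
)

// buildReply stands in for whatever function is misbehaving.
func buildReply(host string) string {
	reply := "Hello from " + host
	// Print-statement debugging: dump the intermediate value to stderr,
	// rebuild the image, redeploy, then read it back out of the pod logs.
	fmt.Fprintf(os.Stderr, "debug: reply=%q\n", reply)
	return reply
}

func main() {
	fmt.Println(buildReply("my-pod"))
}
```

Each new print means another build, push, and deploy cycle; that round trip is the friction being described.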
But what is missing, compared to monolithic applications, is the ability to do breakpoint debugging: to put breakpoints in your application and test to see what happens. So this is exactly what we're trying to achieve with the VS Code extension and draft. Essentially, this is a really simple Golang application.
B
It's a simple web server that reads the hostname and then responds with a simple message containing the hostname. Essentially, it's just a simple application that we'll try to deploy to Kubernetes first, and then do breakpoint debugging on.
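A minimal sketch of such a server in Go (the exact message, routing, and structure of the demo app may differ; this is an approximation under those assumptions):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// greeting builds the response body from the pod's hostname.
func greeting(host string) string {
	return "Hello from " + host + "!"
}

// handler answers every request with a message containing the hostname,
// which inside Kubernetes is the pod name.
func handler(w http.ResponseWriter, r *http.Request) {
	host, err := os.Hostname()
	if err != nil {
		host = "unknown"
	}
	fmt.Fprintln(w, greeting(host))
}

func main() {
	// Serve only when asked to, so the snippet can also be run offline.
	if os.Getenv("SERVE") != "" {
		http.HandleFunc("/", handler)
		http.ListenAndServe(":8080", nil) // 8080 matches the port in the demo
		return
	}
	host, _ := os.Hostname()
	fmt.Println(greeting(host))
}
```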
So, because it uses draft, it has a Helm chart. If you're not familiar with draft, I'll paste the link to draft in chat later, but essentially a draft application is simply a chart, and the only thing special about it
B
is that it has a couple of labels and annotations, so that draft knows it is a draft application. It contains a Dockerfile, so we know how to build the application, and then it has a chart, so we can deploy it. And essentially, what we're trying to achieve is: simply press F5, and we start the process of building the application, pushing it to a container registry, and then upgrading the Helm release, or installing it if it's not already
B
in the cluster. At this point, what happens is we start the application process, and draft is going to forward the application's exposed ports locally. So we have two ports: there's port 8080, where the application is listening, and then there's port 2345, which is the default Delve port.
B
So if you go to localhost:8080, what should happen is we're hitting a breakpoint in our application that lives inside a Kubernetes cluster, and we're able to breakpoint-debug it as if it were a local application. So at this point we're able to see the application request; we can interact with it in the debug console. For example, we can...
B
That should make it easier; I'll hide my face now. So essentially, right now you should be able to interact with your application as if it were local. You can interact with your request, you can see the values, you can change the values in your application, and you can continue, and this is the result that the application should output. Now, usually when we work with microservices, there are multiple services in an application; usually a front-end service is calling a back-end service, which is calling other services. So in this example, we have another microservice.
B
It's a Node.js service which will essentially create some requests to the Go application that we just saw. So we're doing the same thing for the Node.js application: it's building it, it's pushing it, and then it's doing a helm install, and at this point the debugger is attached to that application as well. And if we go to the other application's port, we should be able to debug the Node.js service, and this, in turn, should create a request to the Go microservice, and we're able to debug that as well.
B
So, regardless of how many services you have in an application, you should be able to chain the requests and essentially breakpoint-debug through all of your applications. Now, the experience right now, as you can see, is not ideal, in that you need two VS Code instances; we're working towards making that a single one, so that you don't have to have multiple instances of VS Code. But essentially, in broad terms, this is the experience that we're working towards: being able to debug applications across the multiple services in your app.
B
That's used by a lot of folks, which is Helm. So I took the Dockerfile for it, and instead of just starting tiller, I also did a delve attach. This is what you have to do to debug a Golang application: we just start the debugger as well. And essentially, if we do the same thing, which is F5, we should start building the app's container image, push it to Docker Hub, and then upgrade it and release it in the cluster.
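A rough sketch of what that Dockerfile change might look like (the base image, paths, and binary name are placeholders; the speaker mentions a delve attach, while this sketch shows the related approach of letting Delve launch the process headless):

```dockerfile
FROM golang:1.10
# Install the Delve debugger alongside the binary (import path as of 2018).
RUN go get github.com/derekparker/delve/cmd/dlv
COPY tiller /bin/tiller
# Instead of starting the process directly, let Delve run it headless and
# listen on 2345, the default Delve port, for a remote debugging client.
CMD ["dlv", "exec", "/bin/tiller", "--headless", "--listen=:2345", "--api-version=2"]
```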
B
So right now, what we're doing is using draft to deploy and debug Helm, and behind the scenes draft itself uses Helm. So essentially, what we're going to end up with are two instances of tiller: one that is used by draft, which is the production one, and one that we're debugging and iterating against.
B
So when that's ready, we will see the debugger attached, and we'll be able to create requests to that instance. If you're familiar with tiller, it uses a couple of ports: it uses 44134 to communicate over gRPC with the clients, and it uses 44135 for readiness probes and health checks and stuff like that. And we're also forwarding 2345, which again is the default Delve port for debugging.
B
The container is creating, but essentially, let's take a look at what we actually need in order to debug, in this case, a Golang application. By default, the Delve debugger has to ptrace the running process in order to connect to it, and by default in Docker, which is the default container runtime for Kubernetes, that is disabled in the security context.
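In pod-spec terms, granting that capability back looks roughly like this (a hypothetical snippet; the pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-demo
spec:
  containers:
    - name: app
      image: example/app:dev        # placeholder image
      securityContext:
        capabilities:
          add: ["SYS_PTRACE"]       # lets Delve ptrace the app process
```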
B
That's a great question. Essentially, what this is doing specifically is: it does not modify the networking of either your cluster or your local machine. Essentially, every time you do an F5, it will build your container image, push it to a container registry, and then the draft client will forward all application ports exposed in your Helm chart, in your Kubernetes manifests, to your localhost.
B
So we can specify this local port mapping, and essentially, when you access localhost on either of these ports, every request you create there will be forwarded to your cluster. So this is how, behind the scenes, draft and the VS Code extension are using Kubernetes port forwarding to forward requests from your local machine to the cluster.
B
Yeah, that's the main difference. With Telepresence, you run the application process locally, whereas with draft you're running the application inside your cluster, in a pod, which is essentially the same environment that you're going to run it in in production. So this is running, and at this point you can use Helm as you normally would: you simply pass the host, which is localhost:44134, and it will forward your request inside the cluster.
B
So at this point, we should be able to debug the cluster-side, the server-side component of Helm, as if it were running locally. So at this point you can do the same things that we did: it's just a Golang application, and you can debug it in the same way as the web application from earlier. And that's pretty much it. One more thing that you can do with VS Code is also debug client-side things.
B
So this is the same project, but this time we're debugging the CLI; we're debugging the local component of Helm. Essentially, we're doing a helm list, and we had a breakpoint on the CLI, and then, when we continue, the helm list client side will create a gRPC request to the server-side component, and this is caught in the breakpoint here. And essentially, we can debug any sort of client/server application with this technique.
B
There's one more question: is there any way to debug an application that's already running? It essentially depends on the type of application. Specifically for Node.js, yes, you can do that right now. For Delve, the VS Code implementation of Delve does not allow you to do that; once you disconnect the debugger, it will stop the process. But that's more of an implementation detail of the Delve adapter for VS Code. Essentially, the end goal is to be able to do that.
B
In theory, everything that has a remote debugger protocol should be able to be debugged using this technique, because essentially draft and VS Code only do the plumbing behind the scenes for your debugger to be able to attach. In the end, if you've ever used VS Code to debug applications, the launch file, the configuration file, is simply a Golang debug configuration, or, in the case of Node.js, a Node.js configuration file. It's nothing more than what you'd have just before you attach a debugger.
B
As I was saying earlier, it depends on whether the debug adapter supports that. So for Node.js, yes, you can do that. You can do that for .NET Core as well. The implementation for Golang does not currently support that, and you have to restart your pod in order to attach a debugger, because essentially, when you start the container, you start the process, and
B
the debugger process has to start as well. And this is today. What we're looking at for future Kubernetes versions: starting with Kubernetes 1.10, you're able to have shared namespaces between containers of the same pod. So essentially, what we will be doing once every major provider supports Kubernetes 1.10 and the shared process namespace...
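As a sketch of that Kubernetes 1.10 feature (hypothetical pod and image names; `shareProcessNamespace` was alpha at the time, behind a feature gate):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true       # alpha in Kubernetes 1.10
  containers:
    - name: app
      image: example/app:dev        # placeholder
    - name: debugger
      image: example/dlv:dev        # placeholder sidecar carrying dlv
      securityContext:
        capabilities:
          add: ["SYS_PTRACE"]       # so the sidecar can attach to the app
```

With a shared process namespace, a debugger sidecar can see the app container's process and attach to it without restarting the pod.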
B
If you want to try this stuff out, you can take a look at the Kubernetes extension for VS Code, and you can look at draft. Essentially, everything that I showed you works with the latest version of both of the things that I linked to in chat. We'd love for you to give it a try and give us feedback on what you'd like to see, and what things we're not doing, or not doing right. Everything that you saw works with any Kubernetes cluster; it can even be a Minikube cluster.
A
We just saw one debugging setup, and another one was touched on, which is Telepresence, where you run your application locally. So, like, I'm on a Mac, and I would run it on the Mac, and then it would connect into the cluster and be able to talk to those microservices. That is another way, because you can debug it locally.
B
There aren't that many solutions. So both Google and Microsoft have a solution that debugs applications on top of GKE and AKS, but those things only work inside of Google Cloud and Azure. There's Stackdriver Debugger for Kubernetes, and there's Dev Spaces for Azure Kubernetes Service. I did not look that much into those solutions, mainly because you're not able to run them elsewhere. For open source solutions that work anywhere, there's also Squash, from solo.io, and I really encourage folks to take a look at Squash.
B
The overall experience that you saw earlier is inspired by the Squash experience. So essentially, you're able to debug across microservices using VS Code, and I think there are also plugins for other IDEs, for other code editors. The main downside that I see, and the main reason that folks try to steer away from Squash, is the fact that you need a privileged DaemonSet running on your cluster.
B
So you have a pod that is privileged and has access to all the processes running on your nodes, simply because the Squash server-side component has to be able to attach to your processes. So while the overall experience is brilliant, and I cannot encourage folks enough to check out Squash, and to check out the KubeCon talk from Austin (I think I'll also link to that in chat),
B
the main downside is the privileged DaemonSet. Essentially, the first tool that will be able to give you the debugging experience across microservices, not require folks to change their deployment files and manifests that much, and stay close to production, or as close to production as possible, is going to be the one that people will mostly use, I assume. It's a personal opinion, but this is what we're trying to achieve with VS Code and draft.
A
And what OpenTracing does is it allows you to pass metadata down between different microservices, so you can capture a request. Yeah, so there's OpenTracing, and then the CNCF project, of course, is Jaeger, and the idea is, you know: say a request comes in to your web head, right, and it comes in from your browser into your Node.js front end, and then it hits maybe an API server, and then maybe it talks to two or three back-end services, right?
A
How do you trace that request from end to end and know the interactions along the way? That's distributed tracing. And so there's Jaeger, and I'm dropping some of the links in here. So there's OpenTracing, which is the spec and the tooling around it, and there's Jaeger, which is an implementation of OpenTracing, and I'm wondering if dependency analysis, or anything like this, can be used as part of the debugging process.
A
There's also OpenCensus. Yeah, it's basically the ability to trace and log and understand a request, and it's not just for debugging, of course; you can do all of this in production systems and development environments. So that way, when something goes awry, you can kind of trace where it goes through everything. That's where I had looked at it, less for debugging locally, but I'm just wondering how it plays out there. Okay.
G
Clearly, some bias coming out here a little bit, because we actually work on the project, but at the same time, I find this immensely useful. When both Raju and I were working on this, we actually used the debugger for debugging both client-side stuff for draft as well as Helm and tiller, just playing around with it and looking through. I think it's immensely useful for debugging and looking at current state, and things like that.
A
All right, sounds good. So, the next topic we had... and does anybody else want to talk more about debugging? Did anyone else have anything more to throw in? I've been throwing some of the notes into the document along the way, the agenda and minutes, and I know I missed some stuff, so if folks wanted to go ahead and throw some of that in, please feel free to. The next topic we have up here is actually developer environments, and the first question was: what do folks actually use for developer environments?
A
Yeah, I'll share mine to open it up. I use two things. I'll either run a local Minikube, which is kind of my common go-to when I'm doing a lot of quick development, because then I don't have to deal with the network or the internet or things like that; and then I'm one of those folks who's lucky to have access to Google Cloud or Azure, where I have been
A
using one of the managed services. I'll go spin up a cluster there when I need something larger or I need to share something, and so I'll have a cluster running out there in one of the public clouds, and then I can point people to it, or make it routable with some DNS that I have. Or, when it's a larger application that I need to do testing on, I'll run something there. And the same thing kind of applies to CI environments: anything that I've got CI on and I want to test operationally.
D
I have a question, if that's okay. Yeah? So how do you manage kind of the local development flow if the app is too big to run on your laptop, for example, or you have some dependency that you can't quite run; you know, you're using some database that's provider-specific or something like that. How are people, I guess, charting their applications and choosing how and when to point things either locally or remotely?
D
Even in that case: what did you do in the OpenStack case? But I think it's a general question, and I think there are many ways to, you know, fix those problems. But in general, you have some piece here and some piece over there. How do you manage that? And is that other piece shared, or, you know, how did you deal with that in the OpenStack case?
A
That may also be good if I'm dealing with things like large databases and datasets, and I don't want everybody to have to copy those down, or even just have those locally on their systems; and that might be kind of a more secure way of holding all that information and just exposing some of it. So if it's Postgres or MySQL or something like that, just have that accessible, but limited through something, and then make that a configurable element of my application. That's probably what I would do, because for me, developing an application,
A
I'd want to work on my essential business logic, which means the things that aren't that, I'd want to push elsewhere, and then make the instances of those that I use a configurable element of my application. And so that makes it easy to push those other things elsewhere, while I focus locally on the thing that I care about.
D
So I think that's a common approach, or at least the one that I've heard of the most: having some sort of shared service or something like that. But then the problem that arises is: how do you know what the state of that thing is, if you really are sharing state? And how do you make sure, you know, somebody doesn't screw up a schema or something like that, or doesn't have the ability to do that, but still has the ability to iterate on that piece of the puzzle?
B
One more approach that can be taken, when discussing outside services like databases or messaging systems and stuff like that, is to use the Open Service Broker. So essentially, within the VS Code Kubernetes extension, we're able to bind to a certain service from the extension itself: you essentially right-click your chart and say "bind to a specific service", and it'll automatically fill in the secrets and everything required for the application to run. And what we actually see folks doing, if you're developing locally...
A
I've looked at it and I've discussed it and I've demoed it, but I haven't done anything more than that, Vick, because it is one of those things that I too was looking at. You know, if you're going to go portable environments and you need to deal with some of this stuff, the Open Service Broker is kind of nice for those services, and so it just goes a little bit further to kind of deal with some of the state stuff there, yeah.
D
I think, in order to get to the problem that the Open Service Broker solves, you have to solve a lot of other problems first, and have mechanisms for understanding how developers are going to, you know, develop in the first place and share their environments, and things like that. So it's probably down the road a bit; first come those others.
A
that I know of, and I know it's going to be debatable. Yeah, different organizations and different people will do different things, and sometimes different things in different places, like whether you have staging and QA environments and production, and how you separate that. I mean, that can be just a world of difference depending on your organization. So I don't know that there actually is a best practice.
A
I know one practice that I've seen that's pretty common, though: if you're using a CI system and you want to go spin up your app, actually just having one Kubernetes cluster that you're running the app in, and then creating a new namespace and sticking the app in there, and then testing it, and maybe cleaning up the namespace afterwards. I've seen this happen for just doing a CI run, and then it's torn down after the CI run is done.
A
I've also seen it where a pull request is mapped to a namespace, and so for the PR, that namespace is constantly updated any time new content hits the pull request, and then you've got one place you can always share; and then, when the pull request is closed or merged or whatever, it's cleaned up. I've seen kind of both of those workflows for CI-related activities. But then for developers, I've seen, you know, just Minikube, and I've seen some of them have their own clusters.
D
Yeah, I think in the ideal case, everybody wants to be able to just run everything locally and then commit, and then have that CI process that you describe, Matt; and, you know, if it's a GitHub flow, each PR having an environment there that the reviewer can look at. So that, I think, is super useful, but still, on that local side, you kind of depend on the fact that everything can run locally for you.
A
And there's another concern with Minikube that was brought up a little earlier in the chat, and I'll just read it right here: I guess Minikube does not run Windows containers; it runs Linux containers. And so, for anybody who doesn't know, Kubernetes, and I don't know if it's GA or beta, can run Windows containers. Yeah.
A
Beta, okay, thank you. But it's that ability to do Windows, because now there are Windows containers, so you can have your Windows applications running in Windows containers on Windows servers. Kubernetes supports that as a beta feature, but Minikube doesn't let you do that locally. So, if you're going to do development on that, you've got to be somewhere else that supports Windows. Yeah.
A
So I'm going to ask: if folks even have stories of non-ideal situations and things you learned, please share those. I'll share our non-ideal situation, at least the one I went through. The way I even looked at it is: I got credentials to public clouds to go create my own clusters, and then I got access to the code, which of course had Helm charts and configuration files and things like that, and could go run free. But I also knew Kubernetes coming in, and I'm not sure what it takes to onboard folks to Kubernetes, which is kind of a hard problem.
A
Right now, there have been a number of things written about Kubernetes being hard and complicated, and then there are some companies, I want to say Stripe and some others, who've talked about moving the complicated Kubernetes bits away from people and giving them a simpler interface. I think in Stripe's case, they were talking about cron runs: coming up with a simpler interface and then turning that into cron jobs behind the scenes for people. But there's stuff like that going on to try to simplify it, as well as the kind of hard hand-holding that takes time and docs. If folks have better ideas and how they did it (what I've gone through is very, very messy), I'd love to hear it.
C
Am I coming in clearly right now? Crystal? All right. So this is actually one of the real goals of ksonnet: it'll democratize the environment and allow experts to do expert things, but allow neophytes to do pretty much everything else, which is all the day-to-day stuff. So this is why we are putting in the work on that: to remove a lot of the complexities of Kubernetes beyond what a normal operator would have to see day to day.
A
So, just out of curiosity here, right: so with ksonnet, and you can go to Skaffold and draft, you've got the config files, and you can just start running your application and then iterating on it, right? Because somebody else who's working on the application may have already created your Kubernetes file, or your Helm chart, and things like that.
A
So you can go iterate on your application, but there's also that onboarding of: how does somebody go get a Kubernetes environment to work in and get access to, and then what kind of access do you give them to that development cluster, right? So there's all that stuff that's kind of maybe sidestepping the tooling, too, as far as onboarding and what assumptions they need to know about Kubernetes.
D
I think there's a certain baseline of knowledge you have to have. I don't think you can avoid Kubernetes 101, no matter what level of abstraction you can provide to your developers. But then, definitely, I think, pointing them towards, I guess, the pieces of what is in the code that they need to understand and look after, and then giving them a way to iterate on those things. I've found that giving developers a way to see their changes and understand, like, through screwing things up and changing things,
D
you start to understand how that system works. More so, giving them the tools to iterate and pointing them in the right direction to change things. Like, you know, I think everybody, if they pick up a new project and they want to, you know, add a new feature, their first step is just to go and try to figure out: how do I add a log statement somewhere and make sure that I am actually changing those things?
A
So I'm going to get maybe a little opinionated here: we've got to make it simpler. There have been a number of posts talking about how things are complicated, and I even found the blog post at Stripe. If you go look at it, they built their Kubernetes cluster to run cron jobs, and they don't actually open up Kubernetes cron to people. They actually have...
A
If you look for the heading "Making cron jobs easy to use", there's a section on what they're doing, because they don't want their developers to have to learn Kubernetes, or even go through Kubernetes 101, or learn the different types of controllers. They're trying to make it simple to do that, right? And that's, you know, where with other platforms it tends to be a little simpler than Kubernetes.
A
Say it takes some eight hours to get through that. That's eight hours times thousands and thousands and thousands of people, and so that's what the enterprise is going to grapple with, and so that onboarding is actually a huge cost, right? And so Kubernetes is hard; whereas if I went to Cloud Foundry, I could probably do well on, you know, a 101 "deploy an app" for a developer here in a couple of minutes. They can get going on it.
C
But Matt, Cloud Foundry is looked at more as a PaaS. Kubernetes is a platform of platforms, which I would liken more to Linux and a Linux distribution. So, but we do expect our developers, if they're deploying to Linux, or if they're deploying to Windows, to have at least a bit of understanding of the underlying platform, and we're going to have to with Kubernetes, unless we're going to relegate it to PaaS status, which isn't going to happen. But...
D
At some point, they're going to have to start learning some of what they're changing, and so I don't think we'll be able to get away from that. And to echo Brian's analogy to Linux: I didn't learn Linux when I showed up at my first job using Linux. I slowly kind of chipped away and learned little pieces of it as I got there. I think over time we'll see more of the case where you've seen little chunks of Kubernetes, so it's kind of in the head space a little bit more, right?
D
Right now, we just have this huge leap from zero to sixty, and I think making that onboarding, and the knowledge transfer of that 101, as easy as possible, and having a well-known path to get to, not proficiency, but understanding what you're changing in some way, shape, or form, is going to be really useful and necessary for a while. I don't think we can just skip over that. No.
A
But we're also getting into kind of the separation between app developers and app operators, right? Most Node.js developers don't have a clue how systemd works on Linux today, and they don't need to, right, to do any of their app stuff, ever. In fact, they can even deploy it to a lot of places and not have to deal with that, and so I'm hoping we start to see more tools built on top of Kubernetes.
A
How can we automate as much of that as possible and make the onboarding to it so simple that app developers only have to worry about their business logic, and for a lot of the rest of the stuff there's tooling and other things that make their life simple? Because if we ask them to learn a lot, that slows them down, because they're now saying: I'm not going to touch my business logic, I'm not going to go learn this other stuff, right, and deal with getting my features and stuff done.
A
I'm going to go learn something that's not related to that; then I'm going to go operate it and deal with it and dig into it and go write code and deal with the YAML and all this other stuff, because it's hard, right? And so, you know, it's one of the reasons for all the SaaS services these days, right? It's because I want to focus on my business logic and not on all the other parts, so I can just focus there.
A
It's a huge thing. If you go poll companies, they're interested in developer velocity and flexibility and things like that, and to get velocity, you've got to focus on your business logic, not the other stuff. And so, if you've got automation that can do that, so you don't have to, that's a huge win, and that's an easier onboarding experience. Well...
D
Why can't they do that today with Kubernetes? It's the same thing, right? If you mapped the entire, you know, post-code-commit process, then you can do that with Kubernetes, and you separate the operators from the app developers, and you're in the same world that you were in, you know, before: you write your Node.js stuff, you run it locally, you write some unit tests, you commit, and then it goes off and goes into a pipeline that you may or may not care about. I think that's still a viable world.
D
Why do I care where it's being deployed? That's one thing. Like, the way you deploy is you commit, you know, for most places, and then it goes into a review process, then goes into a QA system, and that system does some automated tests; then there's some manual validation and testing, and then it's pushed down to production, and...
A
...Travis CI, and deploy to Heroku from there, and an app developer, not needing to know much else, can make all of that work. But if they've got to go deal with Kubernetes, it gets far more complicated. So what I'm just saying is: there's a space where we can still make more tooling to make stuff easier for people, right? Because right now, I go to Travis, I go to CircleCI, I can have my Node.js app, and I can easily deploy it there with simple APIs and add-ons.
A
But if I'm going to go do this with Kubernetes, now it's far more complicated. You have to have an operator, whereas with Node.js I may not even have to have one, because the automation and the default templates and everything are there to just tell me what to do, so I don't have to think. But with Kubernetes, there's still more knowledge somebody has to deal with; you're even saying you have to have an operator, and there are a lot of people who don't want to have to deal with that, right?
A
So how do we make it simpler for those cases where it needs to be simpler? Which makes the whole onboarding simpler, and it's more shareable, because it's not different companies coming up with one-off onboarding; it's general tools that can now be shared between companies, so each company doesn't have to go figure it out, right? So...
D
Are you advocating that there should be this sub-PaaS on top of Kubernetes that we can all agree on, that is that easy layer? Or are you saying that there should be incremental pieces that can then be built into a PaaS? Because I think the missing piece is... I don't think adding a bunch of extra incremental pieces on top of Kubernetes that make building a PaaS easier gets us there, but if there was a PaaS... it sounded like you were very much advocating that.
A
Because there are different ways. I mean, draft is not a PaaS, but draft replicates a lot of what PaaSes do, by auto-detection and generating things for you, right? It's not exactly a PaaS, but it gets along the same lines, and it uses Helm and other stuff to deploy and operate it, so it gets into some of those similar ideas. What I think is that that space of making things very simple, easy to understand, and easier to follow, don't-make-me-think, needs more solutions.
A
I don't want anything to be baked into Kubernetes itself, and I don't know the right answer, and there's probably more than one, just like we have the vim versus Emacs debates, right, versus Atom, versus Visual Studio Code, versus whatever. Oh yeah, thank you for the time check, by the way. I think it would be nice to see tools that went into that space and tried to solve those problems, and then we get to see what gets uptake.
A
It's all about making that onboarding and that experience easier. So thank you. Yes, we are after time, and so I'll call time on the meeting. Thank you, everybody, for coming, and we'll be back here same time next week. So thanks, everyone, for a lively discussion, and the recording will be online shortly. Thanks, everyone.