From YouTube: Kubernetes Office Hours (West Coast Edition) 20180516
Description
Join our monthly live stream where Kubernetes experts answer user questions. Join us in #office-hours on Slack, or post in our question thread (see below):
Info: https://github.com/kubernetes/community/blob/master/events/office-hours.md
Question thread: https://discuss.kubernetes.io/t/office-hours-coming-up-on-wed-may-16/112
A: For those of you listening in the channel, let us know how the audio and video sound. I've noticed this morning that the first few seconds stutter and then it gets going pretty well. So, alright, it is May 16th, 2018, and this is the Kubernetes office hours for the West Coast. We do these sessions twice a day, one in the morning for the Europeans and one in the afternoon, so welcome everyone to today's Kubernetes office hours.
A: Awesome, all right. While we go, users, feel free to go ahead and start asking questions while I go over the ground rules real quick. First of all, this is a judgment-free zone; everyone has to start from somewhere. If you see someone asking a question in the channel, remember there's no such thing as a dumb question. We want to encourage people to ask questions and learn this stuff; that's why we're doing this. We will do our best to answer your questions.
A: Please be cognizant that we don't have access to your cluster. So if something is really deep and technical, and there's always some firewall or networking thing at work or whatever it is, we might not be able to help, but we can at least try to get you to a place where you can figure out where the problem lies. So please be cognizant of that; we will try our best.
A: Panelists, you're encouraged to expand on your answers. We're not just here to help people with their questions, but also to explain concepts. So if you have any pro tips or any stories from doing this sort of thing in production, please toss them in there; same with those of you in the channel audience.
A: You can help us out by pasting URLs too, like docs or blog posts. If you see something that you're familiar with, and you're familiar with the documentation, feel free to whack it into the Slack channel. What we like to do is collect the URLs of all the resources we've shared and put them in the notes, so that people are aware these resources exist, and that helps bring eyes to that documentation.
A: We are monitoring live on discuss.kubernetes.io, which is our shiny new user forum. We have a thread there, which I will paste in the channel now, and we're using this live thread to keep track of questions. If you have a question on Stack Overflow that you want to remind us about, you can put it there, and it's also the place where we will follow up with you.
A: ...if we can't answer your question. This is a new resource for the Kubernetes community and we're looking forward to using it to help gather a whole bunch of information; we'll put all the videos in there, and there's all sorts of good stuff in there, so keep your eyes out for that. You can help us out by tweeting, spreading the word, paying it forward, or letting people know that we exist, which is always fun. All of our sessions are live, recorded, and available on YouTube, going all the way back.
A: So there's a playlist there if you want to subscribe. If you're watching the office hours at work or something like that, and you feel there's something you can tell us that would make it better for you, as far as helping your team out at work, just let us know; we're always open to feedback. And if you want to sit in on this panel, this panel is staffed entirely by volunteers.
A: So if you want to share a story or you want to pay it forward, the commitment is only one hour a month, and if you can only come to one or two sessions, we could work something out. If you're looking for a way to give back to the community, sharing your expertise is always a good way to do that. And if you do, you will earn the highly coveted Kubernetes water bottle. Mario, did you have that next to you there, I think?
A: Yes, a very, very rare item there, so we're always looking for help. And, lastly, we will be raffling off another highly coveted item, the Kubernetes t-shirt. What we'll do is keep track of all the people asking questions and, at the end, Geoff's highly sophisticated Python script will pick a random winner from the lot of people that have asked questions.
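The actual script wasn't shown on stream, but picking a random winner from the question-askers only takes a few lines of Python. This is a hypothetical sketch; the function name and the example handles are made up for illustration:

```python
import random

def pick_winner(askers, seed=None):
    """Pick one raffle winner from the people who asked questions.

    Duplicates are removed first, so asking several questions
    doesn't improve anyone's odds.
    """
    unique = sorted(set(askers))   # dedupe, stable order
    if not unique:
        raise ValueError("no one asked a question")
    rng = random.Random(seed)      # pass a seed only for reproducible demos
    return rng.choice(unique)

# Example draw from a hypothetical list of this session's askers
winner = pick_winner(["mmlack", "AndresTC", "saqib", "pmwebster"])
```

Deduplicating before the draw is the one design choice worth calling out: without it, the raffle would silently weight toward whoever asked the most questions.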
A: If it's an off time and you have a question and there's no office hours, you can put it in there and then we'll get to it eventually. We do run these monthly, on the third Wednesday of every month. So with that, unless anyone has any questions, we are ready to get started, and right off the bat it looks like we have someone trying to run 18,000 cron jobs. So who is involved? Is this the question we want to tackle first, or is this a meaty one?
B: The original question was: someone is trying to schedule a cron job that runs every 15 minutes, and it looks like it schedules every 30 minutes or so. But then, when we dug a little deeper, we found out that that's one cron job out of 18,000 cron jobs on that cluster. Okay, that's kind of where we're at.
D: And I'm almost certain that the reason that's happening is that the cron job controller, more or less, goes through all of the available cron jobs. So I'm willing to bet that that particular cron job is not the only one of theirs that is actually running late, simply because the cron job controller doesn't get to it as frequently as it should.
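For reference, the knobs that matter most when the controller falls behind live right on the CronJob spec. A minimal sketch, using the batch/v1beta1 API that was current at the time; the name and image are placeholders:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etl-job-example            # placeholder name
spec:
  schedule: "*/15 * * * *"         # intended: every 15 minutes
  # If the controller falls behind, a run more than this many seconds
  # late is counted as missed and skipped entirely.
  startingDeadlineSeconds: 300
  concurrencyPolicy: Forbid        # don't stack runs if the last one is slow
  successfulJobsHistoryLimit: 3    # keep object counts down on a busy cluster
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: etl
            image: example/etl:latest   # placeholder image
```

At 18,000 CronJob objects, the history limits and concurrency policy also keep the total Job/Pod object count from compounding the controller's backlog.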
A: Let's frame this question: is that a lot of cron jobs? Yes, okay. Let's put everything in scale perspective here. What would be something that would be blocking cron jobs? Is it that you run out of master capacity, or does the cron job controller clobber itself, I mean?
D: I don't think so, no; it's controller code, so I don't think it clobbers itself. But the result is that you can end up with a lot of cron jobs running late, right. Now, for this particular user's use case, the reason for 18,000 cron jobs is that it's ETL, that is, extract, transform, load. In other words, they're taking a whole bunch of data and pipelining it from one place to another, making changes to it on the way, presumably for a large variety of clients.
D: I would also argue there's a second reason why you don't want to use the built-in cron jobs for this. I would say definitely take this to SIG Apps; this is an interesting performance problem. I bet this would be the first time they've dealt with a use case with over 10,000 cron jobs, and it would be interesting to fix the performance problems there. However, there's another reason why I personally would not use cron jobs for a large-scale ETL use case, which is that there's insufficient tracking.
D: When you have ETL cron jobs, some of them need to run once within X period. Some of them are actually what we call backfill cron jobs, as in they need to run once per hour, and if you miss several, you need to backfill the hours that they missed. Others get superseded by later cron jobs, and so on. There's a whole complicated set of logic there that Kubernetes isn't prepared for, and you need to know when certain jobs missed or didn't run.
D: If you've got 18,000 of them, you actually need a good console to tell you which ones did not run and why, and the existing built-in cron jobs are not going to supply you with that functionality. When I was at Mozilla, we actually designed a separate system for this, which I believe is still out there and open source, specifically to deal with this same kind of issue.
B: I don't even necessarily think you need to develop it yourself; like Josh said, Mozilla has their own open source thing, and there are other things that fill this gap. I mean, in the background GitLab is using Sidekiq, and you can do stuff with Celery queues. That gives you a much better management interface for jobs that you need to make sure run on a certain schedule. I was going to say a queueing system, you know, Celery, something like that.
B: I don't think Kubernetes cron jobs are going to be your best bet for that, so it might take some re-architecting. That's not to say Kubernetes gets a pass on the platform side, saying, oh, it shouldn't be able to handle these; ideally it would do it well, but that's a lot of cron jobs, for sure.
A: All right, we will move on, and if at any time I miss any of the questions you put in the channel, please feel free to flag it. I saw one here from mmlack, who says: just want to know when to use API aggregation versus CRDs. Looking at writing, quote, operators for databases, and API aggregation seems, quote, superior, but everyone seems to use CRDs for that. Question mark.
D: Yes, I mean, a CRD is basically packaging, right? When you look at a CRD, you're like: hey, I'm going to take, at a minimum, a custom controller, potentially an API server, potentially a custom scheduler, and some syntax extensions, custom objects for kubectl, and so on, and bundle them all together into a package that can be deployed all at once. That's really what a CRD is. So it's not really an either/or, I mean.
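As a concrete anchor for that packaging point, the CRD object itself is small; it just registers a new API type that a custom controller then acts on. A minimal sketch, using the apiextensions v1beta1 API current at the time; the group and kind are hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
    shortNames:
    - db            # enables `kubectl get db`
```

Everything beyond this registration, validation, reconciliation, failover logic, lives in the controller you ship alongside it, which is the real substance of an operator.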
A: Okay, hopefully that gives you a little insight, mmlack; sorry, I don't know what your full name is, but if you have a follow-up, Marcus, if you have a follow-up question, just go ahead and whack that in the Slack channel as well and we'll get back to it. If at any time we mention something that's not clear to you, viewers, absolutely feel free to ask it in the Slack channel. If you don't understand anything and you want us to go back and explain, we're happy to do that.
B: I saw a GitHub issue that was filed and closed relating to it. It's obviously a scheduler construct, but what services have to do with scheduling of a pod, in terms of placing it, would be interesting; possibly something with configuring the service to point to another... if you have many, many pods creating many endpoints that the service must be updated with. But I wonder if someone knows more than I do.
D: I believe you always want to use the shared informer; I believe it's a huge efficiency gain for Kubernetes overall. But I haven't worked on developing anything with the informers, so that's based entirely on troubleshooting certain pull requests, and I'd like to get another opinion on that.
B: You want to tap into that and just copy into the volume itself, so making sure the volume is available to that init container would be really your only thing, and then having it execute a command to do that. I've done fancy things like encryption; in your case it would be kind of copying an archive, decrypting, and then extracting the archive, or whatever you're trying to do. That'd probably be the easiest way. And then, with it, he mentioned he's using...
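The pattern being described, an init container that prepares data in a shared volume before the app starts, looks roughly like this. The image names and paths are placeholders, and the decrypt step mentioned above would slot into the same command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-seed-data          # placeholder
spec:
  volumes:
  - name: data
    emptyDir: {}                    # shared scratch volume
  initContainers:
  - name: fetch-data
    image: example/fetcher:latest   # placeholder image carrying the archive
    # copy the archive into the shared volume and extract it;
    # a decrypt step would go between these two commands
    command: ["sh", "-c",
      "cp /seed/data.tar.gz /work/ && tar -xzf /work/data.tar.gz -C /work"]
    volumeMounts:
    - name: data
      mountPath: /work
  containers:
  - name: app
    image: example/app:latest       # placeholder
    volumeMounts:
    - name: data
      mountPath: /data              # app sees the prepared files here
```

The init container is guaranteed to run to completion before the app container starts, which is what makes this safe for seeding data.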
B: Helm? Awesome, good job. I mean, that really just makes it easier. You edit your templates to allow a functionality where you're running an init container as well, and then you can set the key parameters, maybe variables, around where to get that data, or how to authenticate to the source to get that data, and so on, and pass those in as values when you launch your application. That's how I would go about it.
B: Earlier today I got one from the kubernetes-novice users channel. JTF says: I have a strange issue. We are using resource limits to cap memory usage for a pod, but the container inside the pod is actually allocating more memory. It seems that it does not respect the actual memory limit set. Any idea what might be going on? It's Java.
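The panel's answer here didn't survive in the captions, but this was a well-known Java-in-containers problem at the time: JVMs before 8u191 sized their default heap from the host's total memory rather than the container's cgroup limit, so the limit below ends up enforced by the kernel as an OOM kill instead of being respected by Java. A sketch of the limit plus the JVM flags that existed then; the names and sizes are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app                   # placeholder
spec:
  containers:
  - name: app
    image: example/java-app:latest # placeholder
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"              # the cgroup limit the kernel enforces
    env:
    - name: JAVA_OPTS
      # Java 8u131+: make the JVM read the cgroup memory limit.
      # On Java 10+ (and 8u191+), container awareness is on by default
      # via -XX:+UseContainerSupport.
      value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
```

Setting an explicit `-Xmx` below the container limit is the other common fix, since it works on any JVM version.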
A: Are they all in one place? And that's a rhetorical question. That might be a great blog post; I wonder if we can get a Java person to collate those together, because it seems like for a lot of these the answer is, well, Java 10 will fix all of this. So okay, alright. AndresTC, welcome, asks: any recommendations for running services with direct server return inside the cluster? Is running pods with host networking the only option? We are using...
B: With Kubernetes, the main point of ingress is meant to be the focal point for traffic into our applications, right? Direct server return is basically the concept of a load balancer offloading a client to talk directly to a back-end service, by basically sending it a direct path to get to it. I'll let Bob take it from there.
C: I know you can do some DSR settings with things like kube-router, but you have to be running BGP, and use BGP with your top-of-rack switch. And kube-router also supports ECMP, so that way, if you have a pod running on multiple hosts, it'll actually be smart and only direct the routes to the hosts that are actually running the service's pods.
C: That's mapped behind the scenes. That's probably your best bet, I think. Calico supports DSR now too, but I'd take a look at kube-router.
A: ...global container networks on Kubernetes, from DigitalOcean. Hey, you know what, having these talks available to us is awesome, and they are available almost the same day that they give the talks. That's really handy; there are literally hours and hours of stuff there. Those of you that are listening, if you click through to the CNCF channel, there's an entire playlist of all the talks at KubeCon, so it makes for good listening and watching. Do we have any more questions from the audience now? Jeff, I feel like...
A: All right, let's go grab this one from the back lot there, Jeff, while the live audience figures out more questions.
B: Is he frozen? Okay, I actually have a really quick one that came in shortly after the last office hours, and that is around dev, QA, and prod environments for deployments: separate namespaces, separate clusters, or a mix of both? Someone trying to understand what's the best approach for this. Jeff actually responded quickly from the logs here, and he basically said it depends on the app, how big it is, and whether it could be separated at a lower level with underlying components.
B: His kind of generic answer was: dev may be completely separate, while QA, or maybe staging, and production are probably namespaced the same, or kind of in the same subnet, or however you want to do that. I think this is a really open-ended question; it's very dependent on what you're trying to do. I think the core answer, from what I've seen and what most people attempt...
B: ...is some separation between prod and other environments, and creating a place for a pipeline to really come to fruition with your CI/CD and what you're trying to achieve. So I think definitely having a dev is a fantastic idea. I've seen people that just have a dev and a prod and that's it: just a playground and then production.
B: Now, at my company we actually separated literally at the iptables, you know, layer-3 level, which is a little bit crazy. I mean, even DNS is separated; they can't talk to the same DNS server. They have their own individual DNS servers, which is a little bit ridiculous, and I...
B
Don't
know
if
I'd
recommend
that,
but
you
definitely
want
enough
separation
that
you
feel
comfortable,
saying
that
I
can
ship
my
app
in
production
and
know
that
it's
you
know
running
in
the
same
kind
of
parameters,
but
also
nothing
you're
doing
in
your
Dever.
Staging
environments
is
going
to
interfere.
A: So Saqib asked this morning: I have a general question regarding on-prem deployment, where cloud services aren't an option. I want a DB, HA MySQL, in a Kubernetes cluster. How can I achieve this? I know that's a general question. And he says: starting group replication or MySQL cluster in containers is fine, but I get bothered by volume storage. Should I have local volumes on each node, or should I have some network storage? And people started dropping in some hints.
D: So for MySQL, you've got a couple of options. First of all, you have to decide how you're dealing with database redundancy. If your main choice in MySQL is single-master replication, you probably want to use something like Vitess to manage replication and failover for you. Or alternately, I know a lot of people use MySQL with Galera on Kubernetes, and Galera is multi-master replication, which has the advantage that you don't necessarily have to worry about master failover.
D: It does have the disadvantage that you need more MySQL nodes per database than you would for a solution like Vitess, so part of that's also going to depend on how many databases you have and how heavily loaded they are. If you have a lot of lightly loaded databases, then single master with Vitess is probably going to be the way to go.
D: If you have fewer and larger databases, Galera is probably going to be the way to go. In terms of storage, all of this runs on stateful sets. Stateful sets make the assumption that the storage you are putting the database on is somehow available to new nodes if your pod gets moved. How that's made available is kind of up to you.
D
Minoan
terms
of
our
using
network
storage,
our
using
some
kind
of
clustered
storage
like
Gloucester
or
rook,
that
that
handles
copying
across
all
of
the
nodes
or
whatever
are
using
something
like
port
works
that
handles
copying
on
the
file
system.
All
of
these
are
viable
storage
options.
Depending
your
situation,
I
will
say
that
for
my
Postma
skill
based
solution,
sometimes
I
cheat
and
just
use
local
storage
and
count
on
PostgreSQL
replication
for
recovery
depending
on
the
setup.
D: In that case, you're basically assuming you're going to lose the storage any time the pod moves. However, there's a risk inherent there, which is that if you lose the entire Kubernetes cluster at any point, then you need an alternate database recovery mechanism, because there's no guarantee that when the cluster comes back up, the database pods will get scheduled on the same nodes and thus have their local storage available to them.
D: In the future, when we've got topology-aware storage scheduling available, it might be possible to make it a lot more probable that database pods will start up on the same local nodes based on storage availability. But that's not a stable feature yet, and I would never rely on it 100%. I would have something like an offline cold backup that could be used to restore all the database systems, or just use clustered or network storage available to all of it.
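Whichever storage backend is chosen, it plugs into the stateful set through a volumeClaimTemplates block, which provisions one persistent volume claim per replica. A minimal sketch; the names, image tag, and storage class are placeholders to adapt to the local, network, or clustered storage discussed above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql                      # placeholder
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7           # placeholder version
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:            # one PVC per pod: data-mysql-0, data-mysql-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # placeholder: swap for your storage backend
      resources:
        requests:
          storage: 10Gi
```

The storage class is where the trade-offs above land: a network or clustered class survives pod rescheduling, while a local class relies on the database's own replication for recovery.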
B: ...you don't have to worry about a distributed storage platform and everything that goes along with it. Not to say that they're bad; I think a lot of the redundancy and replicas that some of the new distributed storage platforms provide can be very good for non-shared storage per se. And when I say shared, I mean live file systems mounted to multiple pods; that particular feature is what I think needs a lot more effort and testing. So that's my two cents on it.
E: My name is Hippie Hacker and I've been doing some work for the CNCF, for the projects the CNCF is curating, to integrate all of the different projects, and most recently I've been shifting into snooping into the Kubernetes API. That's under the CNCF GitHub. Hey guys, I'm new and digging into testing and a couple other portions of the community. Oh cool.
A: Again, happy to have you, thanks. Alright, we're actually waiting for some more questions from the channel. I'm going to go ahead and pop into kubernetes-users and grab more, although the channel does not seem to be as happening as it was earlier this morning; we think everyone's at a late lunch. That's okay. We just finished talking about databases and stateful sets; I don't know if you have any strong opinions on that.
A: Let's see. All right, I'm going to head back into the backlog then, from Jeff, who's still having internet problems. Forster would like to ask: anyone know what happens if you configure a deployment with maxUnavailable: 0 and a pod disruption budget with maxUnavailable: 1? Which one wins? I'll go ahead and post the question in the channel so you can see it exactly.
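For context while that gets posted: the two settings govern different mechanisms, so neither strictly wins. The deployment's maxUnavailable applies during rolling updates, while a PodDisruptionBudget's maxUnavailable applies to voluntary evictions such as node drains; each is honored by its own controller. A sketch of the two objects side by side, with placeholder names and the policy API version current at the time:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # placeholder
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxUnavailable: 0            # during a rollout: never drop below 4 ready pods
      maxSurge: 1                  # so the rollout adds a pod before removing one
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:latest  # placeholder
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                    # placeholder
spec:
  maxUnavailable: 1                # during a drain/eviction: allow one pod down
  selector:
    matchLabels:
      app: web
```

With maxUnavailable: 0 on the deployment, maxSurge must be at least 1 or the rollout can never make progress.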
C: Well, CNI is just a very easy-to-use plug-in networking model that's used by Kubernetes, and I believe Mesos as well. Docker uses the Container Network Model instead, but essentially you can run your networking just as a daemon set, and your pod will get a veth pair that's plugged into the CNI driver that manages the actual networking under the hood.
C: As far as how it works cluster-wide, when it comes to deployments, it's usually just a daemon set; it depends on which one you're going with. I don't actually use Flannel myself; I use a couple of other ones. Flannel, I believe, uses VXLAN, which is MAC-in-UDP encapsulation. Is there anything in particular you want me to dive into?
B: Sure, yeah. I mean, this is basically: how do you build your application to accept the sort of ingress coming from your core ingress controller? The majority of people, when they ship their application, it's a pod with the application running in one container and then an nginx container running in front of that.
B
Obviously,
you'll
want
to
be
careful
with
things,
especially
around
load,
balancing
and
try
and
do
carry
an
IP
address
source
like
the
address
back
to
the
actual
application
that
can
give
a
little
bit
tricky.
So
definitely
some
something
about
there,
but
nginx
is
definitely
a
very
safe
bet
for
what
you're
trying
to
do.
I.
A
Yeah
any
other
any
other
comments
on
enforcing
over
to
http
hangover,
niro
ass.
No
luck
would
check
service
affinity
and
fortunately,
no
we're
gonna
have
to
do
our
homework
on
that
one.
As
far
as
I
know,
and
then
vice
key
I
guess
is
clarifying
I
guess
the
question
is
how
to
302
direct
everything
coming
in
from
port
80
to
HTTPS
something
so.
B
Dirty
I,
totally
derp
they're,
really
an
awesomely
hard
yeah,
there's
Wow,
so
continuing
what
I
was
saying.
You
can
actually
do
this
at
the
application
level
or
you
can
do
this
at
the
be
edging
doors
controllers
right
and
and
that
the
edge
intercourse
controller
is
the
best
place
to
do
it
so
that
you're
not
wasting
cycles
going
sending
traffic
back
to
your
applications
for
it
to
then
make
arriving
to
some
growing
decision.
B: So look at the ingress controller docs for nginx and you should be able to see it; I believe there's an annotation that you can set in the ingress definition to do this. Sorry about that, I went off on a tangent there, but the other thing here is that a lot of the ways SSL, or TLS, is handled can be at the edge, where your edge ingress actually handles TLS to the client, and then it's unencrypted getting to the application itself from the edge ingress controller. But a lot of people also like to forward it all the way through to the application, and that's where your application-specific ingress can be useful as well.
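For the nginx ingress controller specifically, the redirect being asked about is a single annotation on the Ingress object; with TLS configured it defaults to on, and it can be set explicitly. A sketch, using the Ingress API version current at the time; the host, secret, and service names are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress                # placeholder
  annotations:
    # nginx ingress controller: redirect plain HTTP to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - example.com                  # placeholder host
    secretName: web-tls            # placeholder TLS secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web         # placeholder service
          servicePort: 80
```

Doing the redirect here, rather than in the application, matches the panel's point about not bouncing traffic to the backend just to make a routing decision.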
A: Yeah, sorry about that. Both Jeff and Bob, their town had a main water break, and I guess it's causing all sorts of infrastructure issues over there. So we're going to start wrapping up here soon and do the raffle. I see a few people typing, so why don't we give it a few minutes and then move on from there.
B: I don't really have much to add on this. If there's a high limit on the number of endpoints where it starts putting on a strain, I don't really know where that strain would come in; I have never dealt with that. So I would say maybe Stack Overflow, or maybe a discussion; that'd be a good discussion, I think.
A: Unless, you know, we'll take one last question if someone has an easy one, and then we'll go ahead and start to wrap it up. So let's see who's asked questions here. So pmwebster asked a question, and Dylan asked a question here. So we're going to run the raffle now. Mario, do you have a... yeah?
B: On this question, you know, lower level... this is also a good discussion. I suppose the lower the level you separate at, the better you're going to be from a security perspective. From my network engineering background: definitely, setting boundaries around the layer-2/layer-3 network is going to be better from a security point of view. You want to try to implement security at as low a layer as possible, so yeah.
A: So thanks to the following companies for supporting the community with their developer volunteers: Amazon, Bitnami, Giant Swarm, Heptio, Loodse, Northwestern Mutual, Packet, Pivotal, Red Hat, Weaveworks, University of Michigan, VMware, and the CNCF. That's for you; happy hacking, Hippie Hacker. Soon we'll be holding a lot more raffles with different Kubernetes spinners and all sorts of great things, but for today we're going to do t-shirts. Feel free to hang out in #office-hours afterwards.
A: We definitely owe you a follow-up on... what was the one? The check-service-affinity one, I think, is something we should follow up on. I think we'll definitely discuss that; I'm having dinner with the fellas here in a little bit, and we'll figure out how to get you a better answer for that. And with that, Josh, do you happen to have our winner? And you're muted. mmlack is our winner! Alright, so mm...
B
He
is
yeah
he's.
A
Here,
I
just
won
a
t-shirt.
I
will
PM
you
right
after
this
and
give
you
a
code.
There's
a
wonderful
CN
CF
store
where
you
can
get
this
fine
quality
item
here,
highly
coveted
by
many.
So
congratulations
as
always.
We
do
these
on
the
third
Wednesday
of
every
month.
Please
help
us
spread
the
word.
If
we
weren't
able
to
answer
your
questions,
sorry,
we
will
be
doing
follow-ups
on
discussed
on
kubernetes
I/o,
which
is
here
I'm
pasting
the
link
on.