Description
Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Community Links: https://www.theurlist.com/aspnet-community-standup-2020-03-10
Okay, it's because of the other person. My house also has a meeting at the very same time, and our apartment's very small; I needed to go to a place where I had a room to talk. But I've been — it's a total ghost town. I've seen like two people total, and I feel like it's okay because there's such a small number of people, but I still don't feel good about being here. I would be at home, but...
B
Well, that's cool. What are we talking about? We were talking about jokes; John's kind of messing with the audio. Justin and I were talking about how — in case anybody isn't aware — Microsoft basically told everybody to go home and stay home, like don't come to the office, which has been great, I think.
B
OK, a conference speaker, so we'll have to see about that. Give it a try. One of my favorite things to do at conferences is to share a stage with Steve Sanderson, so that I'm clearly the mediocre one of the two of us, which is great. So, hey, I'm Ryan, I'm here with Justin; we both work on the ASP.NET team. We're going to be talking about Kubernetes today. This is going to be a continuation of a walk down a ferny forest path that we began at the last community standup.
B
Kubernetes has got pods. Pods are the unit of deployment; we're going to look at some more stuff to do with pods today. Kubernetes has got services. Services are sort of like the unit of networking in Kubernetes, so if you want to have something that's addressable by other things running in the cluster, services are the way that you do that. We talked about deployments and how they're different from pods.
B
Today we're probably going to get a little bit more to the stuff that is like: well, you should learn this, you should do this, you should think about this when you're deploying examples or when you're running applications. So that's basically the agenda for today. We're going to go a little bit past deployments as well.
B
We'll talk about more stuff when we get to it, and Justin's going to have some cool demos for us as well. So, I just talked a little bit about pods and containers and deployments, and there are some diagrams that I found that I like, that are good, I think, for explaining these things — just to sort of position us before we get back to where we were before. So you've got the master node, or master nodes.
B
In some cases those are sort of the controllers, and those things decide where your workloads actually get deployed. And then on every node — you can think of nodes as either physical machines or virtual machines — you've got the kubelet process running, which is sort of like a supervisor for all the work that Kubernetes is doing on that machine. And then there's this diagram; one flaw with it is it makes it look like there's one pod per node, all right.
B
Onwards. So, okay, when I was talking about meandering down a forest path, I just want y'all to know that I'm not crazy — that was a reference to what's on the slides, and everybody would have appreciated that if I were a better presenter. So now we're caught up, now we're caught up to my diagram. Thanks, Justin, for having my back. So we've got the master node, we've got our nodes.
B
Nodes are going to be a host for multiple pods. This diagram makes it look like there's one pod per node. That is a lie: there will be many pods per node in general. And then a pod can contain multiple containers. There are reasons why you sometimes want to have multiple containers in a pod, and there are a lot of reasons why you don't. Somebody asked a really good question the first time we talked about this content — one that I think is a good, important question: you generally put things in separate pods when they can be.
B
You probably would not put an application and its database in the same pod, for scalability reasons. Right? Like, you might want more instances of the application that talks to the database than you want instances of a database server. Some of the cases where you want multiple containers running in the same pod are more specialized, more advanced scenarios.
B
They share networking, so they have the ability to do networking with each other that doesn't leave the pod, they sort of collaborate on the set of ports and things that are available, and they can do inter-process communication. And then, what makes a deployment sort of different from a pod? Well, deployments are really just a wrapper around multiple concepts. Deployments add concepts like updating and rollbacks; they're a wrapper around a concept called a replica set, which is a different kind of Kubernetes object.
B
Replica sets represent a number of pods, and then pods are the things that actually get deployed, and then within those pods there are containers. So that's sort of our layer cake of how all these things work. With that, I'm going to jump to demos; we're going to be done with slides for a little bit. I have got a slightly different application than the one we looked at before — I think this one's a little bit better. I've got two services here, and this is going to help us demonstrate some concepts.
B
So what's going on behind the scenes? What can we see about this? Well, one of the things we can look at is the list of pods. I've got six pods running here: three back-end instances and three front-end instances. You'll notice that these names will map to these names — here, this one maps to this one, and so on — so the name of the pod becomes the host name.
B
But basically, services have logical names. Every Kubernetes object, for the most part, has a name, and the name that you give to the service is actually going to resolve with DNS. And there's a whole scheme of different names for resolving things in DNS that actually differs by namespace, which is a concept that we haven't introduced yet.
B
Okay, so let me answer — first of all, can people hear John? Yeah, they can? Okay, great, so I'm not going to repeat the question then, because I think everybody out there in radioland got to hear it, which is great. So, the two things that are different here. The reason why there's a metadata name is — remember, every Kubernetes object has a metadata section. Most Kubernetes objects — it might be true of all, but I'm going to say most and hedge my bets — have a name, so that they can be uniquely identified.
B
So if I come over to the command line and I say kubectl get service — and there are abbreviations for convenience — you can see that all these services have names, right? The front end and the back end have names. So you have to provide a name so that you can see it. The other reason why is you need a selector.
B
The reason why there is a selector here — and what this selector does: it's in the spec section, and anything that's in the spec section of a Kubernetes object identifies what the definition of this thing is. So metadata is informational, and spec is the definition. So what the selector is actually used for...
B
...is it's used to match which pods belong to this service, and the way that I'll demonstrate that is by doing this: kubectl get pods, and then I'm going to say -l — l stands for label — and then I can write a query and say app=backend, and that's going to get me just the back-end pods. How that's powered is, in my template for my pods — for my deployment — I've said: every pod that you create of the back end, give it the label backend. Now...
B
One of the comments that somebody had on our discussion last week is that using the same names for different things all over the place might be productive and it might be fun, but for somebody who's trying to learn this stuff it's very unapproachable, because you don't necessarily know what goes to what. So let's figure that out. If we gave all these things different names — like, if this was backend-pod, right — then all of my pod names here would be backend-pod-horrible-generated-name, right?
B
If this were backend-label — and the label is for selection and filtering — then you would basically go like this. For very detailed reasons that I don't completely understand, deployments need both matchLabels and a label on the pod spec. I don't completely understand why that is; I just know by memory that these two things have to be the same. This would look like this, so this selector — and it's the combination of both app and the value — needs to match one or more labels here. Hopefully, that makes sense.
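As a sketch of that point — the selector's matchLabels and the pod template's labels having to agree — a deployment looks roughly like this (names and image are illustrative, not the exact demo files):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend      # must match the pod template's labels below
  template:
    metadata:
      labels:
        app: backend    # every pod this deployment creates gets this label
    spec:
      containers:
      - name: backend
        image: backend:latest
```

With that in place, `kubectl get pods -l app=backend` filters to just these pods.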
B
Okay, okay — mechanically, what's happening is there's a DNS entry created in the DNS for the cluster, and that DNS entry says: take the hostname backend and map it to the IP addresses of all the pods that exist within the cluster. So, okay — hopefully that makes sense. Yeah, that's why you have this selector. You could do, like what I've done here, one service that maps to one deployment, but you could do something way more complicated than that.
B
If you wanted to, right, you could have a service that maps to multiple labels. So that brings me to a topic that I wanted to talk about, actually, which is: what is a service? What is the purpose of the service? And the answer is that services can do all kinds of things. They're there for networking, and they're generally for DNS.
B
The way I like to think about it is that DNS, in terms of distributed-systems programming and in terms of networking programming, is a little bit like a function call: it's your most basic, most primitive sort of abstraction that you've got to use, because it allows you to hide complexity. So, some of the kinds of things you can do with services: you can use a service to expose something to the Internet — that's what I'm doing here with type LoadBalancer.
B
You could use a service to expose something to the cluster, and you can do that with ClusterIP. So ClusterIP basically says: load balance this thing internally — I do round-robin DNS — but don't expose it to the Internet. You can also do some other things here. You can do something called a NodePort, which I don't see a ton of usage of; NodePort does a really interesting thing where it actually opens a port on every node.
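A minimal sketch of the two service types being contrasted (names and ports are assumptions):

```yaml
# ClusterIP (the default): addressable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
---
# LoadBalancer: additionally exposed to the Internet by the cloud provider.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```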
B
It is a lot like a backplane — it basically does a bunch of proxying for you. Okay, I don't know of a lot of good use cases for it; I just know that it exists. One of the spiciest ways that you can use services — and I don't remember exactly how to write it, but I know it exists — is you can define a service effectively for a static thing. You could define a service that maps to a different hostname, so you could assign a logical name to talk to some remote thing.
B
That's not even in Kubernetes. Let's say you have a DNS entry somewhere else for your Redis cluster — you have a Redis cluster that you've provisioned in your cloud provider — and you want to map it into Kubernetes using a service so that it's queryable with DNS inside your cluster. You can do that.
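The speaker doesn't remember the exact syntax; the Kubernetes service type for this is called ExternalName. A minimal sketch (the hostname is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ExternalName
  # in-cluster DNS lookups of "redis" resolve to this external host
  externalName: my-redis.example.com
```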
B
You can actually use services for that, to sort of just take control of how DNS works. There's also a whole scheme for how services work when you're going across different namespaces in Kubernetes, which is something that we haven't really introduced yet, that we will talk about a little bit later. So, we...
B
Let's talk about some more stuff we can do, and let's talk about deployments and some of the kinds of things that they can do, because we're now into the meat and potatoes of: okay, so Kubernetes has given me a place to deploy things, but what are some of the features that I actually want to use? So one of the things that we've done here that I haven't talked an incredible amount about is I've used configuration, and I've used an environment variable here. Environment variables are like one of those primitives: if you've done Docker Compose...
B
...or if you're doing anything Linux-oriented, chances are you're already pretty familiar with environment variables; they're just basic. I'm using this double-underscore convention, and double underscore in ASP.NET terms basically means ':' — it basically means separator. So — the colorization in VS Code is a little off, but you can hopefully see here that I'm reading from configuration and I'm saying App:Value. So setting APP__VALUE is kind of equivalent to doing JSON and saying App, Value, and then putting my text in here.
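As a sketch, the environment variable in the pod template would look like this (the name and value are illustrative):

```yaml
# in the container spec of the pod template
env:
- name: APP__VALUE          # double underscore = ':' separator in ASP.NET Core
  value: "some configured text"
```

ASP.NET Core configuration reads this as the key `App:Value` — the same as `{ "App": { "Value": "some configured text" } }` in appsettings.json.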
B
That's basically the equivalent, if you're familiar with our JSON configuration format. So I'm setting an environment variable here. One of the other things that I'm doing here that's kind of exciting is I've got a volume mount, and I've got a volume — which just seems like a bunch of jargon, maybe. I want to point out what this is actually reading from, and I want to sort of deconstruct all of this for everybody out there. So there's a volume mount — we'll start...
B
We'll start at the beginning, and let me highlight — because not everybody is a YAML person — let me highlight the thing that I'm talking about. So for my container I've got a volume mount. A volume mount basically says: take a directory, take a drive, and put it at this folder structure. So I've got a folder at /etc/front-end-config, and what I'm mounting there is the volume with the name front-end-config-volume. Pretty exciting stuff. So my container has got a volume mount — and where does that...
B
...come from? That comes from the pod spec. It's a template for creating pods, and in my template for creating pods I've got a volume named front-end-config-volume, which is the thing that I'm referencing here, and that volume is being populated by a config map. So let's talk about volumes real quick. Volumes are basically the way that you interact with disks and storage and things that are going to be persistent and stick around. So there's a million ways to do volumes there —
B
Well, maybe not a million, but there are different ways of doing volumes. You could map a volume to, say, Azure Storage, or S3, or whatever your cloud provider's sort of storage abstraction is. So let's say that you decide you want to run a database in your cluster. You probably care about that data sticking around.
B
So you probably want to have a volume that your database is going to use for its storage, that you can do backups on, and you can maintain, and you can redeploy the database and it's going to get the same storage and read the same data and things like that. "So the volume is what handles the abstraction of talking to different kinds of storage types — like you're saying, S3 or blob storage or whatever?" Okay, yeah, because...
B
"I was like, oh, that sounds more complicated than a Dockerfile, where I just point it at a path — but this is actually stored somewhere else." Yeah, I think the key is Docker. Docker lets you sort of say: I want to mount a thing here, right? That's this — that's this config! This is saying: I want to mount a thing here.
B
This part is more about: okay, let's describe the thing that I'm mounting there. And there's a couple of ways that this could work. So one of the ways this could work is there's a resource type — oops, that's the wrong gesture — there's a resource type in Kubernetes. So let's do apiVersion: v1; there's a resource type here called PersistentVolume. There are persistent volumes and there are persistent volume claims, and those are basically ways of interacting with disks and storage and things like that through your cluster.
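A minimal sketch of a persistent volume claim — the object an app typically uses to request storage (name and size are assumptions; the cloud-provider specifics discussed next live behind this):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi      # the cluster's storage provisioner satisfies this
```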
B
So, exactly what the details are that you configure on those things — there are sort of basic ways to get that stuff to work. If you actually care about storing the data that goes on those things, and being able to, say, make backups of that data, you will look into whatever your cloud provider's implementation is and how to do it. How to...
B
How would you basically go, using Azure, from "I want to provision an Azure disk" to "I want to map that to my database server" — that's kind of how you should think about it. So there are ways to do things like that with storage, but what I'm using this for, that I want to show everybody, is I'm actually using it for config. So I'm mapping in a folder here, and I'm going to have that folder be backed by config, and then inside...
B
This is a config map, which is another type of Kubernetes object, and inside of this config map I have got a JSON file, in a YAML file, in a key. So I have an inline-defined JSON config file for my application, and I've mapped it to a file called config.json on disk. What's kind of exciting about this is I've used this config file to override all the logging settings for this pod — or for this application. So we can...
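A sketch of the wiring described above — a JSON file held in a config map key, surfaced as a file through a volume (names mirror the demo's; contents are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: front-end-config
data:
  config.json: |          # the key becomes the file name on disk
    {
      "Logging": {
        "LogLevel": { "Default": "Debug" }
      }
    }
---
# fragment of the deployment's pod template
spec:
  volumes:
  - name: front-end-config-volume
    configMap:
      name: front-end-config
  containers:
  - name: frontend
    image: frontend:latest
    volumeMounts:
    - name: front-end-config-volume
      mountPath: /etc/front-end-config
```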
B
You can see all this debug, debug, debug, debug, and you can see there's like a million logs here that are probably more detailed than anything you've had to see before, and that's because I've turned all these logs up to the max using this config file. Now, something else that you could do with this config map — and the reason why this is a separate file, what's valuable about doing things like config maps — would be, well...
B
So I could basically use this for dynamically managing config and piping that through. One of the other things that I think is kind of neat to show about this is we can actually get a terminal inside the pod — and I highly recommend VS Code's tools for this, if you haven't checked them out. So it starts you out in the app directory, or wherever this sort of root is, so you can see I've got a typical little Linux file system here, and I called this /etc/front-end-config, and so you can see...
B
What I'm doing to read that in is I'm just customizing how I read config. So I've got this file here, and I'm saying it's optional, reload on change, and I can read this file, and I get JSON config that I can manage in Kubernetes. If I want to dynamically update the config map, that will apply to all these running apps. "Let me just make sure I followed all that..."
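The optional / reload-on-change wiring being described would look roughly like this in the app's host setup (the path and builder shape are assumptions, not the demo's exact code):

```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration(config =>
        {
            // optional: true       -> don't fail when running outside the cluster
            // reloadOnChange: true -> pick up config-map updates without a restart
            config.AddJsonFile("/etc/front-end-config/config.json",
                optional: true, reloadOnChange: true);
        })
        .ConfigureWebHostDefaults(web => web.UseStartup<Startup>());
```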
B
"Basically, in your startup you're reading a JSON file, but that JSON file is actually kind of an abstraction, because it was pulled out of here — the YAML?" Oh yeah. Okay, we've got multiple hops here, right? So the app has the expectation that this file exists, and then, if we follow the hops, there's a file defined in a key in a config map.
B
The config map is wired up to a volume, which is basically like a list of imports into the pod, and then the volume is mapped to a path inside the pod — or inside the container — that corresponds to where I expect it to be, so that I can read it. Okay. "So when your app reads the /etc/front-end-config JSON file, it's just a file on disk as far as it knows?" Yeah, as far as it knows, and if I update the config map and deploy it to Kubernetes, Kubernetes will overwrite the file on disk. Okay, so...
B
So there's a couple of things to say about that. As far as the sort of system-level primitives that exist for applications — Kubernetes tries to be tech-stack independent; they like to say that Kubernetes is not optimized for one language over another. So what are the ways that we could do configuration? Well, we could do environment variables, right, and that's here — you could use an environment variable to map in config as well.
B
So you could do that like this — I think there is an entry point, or args, or something like that, like you can do that — and then the other ways that you can get configuration into an app would be to host an external service and then have the app read its configuration from that service, or files on disk. So those are sort of the primitives that exist. I think reading from another service is not a primitive — obviously, you run another service — but in terms of it not being a file on disk...
B
The thing is that Kubernetes can turn whatever I put in here into a file on disk. Now, one of the things about config that's a little bit interesting, I guess, is that whatever I put in here has to be text, right? Whatever I put in here has to be valid text. It doesn't provide a way for me to just type some binary gibberish in here — it doesn't do that. Config does not do that in Kubernetes; config maps are always text.
B
There is another primitive called a secret. So I could change this to a secret and it would be similar. There are a few things that are a little bit different with secrets — like I think you have to say type: Opaque and stuff like that — but one of the things that is different about secrets is that secrets are intended to hold binary data as well as text. So you can put things in secrets that are binary data, or things that are text, generally.
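A minimal sketch of a secret (name and value are made up; values under `data` are base64-encoded, which is how binary content is carried):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: frontend-secret
type: Opaque
data:
  api-key: c2VjcmV0LXZhbHVl   # base64 of "secret-value"
```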
B
You're going to want them to either be files on disk or environment variables; I'm not aware of other ways to do that. So maybe there's a follow-up question, or some more sort of discussion we should have. "Okay, I had a question come in on Twitch — somebody's asking if the demos you're showing are up on GitHub." They are: if you go to github.com/rynowak — the presentation for the 2020-03-10 community standup — all this stuff is there. "Awesome. Okay, and I'll include that link."
B
Nice. "So, did we get through all the questions about config?" Yeah, I think so. Okay. So let's talk about a couple of other things that are going on here. Another thing that's going on here is we've got a readiness probe defined. There's another concept here called a liveness probe, and these two end up being really similar, but different — they're kind of for different things.
B
The idea of having a service monitor your service and, you know, send you a page if your service goes down — that's a pretty old concept in IT, if you've been around a while. Kubernetes defines readiness probes and liveness probes in a very particular way, for different uses, so it's worth explaining why those uses are important. We talked last time about Kubernetes being sort of a fault-correcting or self-healing system, and Kubernetes will restart your pods — I've actually got an example of this.
B
I've got a pod that can crash, and we could see Kubernetes restart it — but we might have done that last time, I'm not sure. Anyway, Kubernetes will restart your process if it crashes. Right? It knows, when I did a docker run, there was a container that was started; if that container dies, because its entry point crashed or its entry point terminates, that thing is done. The fact that I have called this a deployment means I wanted a long-running job.
B
So it's got to be set up by default to say: okay, if that thing exited, I want to restart it — which is what Kubernetes will do. So if your app crashes, Kubernetes will restart you. Now, that's the behavior that you get by default: if the process exits, you will get restarted. There are other behaviors you could have that you might want, and one of those is you can configure a liveness probe. A liveness probe is a way for Kubernetes to poke you at intervals, and then you can tell — you can write whatever code you want...
B
You can run whatever code you want, and you can tell Kubernetes whether you think you're healthy. So, for instance, here I've got this health check endpoint, and this uses ASP.NET Core's packaged health checks, and there's a whole bunch of extensibility out there — there's a whole bunch of different health check implementations. I think the project is actually called ASP.NET Core health checks; it's by some of my friends, and there's a whole bunch of different health check implementations that they have written that can do things like say...
B
...if my SQL Server is not up, you know, I am NOT healthy — things like that. Or if my memory usage is beyond a certain point, I'm not healthy; or if I don't have write access to this directory, I'm not healthy. So there's a whole bunch of different things you can do for health checks to sort of signal your health to the system. Now, you should be careful when you think about these kinds of things, because we just talked about the definition of health and what Kubernetes does when your process exits.
B
Well, the liveness probe does the same thing. It says right there in the tooltip: the container will be restarted if the probe fails. By that I mean: if your liveness check fails, you will be restarted. Now, you have some flexibility around that — you can do things like configure the interval and the number of successes or failures you need to see before something is terminated; the whole thing is very configurable — but a liveness probe generally means: should this thing be restarted or not. That is the flexibility that you get with a liveness probe.
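A sketch of the two probes side by side in a container spec (paths, port, and thresholds are assumptions):

```yaml
livenessProbe:             # failing this gets the container restarted
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:            # failing this only takes the pod out of service traffic
  httpGet:
    path: /health/ready
    port: 8080
  periodSeconds: 8
```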
B
"You still might — I mean, those health checks are still important, but maybe more at your application level, not in terms of having Kubernetes restart your app?" Yeah, it's like, if there's some kind of critical failure scenario within your app that you want to detect. There's a lot of nuance when you get into the topic of what should I check in my health check, and the answer is always, like...
B
Like most good topics, the answer is always: it depends. And in the case of health checks, it depends who's asking and why they want to know. Right? It's like, if you get a phone call from somebody mysterious, chances are you don't answer it in 2020. But if you get a phone call and somebody says, "hey, is this John Galloway?", you might say "who's asking?", right? It really depends who's asking and what...
B
"You say: no, it's not — I'm just kidding, Todd." So with your liveness probe, it's like: well, who's asking? The container runtime is asking. Why are they asking? Because they want to know whether they should reboot you or not. So think about what kind of failure conditions you would want to just reboot the app for, right? It probably doesn't have much to do with your external dependencies, because rebooting the app may or may not resolve that problem. Now, another one...
B
Exactly — your app startup time, right? "So if your app takes a long time to start up, you wouldn't want to put that in and have it keep restarting you just because you're taking a while to start up?" Right, exactly, because it's not going to solve the problem for you. So, one of the things that you can do — I like to show this because the answer is simple, and most people will figure it out, but I want to make sure everybody feels like they can figure it out.
B
So, if I wanted to have a separate health and readiness check — like, I think I called it health/ready — you just define that, and then the way that this works in health checks is you pass a HealthCheckOptions here, which IntelliSense can complete for you, and inside my health check options...
B
The list of health checks is global, but you can do filtering. You can set a predicate here, and you can write whatever logic you want inside of this predicate to filter the set of checks. So I could say return true, which is the default, for everything — but you basically get called with a descriptor of the list of health checks, and you can say, okay: for my SQL check...
B
...I don't want to run that in my liveness check, but I want to run it in my readiness check, and things like that. So the system is meant to be flexible, and it's meant to work pretty well when you end up having multiple health check endpoints, because there are systems out there, like Kubernetes, that sort of expect that you can do that sort of thing. So that's where that's hidden, if you've wondered about that. Do we have any questions about this? "Oh, no, no, I'm okay."
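A sketch of separate endpoints with predicate filtering — the endpoint paths and the tag name are assumptions; the `Predicate` property on `HealthCheckOptions` is the filtering hook being described:

```csharp
app.UseEndpoints(endpoints =>
{
    // Liveness: run none of the registered checks; success just means
    // the app can serve HTTP.
    endpoints.MapHealthChecks("/health", new HealthCheckOptions
    {
        Predicate = _ => false
    });

    // Readiness: run only the checks tagged for external dependencies,
    // e.g. the SQL check mentioned above.
    endpoints.MapHealthChecks("/health/ready", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("ready")
    });
});
```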
B
I think the thing is that there are two criteria — so there are two criteria, and these things are very configurable, and they have defaults, right? So if your app takes ten seconds to start up, Kubernetes is not going to kill you — I think the default is thirty seconds; they'll tolerate like 30 seconds of startup time, right? If you define a liveness probe and not a readiness probe...
B
...that's all you've got. If you define a readiness probe and not a liveness probe, then your liveness probe by default is the process lifetime, which is the default. Don't mutate state in your health checks — it's not a good plan, because you don't know how many times they're going to be called, right, and...
B
...they're being called in local development scenarios too, so don't do that. If you want to do expensive work there — say you just want to check that a cache exists, you wouldn't want to build the cache. And I think — well, you would want to start building the cache as soon as possible, right? You would want to kick that off in, say, a hosted service, so it starts immediately when the app starts up.
B
You wouldn't want to wait for somebody to ask you if you're ready. It's kind of like, you know, if you're going somewhere and your friend says "hey, let's meet at 7:00" — you want to start getting ready before you need to leave to get there at 7:00; don't start getting ready when your friend texts you at 7:05 and says "where are you?", right? "Sure." Well, that's kind of how I think of it: start...
B
If you have expensive work to do on startup, use a hosted service for that, or kick it off in some other way from startup. Don't use a readiness check to kick it off, because people are just going to be waiting longer for you. "You're a bad friend." It's true. So, Nick Craver had some questions and tips. Okay, so he asked: how many master nodes are doing the readiness check? So the readiness check, I'm pretty sure, is done by the kubelet, which is the thing that's running on your node. I could...
B
It is going to be checking itself — it checks all of its pods. That's what I believe to be true. Yeah, that would make sense, but I could be wrong; somebody might know better than I do about that. It's not something I thought about; I think I remember reading that, though. "Okay, and then Nick said — and I'm reading this without actually digesting what it means — he says: tip: some people issue a query with StackExchange.Redis, but the ConnectionMultiplexer has IsConnected, always accessible, and it already is doing a heartbeat."
B
I have definitely — all right, so, something I have heard from people: I have definitely heard stories from people — and you can access this by watching some good content, if you're curious; there are some good talks about people's top-10 fails, or top-eight lessons learned from doing Kubernetes. So there are stories out there of people basically taking down the world with bad health checks. So, like...
B
If you can imagine — imagine you had 30 services that you need to communicate with, and you wrote a health check for all of those 30 services, and then every app's health check pinged every other service that it talked to — you would have this gigantic overlapping mesh of HTTP, and if anything ever failed, you would just reboot the universe, right? So I have definitely heard horror stories from people of that.
B
If I were running apps in production, I would lean towards keeping it simple, rather than saying I'm going to check every single thing. I mean, the truth is that the default policy is: is your process dead? Which is probably okay for a lot of scenarios. You can do one better with the ASP.NET Core health checks package, which says: is this app able to serve HTTP? If you just configure health checks by default, that's what you're going to get: can this app do HTTP?
B
So let's talk about something else: more neat stuff that I can do with deployments. I have got a deployment here, and this is another version of the app that's running right now, and I have got a different value for my app's environment variable. So here I'm saying it's a very cool and configured value, and here I'm saying it's the second value. So it's clearly the second value. I have defined a readiness probe, and I have given it a period.
B
I'm back. Where did I leave off? Second value, yeah. So this basically means it's going to take sixteen seconds for the app to be considered ready, because I'm only probing every eight seconds and I need to see two successes before I consider it healthy. I'm actually doing something in this demo to artificially make my app slow to wake up, on purpose, for the purposes of demonstration.
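The probe being described, one check every eight seconds with two required successes, might look like this in the deployment's pod template. This is a minimal sketch: the image name, environment variable name, and health endpoint path are placeholders, not taken from the demo.

```yaml
# Hypothetical pod-template fragment: probe every 8 seconds and require
# two consecutive successes, so readiness takes ~16 seconds.
spec:
  containers:
  - name: frontend                  # placeholder container name
    image: myregistry/frontend:v2   # placeholder image
    env:
    - name: APP_VALUE               # placeholder variable name
      value: "second value"
    readinessProbe:
      httpGet:
        path: /healthz              # assumed health-check endpoint
        port: 80
      periodSeconds: 8
      successThreshold: 2
```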
B
Now, let's talk about what I'm going to demonstrate. I have defined, for this deployment, a strategy of type RollingUpdate. Well, what is a rolling update? It basically means we're going to gradually roll out new instances of this pod and gradually shut down the existing ones. You can use Recreate, which is sort of caveman mode: nuke them all and make new ones. Or you can use RollingUpdate, which is: let's do this deployment with no downtime. There are probably reasons why you would want to use Recreate; I don't know what they are.
B
The default is RollingUpdate, as the tooltip is telling us. Now, the other parameters that I'm using here: I have set this to maxSurge 1 and maxUnavailable 1, and if you really want to know in detail what these things are, you can read about them. What I'm going to explain to you about this is basically: I don't ever want to go below three replicas. Three replicas is the number; I don't ever want there to be fewer than three in the system. Right, that's what I've configured this to do.
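In the deployment spec, that strategy section looks roughly like the sketch below. One caveat on the numbers: with three replicas, `maxUnavailable: 0` is what strictly guarantees never dropping below three during a rollout; `maxUnavailable: 1` would let one old pod be taken away first.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend             # placeholder name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # also the default strategy type
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above replicas during rollout
      maxUnavailable: 0      # 0 strictly guarantees never dropping below replicas
  # selector and pod template omitted for brevity
```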
B
V3 is what I call this one. So I've done that; that's deploying, and while that's running I'm going to curl it, and then I'm going to jump over to this other tab and do a kubectl, if I spelled it right, get deployment -w, where -w is for watch, so we're going to watch the changes. You can see that we've got one up to date; we've got three available because the new one hasn't become ready yet. Then it becomes ready and we've got four available.
So
basically
I've
deployed
a
new
instance
of
an
app
with
the
new
value.
Its
readiness
check
passed.
It
becomes
available
now.
I
have
four
of
three
ready
and
you
can
see
that
over
time,
up-to-date
is
going
to
scale
up.
The
number
of
new
pods
available
is
going
to
go
up
to
four,
and
then
it's
going
to
go
back
down
to
three
as
we
shut
down
one
of
the
old
ones.
So
it.
B
It brings new ones up and shuts down the old ones, yeah, gradually. It never goes below three instances of the running app. So you can see that we're mostly getting responses (I might have waited too long), we're mostly getting responses with the second value, but you can see how we're getting a mix of the original value and the second value, and there are three instances here. So gradually we transition from seeing a lot of the original value and some of the new one.
B
So now we're seeing mostly the new one, and now we're seeing completely the new value, because all the original ones have been rotated out and shut down. If we go back to our other tab, we're back at three ready, three up-to-date, three available. So we sort of just told Kubernetes, I gave it these very specific instructions, but you can see how powerful that is: with no downtime, and never dropping below the number of replicas that I wanted.
B
We successfully rolled out a new deployment and got all of that to change, got it all to roll out, without really disrupting anything in the app. Okay, so as a newbie, what I'm seeing here, what I'm picking up, is that at the beginning there are more concepts. Again, I'm used to Docker, but what you're showing is what I'm now able to do. You've shown us several things so far: one is abstraction over things like file storage and volumes and configuration, and now this rolling update. That's huge, to be able to do that.
B
Okay, let's unplug this one and plug this one in. And there was this very complicated dance we had to do to try to minimize downtime when we were deploying new things; we basically would swap in a spare. We just automated that on Kubernetes with a couple of lines of config. What's actually happening mechanically here, and that's what that second diagram shows, is that any time we make an edit to the spec section of a deployment, a new replica set gets created.
B
So you can sort of think about replica sets as immutable. A new replica set gets created; then the deployment controller says, okay, you're created, you have one. Then, when that one becomes ready, it goes back to the original replica set, replica set A, and says, okay, you had three, now you should have two, so please shut one down. And then it goes back to replica set two, or replica set B, and says, okay, you should have two now, so go create a second one. And it does that sort of dance.
B
That's the rolling update. There are other kinds of deployments that you can do in Kubernetes. I think some of the more interesting ones, like canary deployments, require external components. A canary deployment is a style of deployment where you deploy a new instance of a new build, then start sending some percentage of traffic to it. If you start getting a flood of bad responses from the new service, then you just shut it down. So these are some of the things that you can do with Kubernetes.
B
What do we want to do next? It's right about 4:45, so it's right around an hour, and then we wanted to have Justin's demo, and then I'm hoping, right at the end when we wrap up, I'll do the community spotlight at the very end. Okay, let me speed through two more things in 30 seconds here, then I'll throw it over to Justin. (You can go long; it doesn't have to be 30 seconds. 45 seconds.) One of the other things that I've done here...
B
I promised bling; I've added more bling. One of the other things that I've done here is, in this version three of my deployment, I have put some resources, some resource limits, on this deployment. The way to read this, I'll explain it: request means I am now asking for your financial support. No, it means this is the minimum that I need to run this workload. So request means guarantee me at least this much; limits means don't let me have more than this much.
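The resources section being described looks along these lines. The numbers match what's discussed in the demo (100 megabytes requested, a 250-megabyte hard limit, a quarter of a CPU); the container name and image are placeholders.

```yaml
# Resource requests and limits on a container in the pod template.
# "requests" is the scheduling minimum; "limits" is the hard ceiling.
spec:
  containers:
  - name: frontend                  # placeholder name
    image: myregistry/frontend:v3   # placeholder image
    resources:
      requests:
        memory: "100Mi"             # guarantee at least ~100 MB
        cpu: "250m"                 # 250 millicores = a quarter of a CPU
      limits:
        memory: "250Mi"             # hard cap; allocations beyond this fail
        cpu: "250m"
```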
B
You should sort of think about it like: I'm asking for, I think, at least a hundred megs of memory that this app needs to run. It will get access to no more than 250, so 250 is a hard limit; you can't get more than that. .NET, as of 3.0, will respect this value. Earlier versions of .NET, if you're pre-3.0, are not aware of these; they're called cgroup limits.
B
It's a Linux feature that Docker uses, that Kubernetes uses. .NET prior to 3.0 doesn't understand or respect these limits; .NET Core 3.0 and newer understands these limits and will adjust its behavior accordingly. So if you say 250 megabytes here, that process will act as if: okay, I know I cannot have more than 250, that's a hard limit, I'm not going to go above it. And yet you might still see an out-of-memory exception, like if you really needed to allocate more than 250 megabytes. But having limits on something is a good way to have a safeguard, and to basically prevent the problem that I'm sure a lot of people are used to: well, this service has a memory leak and we don't know the source of the problem, so we just reboot it every three days.
B
But let's clarify what it means to go above it. You cannot actually allocate more than this value. What will happen is, if a process tries to allocate more memory and it doesn't have that available, that request for more memory will fail at an operating-system level. So here's how this could surface in a .NET application: I could deploy this app, you could write new byte[] with a size of more than 250 megabytes, and that would throw an exception.
B
It would throw an OutOfMemoryException, right, because the operating system would just say no, you can't, which would then result in your app crashing. It might result in that error being handled or something, but chances are it would result in your app crashing; it would just be a crash, and it would get restarted as a result. Does that make sense? So you can't actually go above it.
B
One of the things that people see is, if you try to use really low resource limits on, say, a .NET Core 2.1 or 2.2 app, .NET is not aware of that limitation and will not adjust its behavior. It will not be frugal with memory; it will keep asking for more. That's a really important thing for good behavior in containers for .NET: that we can respect these limits now.
B
So, if you are an American: 250 millicores means one quarter of a CPU. So, hopefully about one hogshead per bushel? Yes, yes. Okay, 40 rods to the furlong. What I want to impress on everybody, and this makes it further confusing, is that you might be scratching your head and saying: Ryan, how can I have less than one core allocated to something? What does that actually mean? And the answer is, it's in abstract CPU units; it's not in cores. So don't think of a value of cpu: 1000m, or cpu: 1, as meaning that there's one core assigned to this process. It does not mean that you're single-threaded. It's a time-slicing, multiplexing, aggregate value. So then 250m basically means 25% of a CPU. Okay. So if my machine has 4 CPUs in it, let's say I get a 4-CPU VM.
B
They want these numbers to be additive. And why is that? You're supposed to be given these, right? You're supposed to be guaranteed these? You're not: you're guaranteed this one as the maximum, and you're guaranteed this one as a maximum as well. What they say about memory in the Kubernetes docs is that memory is a non-compressible resource, meaning that once a process has gotten access to memory, the operating system can't ask to take it back; whereas with CPU, because CPU is an over-time thing, the operating system can just throttle you.
B
That is a catastrophic situation. So these are pretty small limits, like 250 megabytes and a fraction of a CPU. I could probably talk about these limits for an hour, because I've done a bunch of benchmarking and testing with this, but I'll move on. I wouldn't recommend deploying a .NET web app with less than 50 megs of memory available. We actually perform pretty well in that range; your mileage will vary a lot. .NET seems to scale pretty well with CPUs; .NET seems to scale sort of superlinearly with CPUs, so if I made this 500 instead of 250, I would get more than twice the amount of theoretical performance from that. Your mileage will vary. What these numbers are useful for, and why they're important, why would you do this versus just not doing it? The answer is because Kubernetes can do a better job of allocating pods to machines. Remember, it decides where your pod runs.
B
It decides what other things your pod runs with on the same machine. Kubernetes can do a much better job if you give it resource constraints, because then it has some data to work with, and so Kubernetes uses these values to decide where to run your workloads. Secondly, and I'm not going to have an example of this today, but there's a good one in the AKS workshop: Kubernetes can use these values for autoscaling.
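That autoscaling idea is not shown in the stream, but as a sketch (all names here are made up), a HorizontalPodAutoscaler can scale a deployment based on how much of the requested CPU its pods are actually using:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend              # the deployment to scale (placeholder)
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # target 70% of the pods' CPU *request*
```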
B
Unfortunately, all the really useful information about it is going to be very specific to your use case, very specific to your application, and there's no broad-strokes guide to it, because you have to test your own app to figure it out. So I'm not going to demonstrate any of that stuff today. But it's here, it's valuable, it's important. It's a level of control that I don't think we have today in a lot of deployment environments.
B
Let's do it, and then I'm going to talk about what we're actually deploying. I've got v4 here, and ooh, I'm doing something new here: I've got "ingress.extensions/frontend created". What I've actually done in my new version of this is: I've got my deployment like we've seen before, I've got my service like we've seen before, and I've got this new thing.
B
I've got an ingress, and I've got this thing here that looks like a hostname. So I'm going to go ahead and grab that, and let's bring up a web browser and go to that. And there's a "security risk ahead" warning. Oh no. I don't need to learn more; I'm going to accept the risk and continue. I can explain why that's there.
B
So, if you ever want to test something involving DNS and name resolution and hostnames and stuff over the internet, and all you've got is an IP address, this service is pretty cool. I've actually served this over the public internet via that IP address, and one of the things that's pretty cool about this: if we open our developer tools here and I do this again, well, there's some interesting stuff coming back here in my headers, and I will make this bigger in a second.
B
Now, what I've done here is I've installed something into my cluster called an ingress controller, and I've used this ingress concept, or this ingress object type, to basically configure routing rules for my app. So this annotation here is a specific magic string that activates that nginx ingress controller, and then these are the rules. So I've basically said: anything that comes in on this hostname, I want you to send to this service on port 80, and remember, that's this service right here.
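That ingress object looks roughly like the sketch below, written against today's networking.k8s.io/v1 schema (the stream slightly predates it; the older extensions/v1beta1 form differed in the backend fields). The hostname and service name are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # magic string for the nginx ingress controller
spec:
  rules:
  - host: frontend.52.1.2.3.nip.io       # placeholder wildcard-DNS hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service       # placeholder service name
            port:
              number: 80
```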
B
So if this were frontend-service, this would also be frontend-service, right? That's how we identify the service. So I'm forwarding all traffic that comes in on this hostname to this service. If I had multiple things that I wanted to expose on multiple hostnames, I could have another rule for backend.&lt;that awful hostname&gt; and route that to a different service if I wanted to, because I'm using nginx effectively as a router. Now, why might I do this? What are some of the reasons why I would want to do this?
B
Why is ingress popular versus a load balancer IP address? There are a couple of answers to that. One is, depending on what stack you're using, what technology you're using within the Kubernetes ecosystem, you might want a proxy server in front of your app no matter what. Kestrel is a sort of edge-safe, internet-safe web server; lots of technology stacks don't have a web server that's internet-safe as far as security goes.
B
That might be why. The other reason why is cost, because every public IP address has a cost associated with it, and so having multiple hostnames mapped to a single public IP address is going to reduce your cost. Remember that every time you create a service of type LoadBalancer, that can be good for testing and it can be useful, but you're exposing that thing directly to public internet traffic, and you're allocating a load balancer public IP per service.
B
So if you want to have one load balancer IP for hundreds of applications, you can do that with an ingress, and that's why a lot of people might want to do that. So hopefully the things that you've seen here today, liveness probes, readiness probes, convey how to do deployments, how to do health checks, how to set resource limits, how to set up ingress. We're getting a little bit more into the practical world of actually doing things in production.
E
I'm going to cut some of the parts of my demo that, yeah, are useful, but I'm mostly just going to show it. So, a little bit of context: three months ago I didn't know anything about Kubernetes, like, at all. So I started reading a book and stuff like that, and whenever I learned something, I tried to find a place to root it in something I'm a little more familiar with.
E
So when I was looking through Kubernetes stuff: I have had a ton of experience doing HTTP stuff, doing servers; I work primarily on Kestrel. So ingresses were a natural place to land, to learn, in order to root myself in Kubernetes overall. So one thing I decided, to learn a little bit about the extensibility of Kubernetes, is I decided to write my own ingress.
E
An ingress controller goes with that. So what this is going to do is two things. One is the actual ingress: with Ryan's example, you showed nginx; that is the ingress there. I made my own ingress that's based off Kestrel and HttpClient, and it's going to be under here. Now, the second thing that goes with an ingress is some sort of controller.
E
So when you deploy nginx, when you run nginx without running in Kubernetes, that is something you can do totally fine, and nginx has its own config system, its own way to configure it. You need some way to say: hey, I have this ingress resource that, for example, Ryan deployed; how do I translate it into things that are understood by nginx?
E
So I did a similar thing, where I mocked up a fake system that uses JSON and stuff like that to take stuff that is in Kubernetes, has a controller that reacts to it, and then actually puts stuff in place for the ingress to use. So let me just go to the command line and deploy this. I've already actually deployed it, but what I'm doing today is using something called Skaffold as a tool, which I believe was created by Google.
E
In these profiles you can see that I have three replicas of it. I set some limits here for the max amount of memory and CPU. I have some images that I'm using; I have my own container registry that I'm using. And then I also have a service for it as well, and this is of type ClusterIP, because I don't want it to be publicly available like you would have with the load balancer, for example. Cool.
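A ClusterIP service like that, reachable only from inside the cluster, might look like this (a sketch; the names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # placeholder name
spec:
  type: ClusterIP          # the default; no public load balancer IP is allocated
  selector:
    app: backend           # matches the backend pods' labels
  ports:
  - port: 80               # port the service exposes in-cluster
    targetPort: 8080       # assumed container port
```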
E
The ingress is something I kind of hacked together. It's a combination of a few things. It's a combination of this ASP.NET Labs proxy project; this is just something we had, I think it was developed by an intern a few years ago. It's not something we've actually shipped; it's in ASP.NET Labs, but there's a lot of good stuff there. So I took that and I did a couple of things with it. One is that I created my own system for configuration.
E
This is because in Kubernetes I want to be able to supply the IP addresses that are available for the application. So when you run your application, each of the pods in your backend is going to have a specific IP mapped to it, and I want to be able to see that somehow. So I effectively just create a configuration here that has those IPs listed. Okay, so, yeah, if there are any questions, feel free to interrupt.
E
So when I run this, if I were to not use Kubernetes (the program, kind of ignoring that for now), this configuration would just be supplied to some stuff I wrote, in order to make it so that when you send a request to the ingress itself, it sends a request via HttpClient to one of the backends. So, for example, if I had multiple IPs here, I don't know, like 10, 20, 30 or 40, what this would do is we'd...
E
Oh, if we actually go to the ingress YAML file itself, this is going to look fairly similar to what Ryan was showing with nginx: the kind is Ingress here, and I have a rule, and it's a very basic rule. It just says: given this path, route to this service name with a service port of whatever. So it looks very similar to what Ryan had for the nginx configuration, except I think he had a host here, which I didn't do because I'm lazy.
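A host-less, path-based rule like that would look roughly like this (a sketch with placeholder names; without a `host` entry, the rule matches traffic for any hostname):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress      # placeholder name
spec:
  rules:
  - http:                    # no "host:" entry, so any hostname matches
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend    # placeholder service name
            port:
              number: 80
```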
E
So then, after that, I think I can show that the controller itself has a couple of things deployed. You have a deployment and a service it provides here. I also have some stuff for authentication, but that's not important for this demo. So, yeah, at this point I can probably just show the app. So if I were to do kubectl get services, I want to get the public IP for this (I didn't know the trick Ryan had). If I were to go to this address, what you're going to see is that it's going to route to multiple different backends with these hostnames. There are three of them running right now; it's alternating between this one, this one, and this one. I just have a policy that does round-robin there. Now, if I wanted to, I can do a scale here as well.
E
Let me actually watch these pods, and you can really see it here. So it should be scaling up; it looks like one of these is terminating. But if I were to start refreshing again, you'll probably see... there still seem to be three; it should eventually scale to four or five. That's... why not?
E
I may have some bugs here, but anyway, the point is that with this you can start scaling the number here, and then the ingress itself will actually listen to changes in the IP list and scale down the number of... sorry, it will change the ingress to not have those as available backends. So that's where the demo is. All the code I used is on my GitHub; it's pretty much just a pretty bare-bones implementation of a custom ingress and the controller that goes with it.
B
Go ahead. One of the really powerful things about Kubernetes that we're starting to scratch the surface of, and if we had more time we'd look at a few more examples, is that Kubernetes is so extensible. There's a set of concepts that come with Kubernetes that are built in, like pods and services. There's a set of concepts that come with Kubernetes that are typically implemented by other components: volumes are probably implemented by your cloud provider, or ingress is implemented by something like nginx, or secrets can be implemented by default, for instance.
B
So there's a set of concepts that you can configure that have implementations that can be overridden or replaced, or that are specific to your cloud provider, and then you can go even further and have totally custom concepts, which we won't have time to get into today. So what Justin's done here is he's basically taken a stab at: alright, let me see if I can make one. You could make those parts of your cloud yourself if you wanted to, or have a choice of multiple implementations.
E
Cool. And it was mostly a way for me to learn how to actually extend Kubernetes. One part I didn't really get into is the actual controller itself, which, you know, is not that special. But the idea is you want to be listening and watching and querying things that are going on within Kubernetes, in order to create that list of endpoints and backends that you need to route to. Doing all that is kind of interesting; there's a bunch of APIs you can use that are Kubernetes-specific APIs...
E
...that make it easier to do all that stuff. But there are a lot of rules behind how to do that kind of extensibility that we could probably talk about in a different session, because there are a lot of shoulds and should-nots for those kinds of things.
B
It seems like it's both understanding how to interact, but then also the APIs, being able to provide information to Kubernetes and, you know, to be a good Kubernetes citizen, that's also really important. So as part of working with this, understanding how to work in Kubernetes land is important. Yeah. Well, alright, this is awesome. So we definitely need to continue to have some follow-ons; lots of great questions on this.
B
For the next show. And so for this week's show, what I'll do is just include the links that you folks showed, because I don't want to not give those folks, you know, their due time on the show. So thanks a ton; that was really, really cool. Definitely a lot, a lot to learn. Thank you. You're welcome.
B
The quarantine edition of the ASP.NET Community Standup, but oh well. Alright, and I've shared the links in the chat and I'll include them in the show notes, and I think all I need to do now is switch to the thanks, at which time it'll go and we'll all fade out. Alright, thanks a bunch, Justin; thanks, Ryan. Yeah.