From YouTube: Developer Experience Office Hours
Description: kubernetesbyexample.com walk-through and real-world tire kicking to fix
A: Good morning, good afternoon, good evening, everyone. Welcome to the OpenShift developer evangelist office hours. I am Chris Short, technical marketing manager at Red Hat. I am also a CNCF ambassador and an executive producer of all the Twitch fun. We are also streaming to Facebook and YouTube, so hello to our friends on Facebook, YouTube, and Twitch. Today I am joined by lots of people: I've got Ryan Jarvinen, Brian Tannous, and Natale Vinto. I will let them introduce themselves and take it away.
B: [Audio intro] I'm Ryan Jarvinen, and thank you very much for bearing with us and the technical difficulties. I've been on the dev rel team for quite a while; my previous work experience was most recently CoreOS. So we're gonna dive into some deep Kubernetes information for you today, and hopefully that's interesting. Brian?
B: Michael Hausenblas helped us get this started — the site started. Thank you to Michael, wherever you are these days. So basically, my plan for today was to walk through, with all of you folks and a couple folks from my team, the Kubernetes By Example website, and bring with us a couple Kubernetes wranglers to give it a test. It's been a little while since I've been through this website, so there might be a couple deprecation issues that we need to address.
B: In fact, I am pretty sure there's at least one we can look into. And so I think it's helpful for folks to go through, learn the fundamental command-line basics, and see what Kubernetes really has to offer and what the best way to engage with Kubernetes is. So hopefully we'll cover a couple of those topics as we go through this presentation today — it's less of a presentation and more of a walkthrough of the site — and hopefully we'll all learn something along the way. Does that sound good?
A: Sounds great to me. Sorry, everybody, I did have the title wrong on the streams today — yeah, it's kind of been a Monday on a Tuesday. My rig kind of disintegrated in my hands as I was on the call previously, so I had to scramble around to find something to hold the microphone and talk and everything else. So yeah — a ball I dropped there, my bad, but we fixed it and we're moving on. So kubernetesbyexample.com is what we're dealing with today, correct? B: That is right. A: Awesome.
B: It's static web pages on GitHub, so it's super easy to file issues. If you are using this site and you have feature requests, content requests, or other types of feedback for us, definitely please drop us a note in the GitHub issues. I also had a couple of slides that I was going to kind of flip through —
B: Some of this — and let me see if any of these are useful — let's see. So I have some of these links at bit.ly/kubernetesbyexample, if anyone wants to see these particular slides. But I was going to start off with just kind of asking — usually what I do is I do this talk in front of a large audience, with a hundred or 200 developers, and I say: hey, all right, audience, come on, pile on in, we're gonna set up a large cluster and invite the general —
B: — you know, conference attendees to basically get a user account and go through. And so these are the questions I ask them in order to try to frame that experience. I ask them, you know: first, do you have experience with containers? Do you have experience with Kubernetes? And here's kind of my goal for the experience: that they walk away saying, okay, I feel like I'm somewhat proficient with the kubectl command-line tools — and hopefully they can name five basic Kubernetes resources.
B: So that's usually what I have as the challenge for them, and then I ask them at the end: can you name five basics? And we see how people feel about it, whether they feel confident or not. Right — so you can get your own workshop environment; you can follow along with us today. I encourage you — by all means, please do follow along at some point, whether it's live with us or on your own later. It's super easy to get your own Kubernetes environment; there are some examples on the Kubernetes By Example —
A: A quick question in chat that I'm going to answer real quick. Salah from YouTube asks: does installing operators from OperatorHub require a subscription, or can you do it before adding the subscription? My understanding is that if OperatorHub is there and working, you don't need a sub to actually get the operators. The operators themselves are actually subscription-free; it's the support — if you want a supported operator, you would need to have a sub for that.
A: Go ahead, Brian. C: Yeah — no, if you go to OperatorHub.io, you could go — you know, if you're not running OpenShift or whatnot, or maybe you are — you could go there and there'll be steps to go install that. But no, you don't require a subscription or whatever to be able to get it. If you have OpenShift, you could click on the OperatorHub in the side panel, and you have, you know, access to some of that stuff too as well. Yep.
B: It's not so much that terminology, operators — it's more what it represents: there's an extensible way of adding functionality to a Kubernetes cluster. And not only is that part true, there's massive community buy-in — let's see, 133 different solutions here from a wide variety of vendors, who are actual maintainers on these efforts, right? So you're getting the actual maintainers involved. Another really interesting thing about this OperatorHub is there's a system called TSANet.
B: — you'd think it's the router's fault, and they'd say no, it's a problem with Red Hat, go talk to those folks; and you'd call Red Hat and they'd say, well no, it's definitely the routers — and you'd be stuck in this situation between two support programs, right? TSANet allows both of these businesses to join forces on that support ticket, continue working together until the ticket is resolved, and then split some of the — there's some kind of profit-sharing, or way of keeping both of these businesses employable, right? And that's the part —
B: — to work with a whole community of maintainers and engage with the open-source community in a way that hopefully keeps them in business, which is maybe different from the way AWS or some other vendors set up their kind of cloud portfolio. It's usually AWS-first, or Google-first, or, you know — it's a very opinionated cloud. And so we're hoping this is a way of giving you the open source cloud, delivered to any infrastructure target you like — whether it's Google or, you know, it doesn't matter.
B: Another step we have in here is installing this lifecycle manager, which kind of handles dependencies with these operators. If you have multiple operators that depend on each other, or operators that need to be updated, that Operator Lifecycle Manager can help out with that. Natale actually had a recent open source contribution, with Minikube, related to that. Natale?
D: I had done that with Minikube — I was wondering if there was anything I'd done for OperatorHub. So basically, I have added an add-on for OperatorHub. A static OperatorHub is really simple — it's basically, you know, one batch command — but in the Minikube ecosystem you work with those add-ons, so you can enable this OperatorHub just by doing a `minikube addons enable olm`.
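The add-on enablement described above can be sketched as follows (this assumes a running Minikube; the `olm` namespace shown in the verification step is how recent OLM releases organize themselves and may vary by version):

```shell
# List available Minikube add-ons, then enable the Operator Lifecycle Manager
minikube addons list
minikube addons enable olm

# Verify the OLM components came up
kubectl get pods -n olm
```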
B: There's a link that says Issues right at the top that'll send you to the GitHub repo — feel free to file issues and pull requests, or take a look at the sources if you are interested. So I'm gonna be filing issues if we find any major things that need to be fixed, and I'll put those in our internal kind of notes doc. So feel free to check in on here and see our progress as this thing evolves. So, let's see — let's go back to Kubernetes By Example.
B: It recommends — the top option here is this OpenShift playground; that's what I'm gonna pick. Let's see if we have volunteers on the call who would be willing to pick up Minikube or other options — there are plenty of other options at try.openshift.com. Anyone here on the call looking to even help out with this?
C: I've got two that are both on the try.openshift.com list. I'm gonna show one of them, but I've got two, Ryan. One of them is a bare-metal install, so I've got three actual computers — they look like NUCs, Intel NUCs: really small, low-power, pretty awesome machines — and those are running a full-blown OpenShift cluster.
C: You know, on the bare metal — that's all set up, which is pretty awesome. It's a compact cluster, currently in tech preview or whatever in terms of OpenShift, you know, status — because I only have three computers. But whatever — hey, it worked, it's cool. What I'm gonna show and talk about, and actually used to test —
B: That's definitely part of the capabilities of OperatorHub — it's not required out of the gate for you to have a license key for a lot of those solutions, though. So, great — I am going to jump into the OpenShift playground example, which I have over here. This is the interactive learn.openshift.com; for folks that want to follow along with my path, I'll click on Start and I'm gonna grab an OpenShift 4.2 playground.
B: This has the — yeah, the OpenShift 4-series environments. A warning about Katacoda: it's great that you can instantly spawn up a new environment and have it ready and available in your browser — a great way to kick the tires — but there's a time limit on it; they time out after one hour. You can refresh your browser as many times as you like to get additional hours, but you kind of lose your work every hour. Luckily, we're not doing anything super work-intensive, or where we're going to lose a lot of commits or something.
B: Today it's mostly testing, so I'll do a quick `oc login`. If you are using Minikube, you already have a kubeconfig file — as soon as you run `minikube start`, that start command should set up your kubeconfig file for you. Any other OpenShift users — you probably have an `oc login` command you can run.
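The two login paths just mentioned look roughly like this (the server URL and username are placeholders for whatever your environment hands you):

```shell
# OpenShift: authenticate and populate your kubeconfig context
oc login https://api.example.openshift.com:6443 -u developer

# Minikube: starting the cluster writes the context for you
minikube start
kubectl config current-context
```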
B: The next step is validating that your environment is indeed up and running. So, for folks who are following along, I would encourage you to type in `kubectl version`. This should give you some indication that you have a kubectl command-line tool available to talk to the API — and the second response here —
B: — Server Version — is going to be the actual API responding and saying: yes, not only is the command line working, the API is working; they're both available, and here are the specific version numbers you have. And so you may get different deprecation warnings and see different issues depending on which specific versions of the command line you're running. So this is something that I ought to have test automation for.
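That validation step is the single command below; both halves of its output matter, and the exact version strings will differ per environment:

```shell
# Client Version: confirms the CLI is installed and on your PATH
# Server Version: confirms the API server answered the request
kubectl version
```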
B: Let's see — good feedback — and I'm also gonna try to hide this left panel here; I might be able to fullscreen the terminal. Anyway — yeah, we don't really need the left-panel instructions. There are some kind of guided scenarios for folks who are interested in learning specific topics — I have a Kubernetes By Example topic on our Learn website. Okay, well — how's this font size looking?
B: Why not — why not make everything easy for folks to follow along with? Okay, cool. So, pods: a pod is a collection of containers. This is a really important concept in Kubernetes; this is kind of your fundamental unit of scale. So anytime you want to launch anything on the cluster — you can have things that are not based on pods, but of the things that are replicated and scaled and auto-managed for you, pods are generally kind of your fundamental resource type. A pod is not just a single container.
B: So the first example we're going to go through is running this `kubectl run` command. We're going to be running an image from Quay, and here's the example here. There's Quay.io, which you can use just like Docker Hub; one of the nice advantages of Quay is that security scanning is provided on all the free accounts, not just the paid-for accounts, so you get some basic security analysis of the container contents. That's pretty nice. So I'm going to go through and paste in this command. Oops — oh, looks like I —
B: — have a dollar sign I need to get rid of. Here we go: `kubectl run`, then the name of the container, and the image. It looks like I am getting some feedback right away: with the kubectl version that I have, it says `kubectl run` is currently under a deprecation warning and will be removed in future versions. So folks who are already on kubectl 1.18 are probably — maybe seeing this warning, maybe not seeing this warning.
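The pods example being pasted is along these lines — the pod name and image path follow the Kubernetes By Example material of the time, so treat them as illustrative:

```shell
# Launch the sample "simpleservice" pod from Quay.io
kubectl run sise --image=quay.io/openshiftlabs/simpleservice:0.5.0 --port=9876

# On kubectl 1.18+ this creates a bare Pod; older clients created a Deployment
# (hence the deprecation warning being discussed)
kubectl get pods
```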
D: Be careful also to check which namespace you are in, because if you do this in Minikube or in the playgrounds, the current project is the `default` one. So if you want to work in another namespace, maybe it's better to create that namespace first — and for OpenShift, check if your user has access to the `default` namespace or `default` project. So before launching those examples, you can select your favorite namespace if you want.
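A minimal sketch of that setup step — the namespace name here is just an example:

```shell
# Create a scratch namespace and make it the current context's default
kubectl create namespace kbe-demo
kubectl config set-context --current --namespace=kbe-demo

# On OpenShift, creating a project does both at once
oc new-project kbe-demo
```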
B: — is a big difference between upstream Kubernetes and OpenShift. I don't want to say — well, I think there are some clear differences that we can point out in terms of namespaces. OpenShift, like I showed you, has a login command, and you'll get a namespace kind of context set up by default. There's also an issue with how Kubernetes organizes the API, and permissions on the API, as a standard user — I'll type in an example command here.
B: Unfortunately, this type of question is something that only cluster-admin has access to ask: who owns namespaces and who doesn't. That question isn't something that can be delegated to standard users — for that particular API query, there's no easy way to delegate it — and so you end up with standard users not even being able to ask the API: hey, where do I get started? What folders do I have access to, and which ones do I not have access to? So what we did in the OpenShift world is we created an analogy.
B: There's a new resource type that we added called Project. A project — there's basically a one-to-one relationship of projects per namespace; anytime you create a project, you get a namespace with it. But that project can be seen by normal users, and if you start using a project, that project kind of becomes sticky, so all of your kubectl commands should land in that particular namespace of the project —
B: — you're working with. So instead of `kubectl get namespaces`, I can `oc get projects` — and I should see — I currently haven't created any — new-project demo, and then that should allow me to get projects, and I see I have a project that I have access to. So that's kind of the workaround for enabling standard users to at least figure out what namespaces they have access to — projects, actually, instead of namespaces. But here's my starting point: this is the folder I have access to; I can deploy assets into that folder.
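The contrast being demonstrated, as a sketch (the project name is illustrative):

```shell
# As a standard user, listing namespaces is typically a forbidden query
kubectl get namespaces

# Projects are the user-visible equivalent on OpenShift
oc new-project demo
oc get projects
```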
C: Now — no, I think, yeah: whenever you create a new project and you're using the oc CLI, it says we're now using that project. So, just like you said, it's sticky — you don't have to always specify a namespace to put stuff into that particular project or namespace, right? You get that by default. And if you have an existing one, you could do `oc project` and switch over to a different namespace, and that way it's inherently sticky too, right? If you don't want to create new ones all the time.
A: Just to be clear for chat: project was created, I think, before namespaces were, right? Or, like, it added some capabilities to namespace that were requested, that did not exist in the upstream Kubernetes namespace — so we created projects for that reason. And someone pointed out: yeah, a project is good if you like to kill off resources. Yeah — if you're tinkering or something. Like I always do: I always put in a project, I always put everything in it.
B: So this is something where I'm going to be updating the site between now and our next meeting — come back and check in on us on July 7th for any updates. I'd be happy to review changes that I've pushed in those two weeks, but I'll have some updates specifically around this that should target the latest command-line version, or anything noticed. Go ahead, Brian — yeah.
B: Yeah — so here's where it gets extra interesting, or at least it did get extra interesting for me when I was doing the prep walkthrough of this. One of the things I always tried to focus on, when I helped out with some of the questions for the cloud native Certified Kubernetes Administrator exam —
B: — you know, the question was like: can you produce a deployment JSON file that has these types of attributes, and then store it in some folder, right? And so the command I would usually use to try to address that: I'd use `kubectl run`, and then I'd add in some extra flags here.
B: So you can `kubectl run`, and then you can add `--dry-run` and `-o yaml` or `-o json`, and then, instead of actually interacting with the API and creating the deployment, it'll spit that YAML file out to standard out. So now I know how to make these pods — these deployments — reproducible. So there was something — what this deprecation is telling you: this run command used to generate new Deployment resources; currently, with the latest versions of kubectl —
B: — it generates pods instead of deployments, and the dry-run flag doesn't seem to be working. I feel like I need to file some bugs against the upstream command-line tool for their recent releases, because I'm not entirely sure you can still make a Deployment resource with a port number attached, like I'm used to seeing in a lot of examples. So watch out — the output of this may have changed; your output from `kubectl run` might actually be a pod instead of a deployment.
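The manifest-generation pattern being described, sketched for both client generations — flag spellings changed across releases, so check `kubectl help run` on your own version before relying on either form:

```shell
# Older clients: kubectl run emitted a Deployment manifest
kubectl run sise --image=quay.io/openshiftlabs/simpleservice:0.5.0 \
  --port=9876 --dry-run -o yaml > deployment.yaml

# Newer clients (1.18+): run targets bare Pods; deployments come from
# "kubectl create deployment", which at the time had no --port equivalent
# (the gap lamented in this discussion)
kubectl create deployment sise \
  --image=quay.io/openshiftlabs/simpleservice:0.5.0 \
  --dry-run=client -o yaml > deployment.yaml
```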
B: And if you type in `kubectl help run`, I think it gives you some options on how it's supposed to be possible to generate a deployment — I think there's some option flag for making the deployment instead, and I think it partially works. But I'm used to being able to generate a deployment that has a port number specified inside it, and I don't know if that's possible anymore. I couldn't figure it out — I spent like half a day trying to figure it out last week.
B: Luckily, we're on the pods page — and so if `kubectl run` is giving us pods instead of deployments, we're on a page that's about pods anyway, so we don't need to get too stuck on this particular example. It's just useful to note that the command line is going to announce deprecations as there are changes in the upstream API. It's something that you have to keep a close eye on, and you may need to change your kubectl commands — and constantly maintain that group of commands — as the API changes.
B: You can also ask for `get deploy` as shorthand for `get deployments`. It looks like, since I was running an older command-line client, I generated a Deployment, which in turn generated a pod — so I ended up with a slightly different result, because I'm using an older API version. So your results may vary; hopefully you're learning all the while. We should be able to use `oc` as a drop-in replacement for `kubectl`.
B: It's just going to have awareness of things like projects, which kubectl is unaware of because it's not a standard Kubernetes resource. Over time — after we kind of implemented those upstream — we've started using CRDs to extend the platform instead of adding non-standard resource types, and so CRDs are really where we're going for a lot of our future development, in order to level up on top of the basic Kubernetes API. Cool. So let's see — let me get back to — my next command was to describe this pod. Let's see.
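A sketch of that inspection step (the pod name matches the earlier run example):

```shell
# Show detailed state for the pod: node, pod IP, container statuses, events
kubectl describe pod sise
```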
B: — okay, so here is the IP address that I have available. If you're using Minikube, I think you can run `minikube ssh` in order to get into the Minikube VM, and then once you're inside the VM, you should be able to ping these — you should be able to run a command like this, use the curl directly, yeah. You can also use `oc rsh` to get into the cluster as well, but this would be run from inside the cluster here.
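A sketch of reaching the pod from inside the cluster network — the pod IP below is an example (use whatever `describe` reported), and 9876 is the simpleservice port from the earlier run command:

```shell
# Enter the Minikube VM, then hit the pod IP directly
minikube ssh
curl 172.17.0.3:9876/info
```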
B: — should have an internal GUID of some sort that indicates the deployment ID — and the deployment also generates a replication controller, and so you have IDs in there, in that pod string. This one is the ID of the replica set, and this is, I think, the ID of the pod underneath the sise deployment. So the name ended up going to the deployment, and then these two identifiers are generated — from the replication controller, and then a unique hash for the pod. Yeah — so, different —
B: Here we go. Okay, so this is some raw pod YAML — this is kind of the minimal thing. So, I mentioned five fields that we would always see: apiVersion, kind, metadata, spec — and then the last one was status. There's no status, because the API hasn't handled this yet; once the API accepts this data and stores it, it will create a status field and start populating that status field with the progress it has made in terms of deploying this. It'll also add things like a creation timestamp into the metadata, and other things.
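A minimal pod manifest of the kind being described — only the four author-supplied fields; `status` appears after the API server accepts it (image and names follow the site's example and are illustrative):

```shell
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sise
spec:
  containers:
  - name: sise
    image: quay.io/openshiftlabs/simpleservice:0.5.0
    ports:
    - containerPort: 9876
EOF
```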
B: We should be able to take a look at that after we type this example command, so I'm gonna copy and paste this one into my shell. It looks like I'll need to get rid of the extra dollar sign. So: `kubectl apply` with a `-f` flag — f is for file input — and it could either be a local file or a remote URL, so you can store these in a GitHub repo.
B: That's usually where I put these JSON files or YAML files. And it looks like apply will basically allow us to take that file and ship it to the API. The nice thing about apply is that if I make updates to this file, I can rerun this command in order to ship the new changes to the API and say: apply the new changes that I've stored in this file.
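Both input styles described above, as a sketch (the URL is a placeholder, not a real repository path):

```shell
# Apply from a local file; rerun the same command after editing to push updates
kubectl apply -f pod.yaml

# Apply straight from a remote URL, e.g. a manifest kept in a GitHub repo
kubectl apply -f https://raw.githubusercontent.com/<user>/<repo>/main/pod.yaml
```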
B: `-o json` — we can see that now — actually, let me count the lines: 190 lines, versus what we had — less than 20 — initially, before we did the apply statement. You can see that now there is a status field with a lot of information in it regarding the updates and container statuses. You can see that there are — let's see, here should be our two containers — so here's the spec of what the pod should look like, and there's an array of containers.
D: Yeah, that's another difference: if you run this on OpenShift, you will see that the ID running the process — it's that long ID that we have here. This is a random UID generated by OpenShift — so the user running this container is temporary; it doesn't actually exist in the machine, it's created by OpenShift on the fly. If you do the same command in Minikube, you will see that the root user is running that process — so, something to keep in consideration.
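One way to see that difference yourself — the output varies by platform, which is exactly the point:

```shell
# Ask the container what user it runs as
kubectl exec sise -- id

# OpenShift: typically an arbitrary high UID with no passwd entry
# Minikube:  often uid=0(root), unless the image itself drops privileges
```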
B: Definitely — thanks for pointing that out, Natale. To put up an example here: I ran the `whoami` command, and it came back not with a user name but with the user ID — and it's some really high-numbered user ID, right? And I could run the `id` command — you can see here's my UID, and, you know, I don't have any groups assigned. It's a very minimal-permissions context.
C: That being said, mine — I'm on CodeReady Containers, and when I ran and set up CodeReady Containers on my laptop — on my machine — it gave me a login of kubeadmin with a password, which is the admin account, and that's what I'm using. I didn't add any more users, because I know that I'm just using this for this demo.
D: Good point, Ryan, about the service account. So it's important to give the right permissions to the service account. In Kubernetes there are the pod security policies, which are inherited; on the OpenShift side, the security context constraints. Those are security policies assigned to a specific service account, so the proper way to assign those permissions is on the service account.
B: Other ways that you can constrain a pod: the next example here shows how you can specify the resources field on the pod, to spec out how much CPU and RAM — like, that sleep example I have could probably use less than 64 megs of RAM. You know, you can try to provide fewer resources, or an appropriate level of resources, I guess.
B: This is a, you know, store-the-resource-for-the-first-time, create-a-new-resource-ID type of stuff. So we'll do this create example. If you want more specific information about what we're creating, open up this raw YAML and take a look — you can see the specific resource limits that we're setting. I'm still inside the container; let me get out of here. Okay, back to the host — nope, old command.
B: I'll do a quick cat of the YAML so we can see some of the fields that are being populated here. This is the spec: so here, under resources, we have the CPU and memory limit — the limit and the requested amount. We've kind of set them both to be the same, but you can have a range: minimum requirements and maximum requirements can both be set in there. And here's that container port — this is what I'm trying to figure out how to set on a deployment using the latest `kubectl run`.
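The constrained-pod spec being read off on screen looks roughly like this — the values mirror the site's constraint example and are illustrative:

```shell
cat > constraint-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: constraintpod
spec:
  containers:
  - name: sise
    image: quay.io/openshiftlabs/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    resources:
      requests:        # minimum the scheduler must reserve
        memory: "64Mi"
        cpu: "250m"
      limits:          # hard cap enforced at runtime
        memory: "64Mi"
        cpu: "250m"
EOF
kubectl apply -f constraint-pod.yaml
```

Setting requests equal to limits, as here, gives the pod a fixed, predictable slice; a range between the two lets the scheduler overcommit.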
B: Please at-me on Twitter if you figure out a way to use `kubectl run` to make a deployment with a port number, specifically with the dry-run flag — I'm still trying to figure that one out. I need a button I can hit to flash up a developer alert: I need help — help me solve this particular issue. Cool.
B: One of the things that does help out quite a bit is that inside this JSON or YAML — each of these — what was it I could call them? I think in the Docker world these were called manifests; I usually call them resource files in the Kubernetes world — they're internally versioned. There's a kind and an apiVersion spec'd internally. So if you have an old command line, it's going to generate an older-spec resource.
B: If you have a newer command line, it's gonna generate possibly a newer-spec resource — but you can use some command-line flags in order to say: no, give me the older-style pod. So the pods themselves, as well as the resource types and the API endpoints, are versioned independently. That helps, to some degree, handle backwards compatibility — because, you know: deploy this in the old style, even though I'm shipping it to a new API, right?
B: If you wanted to ensure that they all had a specific version of kubectl, or a specific version of oc — and you never needed to warn them, and never needed to give them any updates, and it was all just automated — you could deliver their whole development experience via OpenShift and have it mostly be hosted, just like Learn OpenShift is a hosted experience. I didn't install my kubectl command-line tool; it's in the cloud somewhere, you know.
B: A lot of things in OpenShift are like that — you can use CodeReady Workspaces to make whole kind of hosted IDE experiences available, but we try to have a lot of the command-line experience hosted and available via the OpenShift developer perspective. So we're not covering that today; we may cover it in later weeks. We just wanted to show you the underlying, you know, low-level Kubernetes basics, and then we can show you how OpenShift makes it look nice on top — but that's kind of a separate topic.
B: A lot of these things have been kind of relocated on the Kubernetes API, and so this may be the right way to define a pod — because pods are basically v1, meaning the spec isn't going to change very much. Deployments are still changing a bit, and so you can internally version them in order to reproduce the experience of an older cluster or a newer cluster, or whatever kind of behavior you want out of your pods.
B: Well — good question. Let's see what other examples I have here. So, we did two containers; we did the constraints pod. The last example we have is just cleanup. So: if you are in the default namespace, don't delete your entire namespace, because you'll delete too many things. Okay, so —
B: — but Kubernetes has such great high-availability features — since I made a deployment instead of a pod, Kubernetes is automatically going to notice that this pod has gone missing and will go and stand up a new pod, and try to get me back to my minimum replication level of one, I think, for this deployment. Yeah — it looks like this one is hitting some timing issues, but what I can do is `kubectl delete deployment`, and then the deployment will in turn delete any dependencies that it has, including the pods.
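Sketching the cleanup behavior just described (resource names match the earlier examples; the pod suffix is a placeholder for the generated hash):

```shell
# Deleting just a pod only triggers a replacement from the deployment
kubectl delete pod sise-<hash>

# Deleting the deployment cascades: the replica set and pods go with it
kubectl delete deployment sise
```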
C: The last part of this says to delete the two containers and the constraint pod, but that sise pod — or deployment, depending on how it was done — I mean, they aren't really shown being deleted in the steps. So should we add that to Kubernetes By Example — to say: hey, you should delete that? Yeah.
B: For me, the big takeaway is that if you have a very recent version of kubectl, you will be making a pod instead of a deployment. So I can clean up some of the wording to basically indicate that you're making a pod. Is it safe to assume that everyone's going to make pods? Or should I put — maybe I need a warning for folks on older clients.
A: Fine, gotcha — that's why I'm around. Excellent.
B: So I'm going to go right back to the playground area and hit Start Scenario again, and while that's loading I'll hit next here. The next page we have is labels, so we can start talking about this one while Katacoda is loading up. Labels are like — if you think of Kubernetes as being from Google, and Google having a lot of processes — web processes — that they're managing: generally, when you have a sufficient number of processes, you need to start being able to manage things in bulk, and that's where labels come in.
B: These labels give you a key-value pair that you can select on: you can select on the keys, you can select on the values, you can select on both. And so labels are a great way of doing bulk operations, or bulk selects, against the API. This is also related to another topic we have, services — if we get to services today: services actually use a label selector in order to figure out where to distribute traffic. So hopefully we'll get to that topic next — but first, let's take a look at `kubectl apply`.
A: So go ahead — log in as developer. There's a question about labels versus namespaces. Labels are just like tags; that's kind of what you can use labels to do things with, right? But namespaces are actual virtual resource isolations — so that's more like a physical-versus-virtual thing, right? Like, a namespace is going to lock something — some code, some processes, some pods — into a certain —
B: Yeah, exactly. I think one thing to note is that namespaces — if you're looking at where all of these assets end up getting stored, in etcd — etcd has a path, like a URL path, where all of these resources are getting stored, and the namespace is very early in the URL path; it's before the resource type.
B: Then this should — here we go, over on this end: labels, and `env=development`. Maybe this is relevant for me, maybe not — I don't know; you can provide your own labeling scheme. There's actually been a lot of discussion — you know, on the Kubernetes API we have pods, we have deployments, we have namespaces, we have replication controllers; we don't have a top-level resource for applications. What is an app? You know, I don't know — some people are really bothered by that, and some people aren't.
B: But "what is an app" is something that the Kubernetes community — specifically the SIG Apps community group within the Kubernetes community — has been chewing on: what is an app, and how do we describe it? And so they had an app specification working group effort, and the output of that effort was basically to decide on a group of label selectors that you can use in order to identify your app, and in order to express the relationship between different microservices within your application scope.
B
So you can use namespaces as a way of bucketing an app and indicating that everything in this namespace is my app, right? You could also use labels and say, you know, label it with app= — whatever you want to. But there's a more formal way of using these labels, specifically for application components.
B
And so, if you happen to be using OpenShift and the odo command-line tool, we will automatically attach all the appropriate labels so that your microservices app is labeled in terms of the Helm 3 application component labeling. So that's a nice perk if you're just getting started with learning Kubernetes and you don't know specifically which labels to set, but you kind of want to follow best practices.
B
So you can also apply labels to things that are already up and running, so I'll drop in this example command: kubectl label pods — and this is the label example, so we're selecting... let's see... all right, here's the resource type and the resource ID, and what we're asking is to apply this new label onto this resource ID of this resource type. So now, if we rerun get pods with show-labels, I can see I have two key-value pairs: env=development and owner=michael. Thank you.
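As a sketch of that sequence — the pod name `labelex` and the label values are assumptions taken from this walkthrough, not requirements:

```shell
# Attach a new label to a running pod (resource type, then resource ID):
kubectl label pods labelex owner=michael

# Confirm both key-value pairs are now present on the pod:
kubectl get pods --show-labels
```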
B
Well, hopefully everyone was able to reproduce that step. Now we can see an example of how to use the selector flag: you can do the same get pods with --selector and say, only the ones that have the key-value pair owner=michael. It looks like we have our only pod that we've created so far, but we were able to find it.
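A minimal sketch of that selector query, again assuming the `owner=michael` label from above:

```shell
# Return only pods whose labels include owner=michael:
kubectl get pods --selector owner=michael
```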
B
If I change the owner to Ryan, it should very clearly — yeah, no, nothing yet. All right, cool. So --selector can also be abbreviated to -l, which is useful: instead of --selector, you can just do get pods -l. Good example here — I'm going to use the apply command, so here's an example of using apply to update an existing resource. We saw that this was labeled with env=development.
B
If you wanted to do something where you suddenly promoted these workloads from development to production, you can have a service that uses this label selector to figure out: hey, what are the production workloads? Let me ask the API. The API will know, because I've labeled all of my production workloads. Let's label this workload as a production workload so the load balancers can find it. That's kind of how I would walk through this. So we know we've created it with a certain set of labels.
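One way to sketch that promotion is to overwrite the existing label in place; the pod name here is assumed from earlier in the walkthrough:

```shell
# Overwrite the env label, promoting the pod from development to production:
kubectl label pods labelex env=production --overwrite

# Any service selecting on env=production would now match this pod:
kubectl get pods -l env=production
```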
B
If you want to make sure certain workloads only land on certain hardware or certain nodes, you can use taints and tolerations to direct categories of workloads that have been labeled a certain way to land only on certain nodes — yeah, but that's cool, that's its own topic. I think what I have seen as the main standard is that the top-level, most critical label is app=, and then you have some name
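A hedged sketch of how a taint and a matching toleration fit together — the node name, key, values, and image below are all made up for illustration:

```shell
# Taint a node so only pods tolerating dedicated=gpu can be scheduled onto it:
kubectl taint nodes node1 dedicated=gpu:NoSchedule

# A pod carrying the matching toleration (and so allowed onto node1):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: main
    image: registry.example.com/myapp:v1  # placeholder image
EOF
```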
B
for your app, and then you'd label each of the things that are in your app group that way. But then there are also additional labels to specify how the components within that application group relate to each other. We can definitely cover that as a follow-up topic. Generally, when I do my introduction to OpenShift featuring odo, that's a lot of what we look at: we use odo to create a couple of pods or deployments, and then we look at what labels it got. Are these labels useful? Are they significant?
Cool. So let's see — we promoted this workload to production. Here's a really interesting one: the selector functionality has been enhanced. You used to only be able to do get pods -l — that's our shorthand for label selector — and select on key-value pairs. Now you can select on anything that's in a set, right? You can do some kind of interesting set-based queries on the API.
B
Give me the things that have been labeled production or development — as long as they have an environment assigned and it's one of these two environments. You know, maybe I only care about production and pre-production, but not about dev. I can select on whatever is relevant to me, and that's the takeaway.
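A sketch of that set-based query — the quotes matter, since the shell would otherwise try to interpret the parentheses:

```shell
# Match pods whose env label is either production or development:
kubectl get pods -l 'env in (production, development)'

# Match pods that simply have an env label at all, whatever its value:
kubectl get pods -l env
```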
B
Let's see one more example we have here. So, a couple of ways to do cleanup from this section: we can delete pods by ID — we've already seen how to do that — or let's use the big hammer, which will delete anything matching this label selector. Be very careful: you probably want to run a kubectl get pods with this first, to make sure you know what you're about to delete. I already did the get, so I know I'm only matching on these two things.
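The get-then-delete pattern described above might look like this, with the label values assumed from earlier:

```shell
# Dry-run of a sort: see exactly what the selector matches first...
kubectl get pods -l 'env in (production, development)'

# ...then swing the big hammer against the same selector:
kubectl delete pods -l 'env in (production, development)'
```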
B
I guess I could just do — is this the same if I just omit the resource ID? I think the reason I need it is because I had the resource type as a wildcard, and then the resource ID needs to be supplied as well. Right, maybe. Cool, yeah. But definitely, instead of using this --all-namespaces, generally if you're using kubectl it's a best practice to always add -n and then whatever namespace you're
B
actually trying to work within. I ought to just add this on to every single command I type, you know, but I'm used to using oc, and OpenShift just makes the namespace sticky for me, so I only need to swap namespaces as needed. So I am not as good about having the muscle memory to automatically append my namespace, but it's a good practice to try to scope your namespace into your commands and not just assume that you have the right context.
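A sketch of both habits — an explicit `-n` on each command versus making the namespace sticky, the way `oc project` does. The namespace name is a placeholder:

```shell
# Explicitly scope each command to a namespace:
kubectl get pods -n my-namespace

# Or make the namespace sticky for the current context (similar to oc project):
kubectl config set-context --current --namespace=my-namespace
```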
B
The first example we have: a deployment is a supervisor for pods, giving you fine-grained control over how and when a new pod version is rolled out, as well as allowing support for rollbacks to a previous state. So let's start this off. We can copy this whole thing, and I think first I'm going to open this up in a browser so we can take a look at the YAML. So this is our "before" picture, and I think it's — I don't know — less than 30 lines.
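The "before" manifest being described is roughly this shape — a minimal sketch rather than the site's exact file; the names and env var follow the kubernetesbyexample walkthrough as best I recall, and the image is a placeholder:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sise-deploy
spec:
  replicas: 2                      # two pods requested by default
  selector:
    matchLabels:
      app: sise                    # the deployment finds its pods by label
  template:
    metadata:
      labels:
        app: sise
    spec:                          # second, nested spec: the pod template
      containers:
      - name: sise
        image: registry.example.com/simpleservice:0.5.0  # placeholder image
        env:
        - name: SIMPLE_SERVICE_VERSION
          value: "0.9"
EOF
```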
B
So, one thing to note: I mentioned that we would have five fields — apiVersion, kind, metadata, spec, and then the last one is status. We don't have a status yet because this has not been shipped to the API; the API populates the status field. But we can see we have two replicas by default, we have a selector — we have some label selectors in here already — then a template, and then there's a second spec field.
B
The nice thing about pods is that a pod allows me to encapsulate multiple containers, or multiple processes, in the pod abstraction. If a single process fails, that pod ought to be able to recognize that the process failed and report back "pod down," right? When that happens, generally that pod will have its status reported to the API as failed, or something crashed, and the way you recover from that scenario is you have a deployment that will automatically stand up a new pod to replace the old pod.
B
If everything goes away, the deployment is still stored in the API, so the deployment is going to keep attempting to achieve whatever was in its spec statement. It's going to create new replica sets and new pods as many times as it needs to in order to try to achieve its goal scenario. Hopefully that helps, yeah.
B
So, you know, it's doing that rollout, and rollback is when you're swapping between different replication controllers or replica sets. We'll see that in this example, I think. Let's go through some of these steps, revisit the question at the end of this section, and see whether we've figured it out by then. All right, cool. So we looked at the before picture; now I'm going to copy and paste.
B
So we saw in the initial spec we had the replication factor set to 2 — that's what we're requesting from the deployment, and that's why we have two results here in the list of pods. You can also see, from the replica set — which you can shorten to rs — how many are desired, how many are current, and how many are ready. And so if we bump this container from v1 to v1.1, we'll end up doing a rolling deployment from one replica set to the next.
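A sketch of kicking off and observing that rolling update — the deployment name is assumed from the walkthrough, and the updated manifest filename is hypothetical:

```shell
# Apply the updated manifest (e.g. bumping the version or an env var):
kubectl apply -f updated-deployment.yaml

# Block until the new replica set is fully rolled out and available:
kubectl rollout status deployment/sise-deploy

# Both the old and new replica sets are visible during the transition:
kubectl get rs
```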
B
So, let's see — it looks like what we're going to be doing is changing this environment variable. The container has an environment variable set to value 0.9, and we're going to be updating it to value 1.0. Another thing we could do instead: we could also change the image version here.
B
That's another thing — I don't think we have a 0.60 release, but that could be another thing we might be modifying in this apply statement, or that a person could be modifying as part of their apply statement. So it looks like what we're doing specifically is changing an environment variable, and we should be able to observe the result as we make that change. So I'm going to run the apply, and I'm also going to run a kubectl watch.
B
Let's see — let's do watch. I'll do kubectl get rs, and I'm going to add a -w flag. What this tells the API to do — you can also spell it --watch — is, instead of just asking the API once and saying "get me the replica sets," it will continually stream responses from the API. So any time there's a change to replica sets within this namespace, we will automatically get a streaming update on the progress.
B
It looks like we're all rolled out and fully available. So here you can see the earlier deployment and the later deployment, and it looks like we started off with two pods on the earlier replication controller and zero on the newer one. What these deployments allow you to do — if your app is a stateless app, without a lot of resident memory or disk stores that it depends on —
B
is essentially very quickly scale up a spare on the new replication controller and then add new labels to move it behind the load balancer so it starts getting traffic. Once this one starts getting traffic, we'll scale down the old replication controller to try to keep us at our requested replication level of two — we've got one and one. Then we scale this one down as we scale up a new one on the new replica set. So hopefully — the goal is:
B
I want to know who's keeping track of this change, who's responsible for it, who's going to ensure that it happens this way next time. And so I'm a little bit scared of this edit command, because it lets me edit resources directly on the API, but it's really nice for learning and experimenting. So I'll do this edit with the resource type and ID. You can see there are a lot of annotations, and here are our label selectors, and —
B
yeah, we've got more labels in here. Okay, so: creation timestamp — we've got everything all set in there. And I didn't make any changes, so it was smart enough to recognize that no changes were made and we're not going to push anything to the API. But I could very easily use kubectl edit — in whatever my default editor is — and go ahead and change the replication in here. I can say: replicas, I would like three replicas, please.
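If editing live objects feels scary, `kubectl scale` makes the same replica change without opening an editor; the deployment name below is assumed from earlier:

```shell
# Interactive route: opens the live object in $EDITOR:
kubectl edit deployment/sise-deploy

# Non-interactive equivalent for just the replica count:
kubectl scale deployment/sise-deploy --replicas=3
```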
B
We checked the replica sets, we saw that there were two, and we were able to do a rolling deployment. If you have access to the inside of the cluster — via minikube ssh or other means, as we covered in the pod section — you should be able to curl that workload directly and see that your version number reports 1.0.
B
So even if we do a rollout undo — and we say we want to roll back to our previous release — that can be done. But you should see that now we're at revision number three, because even though we went back, it was another change we needed to record and keep track of. So now I should be able to kubectl get pods — looks like some of this is still in process.
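A sketch of that rollback flow, including checking which revision you're on; the deployment name is assumed:

```shell
# List the recorded revisions for the deployment:
kubectl rollout history deployment/sise-deploy

# Roll back to the previous revision -- note this records a NEW revision:
kubectl rollout undo deployment/sise-deploy

# Or target a specific revision explicitly:
kubectl rollout undo deployment/sise-deploy --to-revision=1
```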
B
We could take a look at the JSON and see if this has the right — yeah, here we go. It's been rolled back to 0.9, so we have the right status in our pods on the API, managed not by us manually: the deployment did the updates for us. So pods are a really nice fundamental to be aware of in terms of architecting your solutions.
B
But generally, it's like when I went through math class: I spent a whole month learning how to do long division by hand, and then at the end of the course they're like, "Oh, by the way, here's a calculator." It's good to know the theory, but there are easier ways of managing some of these solutions, and some of it is still evolving and growing. But hopefully this gives you a nice low-level
B
introduction. The last step we have here is to delete the deployments. I could delete the pods first, but as we saw from our pods example, deleting the pods first means those will automatically get recreated by the replica set. If I delete the replica set, that should automatically get recreated by the deployment. If I delete the deployment — that's my top-level resource — that should do a cascading delete of everything else that was managed underneath. Any questions on that section?
C
Not sure if this is possible, but do you know if there's a way to do, like, a recursive lookup? The pod is owned by the replica set, which is owned by the deployment. We know we're working with the pod, but how do I figure out the deployment, the top level? I don't know if that's possible or not, but —
B
Yeah — well, just by looking at the name of the pod: if the pod was created by a replica set, it should have the name of the replica set in the name of the pod, and if the replica set was created by a deployment, the name of the deployment will also be embedded in the name of the pod. So if I scroll up — let's see if I've got an example here. Okay, so in this list of pods, this is the name of the deployment here.
B
Here we go — here are our pods up here, and these are both part of the same replica set, so they both have different pod IDs. Here's the replica set ID, here's the deployment ID. And I just did a delete — I just did a cleanup on the API — but I should have been able, theoretically, to just copy
B
and paste this, which should get me the deployment that the pod belongs to. So you get some of that just based on naming, and if you have a service account, you can ask the API and get more information by querying the API as well. Each of these pods should have a service account mounted by default — I'm not sure how many credentials have been granted to that service account.
B
But that's something that — if you wanted your pods to be super aware of how they relate to the Kubernetes API, you can use labels to make those pods aware of the relationships. One way they can get that is through the downward API, which doesn't require a lot of permissions or adding service account permissions. You can have the pod ask the API directly, but then you're tying the functionality of that pod to the API, and that's kind of a —
B
Do you want to create a dependency on the Kubernetes API? I don't know — it's kind of a separate issue. But yeah, it's definitely all possible to make these workloads more aware of the management system and engage with it in order to automatically scale other workloads or automatically adjust the system to tune performance for users.
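A sketch of the downward API approach mentioned above — exposing a pod's own metadata to the process without any service-account permissions; the names and image are placeholders:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: registry.example.com/myapp:v1  # placeholder image
    env:
    # Inject this pod's own name via the downward API -- no API call needed:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    # Labels can similarly be exposed through a downward API volume.
EOF
```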
B
Cool, good question. Let's see — we are getting close to ten o'clock, and I think we had budgeted two hours. So now is the point where I can either wrap up on services — let me scroll down and see how long this page is — or we can wrap on the whole topic and come back and cover services later. I think what I would like to do — Chris, do you have suggestions?
A
More information: learn.openshift.com, right? As everybody's learned today, that is a very, very important piece of the puzzle for getting their Kubernetes and OpenShift education. There's also demo.openshift.com, which my team has built to help if you need to roll out demos to work on some scenario-type things, and then obviously Kubernetes By Example, which teaches you the individual components and so forth. So yeah, it's pretty awesome, all the stuff we've got going on. There are some questions in chat.
B
The one I would point to most readily would be learn.openshift.com — you do not even need to sign up for that one: no credit card required, no email required, no login. The downside is it times out every hour. You can reload your browser as many times as you like for additional hours, but you lose your work, so
B
if you want to keep your work longer, the suggestions I have would be: go down to that DIY page on Kubernetes By Example. The OpenShift playgrounds are a good example — the one we just mentioned. Minikube is solid. Also try.openshift.com: if you log in there, you should see the ability to set up OpenShift on just about any hardware, any cloud you like. Another solution in there is to use a VM and set it up on your laptop using CodeReady Containers. So that's —
B
If you're interested in seeing what's up and coming in 4.6, tune back in two weeks from today — same time, same place, same channel. Serena will be around, I'll be around, and a couple of other folks from the team as well. We would love to show you the latest and greatest new functionality, but we'll also be covering remedial topics like low-level Kubernetes fundamentals — we'll probably hit that on July 7th, when we'll have a follow-up on this session, and more topics.
A
We have a feedback form — let me find that real quick and drop it in the chat. I think it's as simple as I think it is, but yeah. So the next time we are live here — actually, this is the last show this week, because we're in meetings all day the next few days, and then Friday is a Red Hat "recharge day," so most of the company is taking the day off to recharge,
A
given the current scenario that we're in. But on Monday we are coming back strong: Monday, June 15th, an all-day stream — the OpenShift Commons gathering, all day long, non-stop. Diane has some amazing folks lined up to speak. Please, please, please check out our streaming calendar — I will definitely drop a link in, because I have that one memorized — and subscribe to it by hitting the little plus button in the bottom right-hand corner, by the Google Calendar logo; there's a little plus sign you can grab. And if you head to —
A
so if you head to red.ht/stream-feedback, we will get that feedback, and you can actually make suggestions for shows and such there if you would like. And if you're able to come on and present — maybe you're a partner or a customer that wants to present some unique thing on OpenShift — we're happy to have you on, so reach out through that, feel free. But yeah, that's it. Thank you all for joining us today.
A
"Can I build a cluster for free?" Check out CodeReady Containers for a cluster build-up. "How many master nodes, etcd nodes, and infra nodes can we have?" For etcd, only three — that's why we only run three masters, because etcd runs on those. Your infra nodes, same deal: etcd only on the three master nodes. "How can I build a cluster for free without a credit card?" Again, CodeReady Containers will help you on that journey.