From YouTube: Ask the Product Manager Office Hours: Top 5 problems with Kubernetes and how we are fixing them
Description
Top 5 problems with Kubernetes and how we are fixing them, with Mike Barrett
OpenShift is used by over 1,000 customers. Those customers call Red Hat support when they have questions. I'm going to take you through the top 5 issues that come up the most.
A: Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to OpenShift TV. We are joined today for the Ask the Product Manager show by the one and only Michael Barrett. Michael Barrett is the Senior Director of Product Management for OpenShift. He answers to all masters, right? Like, he has to keep the customers happy, he has to keep the business happy, and he has one of the hardest jobs here at Red Hat, I feel like. Mike, would you like to introduce yourself to the audience, please?
B: Thanks, Chris. So, as Chris said, I'm the Senior Director of Product Management. I've been with OpenShift since, you know, the 2.0 release, so for quite some time, going on seven years. I do have the fortunate experience of being kind of the blend between customer escalations, the product direction, and, you know, revenue-facing obligations. It all mixes together, kind of,
B: at my level. So it's a great job, it's a great place to be. I love being a product manager; I love that discipline. You know, it's just a fantastic situation for me. How I got into it is even more fortunate. So I started my career at Sun Microsystems (oh wow, okay) in the late 90s, and when I went into Sun, I went in at the bottom. And you start growing and picking up more and more, and that's when I started to get into flying in fixes and, you know, doing some sustaining work a little bit, and then I went into product management after that. Nice. But I was always attracted to resource management in the operating system. I was attracted to, like, what we called LDOMs back then, and Solaris Zones, and in earlier Solaris they were called lnodes. So that area, I...
B: It's like, whoa, the stuff you can do. So then, you know, I started pushing those technologies pretty aggressively, and I went out and I flipped the world to Solaris Zones, right? It was a strategy that Sun, at the time, was pursuing: away from LDOMs, away from hardware hypervisors. Yeah. And then, right before the Oracle acquisition,

B: I went and switched them all back to LDOMs, because we were trying to, like, save the company and make money, and SPARC was more expensive and had a higher profit margin, and we never really got the, you know, the community around OpenSolaris that, say, Linux had.

B: So it was weird, weird, switching forwards and backwards. And then I went to Oracle through the acquisition, and they're like, well, you know a lot about Zones and LDOMs and infrastructure virtualization; why don't you accelerate Oracle applications with those technologies?

B: And that sounds a heck of a lot like a PaaS at the time. Yeah, yeah.

B: So we start getting into PaaS. You know, you do that for a couple of years, you start looking at, hey, Heroku and all these other competitors. And that's when I was researching a competitor and found OpenShift, and wow. I went to the OpenShift website, and I was able to get a login and a password and deploy an app within, like, a couple of minutes, and I was like, holy...
A: Yeah, and we're so happy to have you here, seriously, not just on the show but, like, at the company in general. Right? Like, you bring a wealth of knowledge and just industry experience that's rather intensive and deep, especially in this, you know, OpenShift realm or Kubernetes realm that we're living in now. I say realm because, you know, Solaris Zones are still out there somewhere. You know, there's still LXD and LXC containers out there, still, you know, there...
A: So yeah, you decided the topic today, which I thought was brilliant, right? Like, what are the top five bugs we've had this month? All right, like, this is a brilliant idea, and, like, going forward, I would like to see more stuff like this, right? Like, let's help our customers directly. Let's use the data we have as OpenShift, as Red Hat the company, and let's use this channel as a medium to deliver those hot fixes, those quick fixes, for those customer problems.
B: I love it, so let's kick it off. You know, before I get into the bugs, I think it's important to know what we're working on in general and how the bugs fit in, and then I'm going to talk about how a bug even gets fixed, and, you know, what releases it's possible to get a fix on. So there's a lot going on in the next two years, and...
B: This slide is all over the place, but if you start from the bottom and go up, just really, really briefly: you know, the world wants to blend public providers with on-premises experiences. You see it with Tanzu, you see it with IBM Cloud Satellite, you see it with Outposts, with Azure.

B: So OpenShift has to work in all of those, and we're working aggressively, obviously, with IBM Cloud Satellite, but we're also working very aggressively with Azure Arc; you saw that at Summit, the demo with Microsoft. So those are the two hot irons in the fire right now, but all of them have to reconcile with us in how we do networking and how we fit into their larger scheme.

B: So, you know, most of them are running an agent ("agent" is a bad word now, you call them a pod, right) on your cluster, and then once that pod is in your cluster, you can incorporate it into that larger solution. So you see a lot of work going on there. The next set up, you see a ton of work going on in what we call KNI, Kubernetes-native infrastructure, right, and this is pushing Kubernetes primitives:
B: how you explain, how you write your pod specs, how you do the kubectl apply of a YAML file, and making sure you can do that to infrastructure components now. Bring on a bare-metal server, do those types of interesting things. And that's where you see us doing KubeVirt, you see us doing Kata Containers, you see us doing some bare metal (Metal3 is the project upstream).
B: You see us getting into high-performance networking more than we ever have, with SR-IOV and offloading, so a lot of cool stuff there. The next one up is, you know, customers have come to us, now that we're a couple of years in on being sort of this agnostic layer that you run everywhere, on all the public clouds, and they're starting to realize: well, hey, I actually like maybe how they do permissions on AWS, and maybe I don't want that to be agnostic.

B: Maybe I actually want to dig in a little more aggressively into what AWS does there. And so that's what we're going back to now, and we're adding some of those opportunities.

B: So if you wanted to dive deeply into IAM roles and have that integration exposed to you in the OpenShift process, we're going to add that this year, and we'll do it for Google, and we'll do it for Azure after that. So you see us having the agnostic abilities, but also, where you have demanded it, we'll start exposing you more to those underlying platforms, right? And that gets into routing and load balancing and all that.
B: Fun networking stuff, yeah. Lots of really cool stuff going on around the scheduler, around how you evict workloads, around how you balance and rebalance a distributed system, so a lot of theory coming to actual code there. cgroups v2 is pretty exciting; that's coming out. KMS is fin...

B: now, yeah, KMS is finally getting some legs, so we're going to do some Vault integrations this year with that. The next layer up: a lot of serverless, a lot of event-based API platforms, right, and how the gateway now facilitates an easier flow with your serverless topologies. So a lot of cool stuff there. On top of that, you have layer seven, right: what you used to buy as your hard[ware],
B: your firewall appliances, are now baked into Envoy, and you have this Istio control plane, and now people want to do more with that in their applications. So you see that coming. Then we have this, and it blows my mind; I have a lot of them, maybe they're on the phone today.

B: A lot of customers are starting to move away from, say, HTTP and moving to gRPC, and it is just mind-blowing to go through that translation. Because, you know, what they're noticing is: you take the JVM and it's got all these modules around it that make it fatter and fatter and fatter: logging, monitoring, how it does process handling.

B: You don't need any of those if you've agreed that your platform is Kubernetes, right? If you've made that mental choice that "I'm going to run these apps on Kubernetes," you can replace some of those components of the JVM with the Kubernetes services. Yeah. And when you move to gRPC, you're doing that with a lot of really lightweight Javas, and that's really speeding up a lot of applications. On that side, there's even exciting stuff going on, right?
A: Yeah, like, kubectl and oc are going to become like SSH, right? Like, "if you have to use that, there's a problem" kind of scenario. That's where we're trying to take the platform with GitOps, and, you know, our teams are teaming up with all the GitOps platforms of the world so you can agnostically choose which way you want to do GitOps. You know? So, yeah.
B: Yeah, and then the last one is the autonomous platform. So, you know, we've gone through a lot of engineering work to connect the OpenShift 4 platform, should you allow it to be connected,

B: (it's still up to you) back to us, and then applying what we call Insights to the telemetry that we're seeing. And it's really been remarkable. Even the customers that don't know that they're helping are helping, because an engineer sees error messages and events, and now all the engineering scrum teams, as part of their sprint process, are looking at telemetry and deciding whether or not the application is behaving as they coded it.

B: That's a fundamental shift; that's part of their sprint process now. And we generate, probably, on any given, I'd say, release, which is a three-month period, we'll generate probably 15 to 20 fixes in the product that no customer even called on, just...
A: Just because they're sharing data with us, right? And it's a minimal amount of data, at that, that they're sharing, but we're able to do a lot with it, which is why we encourage people to turn on telemetry: because we can forecast problems before they ever become a problem, to an extent, right? That's it.
B: So that's really exciting. So, you know, that's a lot of work, and that's happening in the next two years. Let's talk about patching. So when you get OpenShift, and you decide to buy OpenShift, you're buying a support contract, in reality, right? You could have gotten the software, it's open source, from OKD or from the Kubernetes upstream. So what you're really interested in is support, life cycling, fixes, things of that nature.
A: Oh yeah, someone's gonna run this, be running this in 2022. Right? Like, the intention is for everybody to update every release, but sometimes that doesn't happen, and we know, as an enterprise software customer, that those releases sometimes take time, and you have interdependencies that rely upon that. And we believe in supporting what we create, and it's a great place to be, as far as, you know, a company ethos, right? So it is.
B: You know, I've talked to a couple of competing platforms out there in the market, because I'm interested in how they pull this off. Because, you know, for most of my career I worked in a proprietary environment, and then I went to Red Hat and it switched to being purely open source, and as a product manager that's hard to get used to, because you can't snap your fingers and get a feature.

B: The feature has to go through the upstream. People have to agree that the feature is needed; they have to agree on how it's going to be coded. So it adds some additional time there. But then you can't just fix things either: you have to put them back in the upstream, or else you're going to fork yourself. And so you ask a lot of competitors how they're doing it, and in small startups, a lot of them are just downloading the binaries and downloading whatever...
A: And if you look at Red Hat's engagement across the Kubernetes community, we're deeply involved, right? Like, to say that folks on this call aren't working on Kubernetes would be very false. Like, I work on the community Kubernetes side of things; you have to touch the Kubernetes side of things quite often. And when I say the Kubernetes side of things, I mean, like, github.com/kubernetes, right? That's what I mean when I say Kubernetes, like, that organizational structure.
B: And then that brings us to 4.x, and right now there are three versions of 4.x, and that falls in line with this link at the bottom of the slide (if you want to click on it in your free time, that's the upstream compatibility), and upstream carries three versions of Kubernetes as well at any time. So we're supporting: 4.5 is the current release, and then it's an n minus two. So you have 4.5, 4.4, and 4.3 getting support. So there are four code branches that our engineers are maintaining, right?
B: Yeah, I mean, it started... and look, it wasn't all altruistic, right? When...

B: We were solving that too, but we also wanted to make sure that anybody could upgrade at a very consistent pace and stay innovative with the Kubernetes release cycles. So, yeah, a lot of work there. I will say this on 4.6: you see it supported all the way out to May of 2022, and that's because 4.6 is going to be our extended release. Interesting.
A: Version there, so... oh yeah, how involved have you been in the Kubernetes LTS discussion? I ask because, I think it was... what was it, whenever we were in Seattle for KubeCon, 2018? 2017? I forget. But yeah, like, the LTS discussion was like a birds-of-a-feather discussion that I just started and decided, hey, I'll help moderate this, and it was the liveliest discussion.
B: Yeah, no, I have been listening over the shoulders of people, and I'm happy that upstream decided to go for a year of support for 1.19. I think it's a great move, and I'm happy that they decided to do that.
B: Yeah, so you notice on the last slide there's a four-month gap there between the Kubernetes release and the OpenShift release. And what are we doing? Well, we're going through our internal processes, right? We're contributing and we're taking part; we have tons of engineers in all the SIGs upstream. But when we downstream something, it's got to work in our ecosystem, with our ISV partners, with our other internal engineering teams within Red Hat, and that obviously will find bugs, find more issues that were in that Kubernetes upstream.
A: We are a certified Kubernetes distribution by the Cloud Native Computing Foundation. Right? Like, we cannot lose sight of that fact. Right, like, yeah, we are Kubernetes. It's not some fork, it's not something else. Kubernetes is underneath, and we've built stuff on top, right? Like, we've built stuff around it, inside of it, on top of it, to make it enterprise-ready. And, sorry, I just bumped my mic. My bad. Yeah.
B: And we go through a lot of sort of internal soul-searching to make sure that we're not in disagreement with the upstream. And that's been hard, yeah, it's been difficult, because, you know, some of the upstream may only have a cloud-provider point of view, and they couldn't care less about something that's happening on premises, like down at the super-low-level storage or super-low-level networking.

B: So it's hard to push some of those things at times, but we've always tried to stay consistent with whatever the larger community has decided. We're not by ourselves, by any means, in staying one release behind; you'll notice across the board that most people will stay one release behind.

B: In fact, I mean, looking at this, the only one that's consistently doing it at the exact same time would be the IBM Kubernetes Service, IKS, right, sometimes. So it's good to know what we're going through, because when you open up bugs... and, by the way, anybody can open up an account on bugzilla.redhat.com.
B: So you can do that today. And then, on issues.redhat.com, in the month of October you'll see us putting more and more OpenShift content (that's our Jira) out on issues.redhat.com, so you'll see a mix between Jira and Bugzilla there. Why this is important is because you'll be interacting with us over at Bugzilla, and you'll be saying, you know, why the heck isn't that engineer paying attention to me right now? Why can't I get his or her attention? And sometimes they're just trying to close something in their process.
B: So, if you look at a given release of OpenShift, we have around four to five sprints, and a sprint is three weeks long. So, you know, just like everybody else's agile process, we have a grooming week, we'll have a burn-down week, and we'll have an innovation or code week in that sprint process. We'll do that five times. At the beginning, we're obviously going to pick a Kubernetes release that we're going to be based on, right, so 4.6 is based on Kubernetes 1.19.

B: So that's what we're going through: the rebase is obviously the first thing to do. Then we start coding, right, and we'll code all the way for, like, three sprints, all the way to feature complete, and that's the beginning of that yellow line there, after feature complete.
B: We tell the engineers: hey, you've got to get this feature to actually work. Like, they're integrating it back to trunk, they're going through unit testing, they're going through their process, and now it's time to hone down and make that thing rock solid, and to look at all the other bugs that have been created because of the new code, and to start burning down those bugs to get ready for the code freeze. Right in that yellow slot is when we'll start planning the next release.

B: So right after they go into feature complete, we'll start thinking about the next release as they're burning down and getting the code to work extremely well, and then we'll go into code freeze. Along this line, you can, as a user, go to our nightly site and download OpenShift nightlies, and starting right after that first rebase, you would be on the actual next release of the product, if you wanted to start experiencing it super early. So you have an opportunity to look at that next release in our nightly drops.
B: Then we go through a GA process, and then we push it out. So let's look at what it means to fix one of these bugs.

B: We're in September. If you look at Kubernetes, they'll release a 1.19.2 patch set in September, and there's obviously the next release of Kubernetes, which has a branch and has code going into it. We'll release 4.6 in October, which is going to be based on Kubernetes 1.19. And then in November... well, you know, it's the end of October for 4.6, so at the beginning of November you'll see the first nightly, which is the OKD drop, for 4.7, which will be based on Kubernetes 1.20. And then we go into December.
B: But what if we hit a bug at the end of November? So it's a customer using 4.6, and they might even be using 4.5 or 4.4 at that point, right, but they hit a bug. We have to immediately look at the bug and see if it's in the existing branch that hasn't released yet in Kubernetes, so that would be Kubernetes 1.20 upstream. Yeah.

B: Upstream. Now we have to fix it in Kubernetes 1.20, and then we have to cherry-pick that fix back to the version of Kubernetes that we're on, which at that point is Kubernetes 1.19.2. And then we have to decide whether or not Kubernetes is going to let us fix that in the backported version, like if we're going back to 4.4, or if we're going back to 3.11.

B: Sometimes the upstream won't backport stuff to those older releases if it's outside their policy, and so we would be carrying that code in OKD for the backport. But nothing starts unless it's fixed in the upstream, in the current code base. And that's what keeps us from forking, because as long as we're always putting back to the code base, the upstream, then we're in the same code stream.
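The ordering rule described here ("fix newest upstream first, then cherry-pick back, subject to the upstream's backport policy") can be modeled as a tiny function. This is a toy sketch of the flow as described, not actual Kubernetes or OpenShift tooling; branch tuples and the policy window are illustrative.

```python
def backport_plan(fix_branch, active_branches, policy_window):
    """Return the branches that receive the fix, newest first.

    A fix must land in the newest active upstream branch first; it is then
    cherry-picked back only to branches inside the upstream's backport
    policy window. Branches are (major, minor) tuples, e.g. (1, 20).
    """
    newest = max(active_branches)
    if fix_branch != newest:
        raise ValueError("fix must land in the newest upstream branch first")
    return [b for b in sorted(active_branches, reverse=True) if b in policy_window]

# A fix landing in 1.20 gets cherry-picked to 1.19, but 1.18 is outside
# the (hypothetical) policy window, so the distribution carries that
# backport itself:
plan = backport_plan((1, 20), [(1, 18), (1, 19), (1, 20)], {(1, 20), (1, 19)})
```
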
B: So that's when we cherry-pick it back to 4.6. Then we have to downstream that, you know: Kubernetes 1.19.2, right, has to get downstreamed into OpenShift's GitHub, which is our downstream. And then every week we put out a z-stream now for the 4.x. So, literally, every week we're putting out code fixes to OpenShift.

B: And that's that over-the-air update process, right; that's the thing that you had, I think, Rob Szumski and Scott Dodson on to talk about. So that's the machine that we have to push out those fixes in a pretty rapid manner. I will say, right before we release a GA, we'll turn on our hosted services on that release. So right now we're working on 4.6: we're working with our OpenShift Dedicated hosted service, our SRE group, to start using OpenShift 4.6 before we GA it. We'll do that across all the releases.
B: There are some cool links at the bottom that talk about the Kubernetes patch releases, if you want to see where the patch sets are coming from and how you can contribute to those. So, let's get into the bugs, now that we understand the backdrop, right? You understand what we're working on long term, in general, how we have our releases, you know, how we introduce patches. Before I get into the bugs: did you see any chat questions? I haven't been looking.
B: There's not a direct correlation. You know, the biggest change that I've seen is that we used to have OKD be something completely different than a nightly; that would be the OCP nightly. And when we moved to 4.x, both OCP and OKD are pulling from the nightlies.

B: So the nightly almost becomes the new OKD, in that regard. But we do still, you know, pull down an OKD and make sure that you have a stable milestone, because if you're just feeding from nightlies, you're feeding from some pretty crazy stuff. Like, every night code goes into that; it could completely break the build, it could, like, do crazy...
B: So this is a good reminder that some bugs are just around the complexity of all these things coming together at the same time. So a customer... a lot of them called in, in September, and said: hey, this really cool feature, this new feature that you had in your logging solution (which uses Kibana and Elasticsearch and Fluentd)...

B: We added an ability to have auditing logs, infrastructure logs (like from the core framework itself), and application logs, and those are three index types, or three items.
B: They noticed that if you created a user locally and you gave him or her cluster-admin permission, that user would be able to see all three of those things. But if you added the user to an LDAP group, and they came in through the LDAP group, that person wasn't able to see the auditing and the infrastructure logs. And it was because Kubernetes has this thing called subject access reviews, SARs, and we were lacking this last group.
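The shape of this bug is easy to see in a minimal sketch: whether a user may read an index depends on the union of the user's direct permissions and the permissions of every group the user belongs to, and the bug amounted to leaving a group out of that union. This is an illustration of the access-check idea, not the actual SubjectAccessReview API; all names here are hypothetical.

```python
def can_read(index, user_bindings, user_groups, group_bindings):
    """True if the user may read the given log index.

    The effective permission set is the union of the user's direct
    bindings and the bindings of every group the user is a member of.
    """
    allowed = set(user_bindings)
    for group in user_groups:
        allowed |= group_bindings.get(group, set())
    return index in allowed

# A cluster admin coming in through an LDAP group sees all three indices:
group_bindings = {"ldap-admins": {"app", "infra", "audit"}}
ok = can_read("audit", set(), ["ldap-admins"], group_bindings)
# Dropping the group from the check reproduces the reported symptom:
broken = can_read("audit", set(), [], group_bindings)
```
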
B: In fact, none of these in September have been bad, as in, like, product-earth-shaking. Yeah.

B: The next one was: a lot of people noticed in their error logs this, you know, kube-apiserver alert firing all the time, and they weren't noticing anything broken in the cluster. Just, like, these... just the alert.
B: Just the alert, and it's a scary alert, right? Yeah. And you get tons of them; like, it's not just showing up once, it's like, it's like

B: a little denial-of-service attack on you with these things, wow. So they chased it down, after quite a bit of time, to it being around the API server rebooting. So if your control plane rebooted and not your kubelets, or if you had some network outage between the kubelet and the API server, the kubelet would have a hard time renegotiating, and it would throw the wrong handshake. And so a little bit of code on both sides of the...
B: Yeah, this one was an issue in OpenShift's code base, but a good reminder to Kubernetes operators in general. So we have this thing called the MachineConfig, right, and the MachineConfig is really... yeah, I think it's revolutionary. What you do is you describe how you want your operating system to look to Kubernetes, and Kubernetes is putting that change down onto the operating system, like when you want to enable or disable SSH. You know that.
B: Yeah, and it's in Kubernetes primitives, so you've taught Kubernetes about the infrastructure, and that's a big part of our KNI initiative, the Kubernetes-native infrastructure initiative. But in this case, we forgot to put a toleration on it, for the node. Because users are going to want to take nodes for their own application purposes; they're going to want to say, hey, you know that node? I don't want my ERP system ever landing on that node, so I'm gonna try to taint it away.

B: We didn't have a toleration where we were supposed to have it, on our machine config operator daemon that lives as a container after we deploy it. And so when the customer put their own taint on that particular node, it would cause our container to go away, and whoops: the machine config operator wouldn't be pushing down those changes. Yeah, and you'd have some nodes get out of sync because of that. So a good fix to have in.
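The mechanism behind this bug can be sketched with a hand-rolled taint/toleration check: a pod stays on a tainted node only if it tolerates every taint, so a daemon that is missing a toleration gets evicted the moment a user taints the node. This is a simplification of the real Kubernetes matching rules (no effects, operators, or toleration seconds); the taint values are made up for illustration.

```python
def tolerates(node_taints, pod_tolerations):
    """True if the pod may stay on the node.

    Simplified rule: every taint on the node must be matched by some
    toleration on the pod. Taints and tolerations are (key, value) pairs.
    """
    return all(taint in pod_tolerations for taint in node_taints)

# The user taints a node to keep general workloads off it:
node_taints = {("workload", "erp-only")}

# The bug: the machine-config daemon shipped with no toleration,
# so it was evicted along with everything else.
evicted = not tolerates(node_taints, set())

# The fix: give the daemon a toleration for that taint so it stays put.
stays = tolerates(node_taints, {("workload", "erp-only")})
```
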
B: This next bug was much like the other one, in that it wasn't causing any outages or problems. So, you know, any time we get bugs where it's not like "my arm is broken and blood is coming out of it," but it's like "my arm hurts once in a while on Tuesdays, and it doesn't hurt this Tuesday, and I don't know why"...

B: Yeah, anything where something's not specifically broken may take a while to hunt down. And in this particular case, etcd was throwing these error messages, right, yeah, and what we found was that etcd had the wrong test to determine whether or not its members were healthy. Wow. That's quite...
B: Yeah, so if you look through those etcd-io pull requests, you'll go through how we want to maybe determine better, before throwing the error, the gRPC "no leader" error,

B: putting in some more robustness around that particular error. That's awesome, yeah.
B: This next one took a while to figure out. At first we thought people were putting in passwords that had characters we didn't understand, or maybe the secret mechanism in Kubernetes was messed up, so lots of research went on. But it presented itself as: hey, I know I'm using OpenShift on vSphere, and all of a sudden I can't create any PVs; like, my dynamic volumes aren't getting created.

B: We do use the in-tree Kubernetes cloud provider to create storage from vSphere. Now, vSphere has other opportunities, right: they have their CSI driver, which works with their vSAN product line, their additions to vSphere, so we'll see the market going in that direction. But for the most part, a lot of people are still using the in-tree Kubernetes provider for their storage mechanism. But in this case, the secret itself, when it changed...
B: So, when you go and create the cloud provider for vSphere, you have to tell it your vSphere username and password, and that goes in a config file, and that password becomes a secret.

B: If you updated that file and updated the secret, this particular code on the storage side wasn't updating and seeing that the secret had changed. And so, had you changed your password, you would find that you wouldn't be able to create the storage. So that one's... all of these are fixes in flight. But those are, right now in the month of September, where we have the most customers.
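The failure mode described here, a consumer caching credentials at startup and never re-reading the secret, so a password rotation silently breaks volume creation, can be sketched in a few lines. Comparing a digest of the secret on each use is one simple way to notice the change; this is an illustration of the idea, not the actual vSphere cloud-provider code, and all names are hypothetical.

```python
import hashlib


class CloudProviderCreds:
    """Cached credentials that notice when the backing secret rotates."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._digest = hashlib.sha256(secret).hexdigest()

    def refresh_if_changed(self, current_secret: bytes) -> bool:
        """Reload the cached credentials when the secret's digest changes.

        Returns True if a rotation was detected and applied. The bug
        described above is equivalent to never calling this: the cached
        password goes stale after a rotation.
        """
        digest = hashlib.sha256(current_secret).hexdigest()
        if digest != self._digest:
            self._secret, self._digest = current_secret, digest
            return True
        return False


creds = CloudProviderCreds(b"old-password")
unchanged = creds.refresh_if_changed(b"old-password")   # no rotation yet
rotated = creds.refresh_if_changed(b"new-password")     # rotation noticed
```
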
A: I think that one sentence you said there: all of these fixes are fixes in flight right now. We are actively working with the various communities to get these fixes resolved and baked into OpenShift. So when someone asked in stream, "are we going to code and fix these bugs on stream, or not?"

A: Well, I can show you how to fix it, but it's not going to be, like, something that you can then take to work with you, right? Like, it's fixing underlying code, and we would essentially have to fork Kubernetes and OpenShift and everything else to make that fix. It's one of those things where we have to work with the upstream communities. Open source, right; open source is what Red Hat is all about, and vice versa.

A: So we have to work with the upstream communities to get these fixes into the open source projects, so that we can put them in our closed source... or not closed source, but our commercial product, right? Project versus product, right? We contribute to the projects so that we can build better products, if that makes sense.
B: Yeah. And so, if you're a customer on OpenShift 4.5, 4.4, or 4.3, you can call in and get fixes on those bugs as they're coming in, yeah, and we're in the process of going through the upstreaming and downstreaming. And you'll see us, like, maybe we'll do 4.5 first and get that out, and then we'll backport to 4.4 and 4.3.

B: So you'll see us use different cadences. Sometimes there literally won't even be a customer asking for it on 4.3, and so we won't backport it to 4.3. So it depends on the bug. Yep.
A: Creating a vSphere volume, right, like, that's supposed to be able to happen, so, you know, that's a bug that's, like, one of those "wait, this worked, and now it doesn't" kind of things, what broke? Versus something where it's, like, spewing errors every Tuesday, kind of randomly, depending upon which, you know, customer you are, or which setup you have. That kind of thing.
B: Yeah, I think you'll find that in most categories they're the same, and you don't even have to, you know, talk about KubeVirt; you're specifically talking about a KVM hypervisor against an ESX hypervisor, right? So if you go to the internet and you Google "KVM vs ESX," you're going to find a ton of information on what's exactly the same and where there may be differences.
A: We do kernel tuning and everything else as part of OpenShift releases, right? So that performance question is, to me, a good question for the channel, because it's something that we definitely keep an eye on, right? Like, we are aware of what the customers are seeing, what our competition is up to, or what our "co-opetition" is up to, and, you know, we try to make things as fast as possible. We do actually do performance testing, and, you know, we're in an active process of trying to get the footprint of OpenShift down.
A: That might be a different story for OpenShift Virtualization. But as Reese in chat just mentioned, the key thing with OpenShift Virt is that it's built on top of 10 years of experience with virtualization at Red Hat. It's built on the same technology as Red Hat Virtualization and OpenStack (remember OpenStack, people?), but with OpenShift Virt we've taught Kubernetes, OpenShift, how to orchestrate KVM-based VMs with libvirt and QEMU. We're always keeping track of performance with KVM: plenty of testing, bug fixing, optimization, et cetera. There are plenty of performance testing results out there between the various platforms.
A: Yeah, yeah, I mean, the fact that a lot of people are running, you know, their clouds on top of KVM versus vSphere is, you know, telling sometimes, right? Like, they saw a performance reason for that, and they're running their entire cloud on top of it for that reason. So yeah, it happens, but there's also the same scenario where people choose VMware.
B: Yeah, yeah, so that was just a comparison of KVM and ESX, but we love VMware. I mean, we work with them aggressively. Just about every week, you have people on your staff, Chris, that are assigned to work very directly with a VMware counterpart on reference architectures. So we're trying to keep both of our mutual customers extremely happy with their technologies, and we'll continue to do so.
A
Yeah, if you have virtualization questions in general, Andrew Sullivan runs a weekly or bi-weekly show about OpenShift administration. It's on Wednesdays; subscribe to the calendar. I'll drop a link in chat right now to the calendar. Tune in for Andrew Sullivan's OpenShift Administrator Office Hours and we can get all your OpenShift questions answered, because he's a virtualization expert too. So, double bonus there. Awesome, Mike, what else? Let's see. Gosh, we could talk for hours about all this stuff.
B
Yeah, that's a rough one. There's a lot of challenges, right? Yeah. I think what we're trying to do is make sure that we're not losing sight of the forest for the trees, or however the saying goes. We want to make sure that we're moving towards not just containerization as a technology, or Kubernetes; those words are awesome, but at the end of the day, we want faster applications to market, right? We want...
B
During the creation and the architecture of the platform, as you're standing it up, you're fulfilling those needs, and then you're letting this magic world exist inside it, which doesn't necessarily have to trigger all those processes, because the larger platform itself has already done that. And that's probably a lot of work right there, for me.
A
Yeah, yeah, I bet. And there's many differing opinions within the Kubernetes community too, right? Any time you're submitting a pull request, potentially, I imagine there's a lot of "why" behind the code that you're submitting, sometimes, because it is like, where did this bug come from, right? How often do you see, not pushback, but just delays or lag or anything like that? Or is our relationship pretty awesome?
B
Well, I mean, we touched on it a little bit before. Sometimes the largest part of our ecosystem, our partnering ecosystem, our contributors to Kubernetes, they're extraordinarily focused on cloud providers and...
B
Running Kubernetes on a cloud provider. Sometimes it's hard to get them to see the importance of, you know, this crazy variable that you need when you're just using EMC storage, and that would only happen on premises, right? And so that type of back and forth will happen the most around those situations. Or there's this belief that, because you're on a public cloud, you can get new virtual machines very easily, and they're almost free, quickly. Yeah, yeah.
B
So that bleeds through a couple of areas of Kubernetes, and sometimes we'll request things because it is hard to get a new virtual machine, it is hard to get a new piece of equipment on-premises, and so we'll want to change something to allow for that sort of lag time or that consideration.
B
And you know, you might get your hand smacked a couple of times as you're pushing that idea forward. But...
A
Yeah, we definitely try to work with, not against, always, right? But it is hard sometimes to realize, from someone's point of view of, you know, "I work at AWS, I work on Kubernetes for AWS," it's hard to see the forest for the trees. And I'm not picking on AWS; it could be Alibaba, it could be anybody. "How does this help my company?" Well, it doesn't. But in the Kubernetes community, it's always community over company.
A
So you have to keep that in mind as you're contributing: it's not just the company you work for, it's also the greater community of users, and that's what matters most, to me, I feel like.
B
You know, everybody was running around all the KubeCons last year saying the most important part of Kubernetes isn't happening in kubernetes/kubernetes anymore; it's happening in the CNCF and all the projects that are sort of growing around Kubernetes. And that's only because Kubernetes is extensible, right? You can have a custom resource definition, and you can use the declarative management of that API server and just tell it to do different things in your new project, and Red Hat had a big part in adding that sort of ability to customize Kubernetes.
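The custom resource definitions mentioned here are themselves ordinary Kubernetes objects. A minimal sketch of a CRD, just to show the shape; the `widgets.example.com` group and the schema are hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
```

Once applied, the API server serves `Widget` objects declaratively just like built-in resources (`kubectl get widgets`), and a controller or operator reconciles them; that is the extension point the whole CNCF ecosystem builds on.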
A
Yeah, we're opinionated to a point, right? Like, we have a standard set of tooling that we can run for you, through operators and everything else. But if you want to plug in your own thing, that's fine too. That's the beauty of Kubernetes: its extensibility. "Extensibility" is surprisingly hard to say, anyways.
A
Awesome, can't wait for it. So thank you very much, Mike, great having you on; look forward to having you back someday. And thank you, everyone, for joining us. We'll be back here at noon Eastern, 1600... wait, did I read that right? Yes, 1600 UTC, for OpenShift Commons Ask Me Anything on, oh, wait for it, wait for it: API data protection. Oh, so that should be fun.