From YouTube: What's Next AMA Panel with Red Hat Engineers, Project Leads, Product Managers and Guests
OpenShift Commons Gathering @ KubeCon NA, San Diego, November 18, 2019
I'm going to take two of them and ask one of my colleagues, and I will go up and down and answer. I know this is between you and the beers, but please, this is your opportunity to ask them any questions that might have come up today. I can feel the stage moving with the weight of the knowledge that's here. So I'm going to put one mic down here, because I know you guys talk a lot, and you guys are good, and I saw that you have one and I have one.
I joined Red Hat eight years ago. I was the product manager for OpenShift 1.0 and 2.0, and, no offense to the Broadcom guys, it wasn't that bad. But with OpenShift 3 we made a huge bet on Kubernetes. Obviously you see that we've continued that into OpenShift 4, enabling all sorts of new workloads and new capabilities. With OpenShift 4 we made two new bets, right?
We made a bet on an entirely new way to manage the platform, using operators and machine controllers and RHEL CoreOS and all the great stuff you heard about this morning. And then we also made another bet, a huge bet on bringing a whole new set of services to empower developers and DevOps folks: things like Istio and Knative and Tekton, and all the operator-backed services from our partners and so forth. So we're going to continue these bets. Hopefully you learned a bit about that today.
If you want to learn more, we'll be doing some sessions over at the conference, and obviously come to openshift.com. But at this point we just want to open it up for questions. We have many of our product managers and some of the engineers and others on the stage. So if you have questions, just raise your hand and grab a mic, and we will get it to the right person.
For using OpenShift Container Platform in a public cloud, is there anything coming down the wire that would allow me to run, install, and manage the cluster myself, yet pay for it by the hour? In my talks with Red Hat about licensing, I had to pay for my peak usage for the entire year.
The only thing I would add, and this was mentioned: we need to get the technology in place, and starting in December there's a new SaaS service that will be up at cloud.redhat.com called Cost Management, and you'll be able to get more visibility into the cost of your cluster. Once we have that in place, we can turn on the backend, which is our procurement process, and that's a whole other ball of yarn.
Around the service mesh, the Istio product: Steve was talking about extending Istio into the VM space. Is that part of the supported product roadmap, as opposed to upstream Istio? Or is the Istio service mesh delivered through OpenShift constrained to Kubernetes?
I guess I was looking for Brian; he's not here. So the question was whether we're planning to support Service Mesh, Istio, outside of a container-based environment. Right now we don't have plans for that. OpenShift Service Mesh, which is generally available now, by the way, so go out and you can use it with OpenShift 4.2 right now, is included as a service with OpenShift; we don't actually sell it separately. We also don't support it today outside of a Kubernetes-based environment through OpenShift.
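As a concrete sketch of what that looks like on the platform: OpenShift Service Mesh is driven by its operator through custom resources. The names below (`istio-system` as the control-plane namespace, `bookinfo` as an enrolled project) are illustrative assumptions, not anything stated in the talk:

```yaml
# Sketch only: assumes the OpenShift Service Mesh operator is already installed.
# A minimal control plane; the operator deploys the Istio components into this namespace.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
---
# Projects listed as members are enrolled in the mesh; "bookinfo" is a placeholder.
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - bookinfo
```

This is the piece that keeps the mesh scoped to Kubernetes namespaces, which is part of why extending it to plain VMs is a separate effort.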
C
There
are
some
interesting
use
cases,
obviously
for
service
mesh,
with
virtual
machine
based
environments
and
other
non
kubernetes
environments.
It's
not
in
our
product
plans
to
offer
that
as
supported
today,
but
it's
it's
something
that
we
do
contribute
to
through
the
work
being
done
upstream.
That
Stephan
and
Brian
talked
about
earlier
today.
So.
Our position on KubeVirt: in OpenShift 4, in the documentation links, you'll see down the bottom Container-native Virtualization, which is our productized variant of KubeVirt. KubeVirt, for those who don't know, allows you to deploy, manage, and run virtual machines on an OpenShift cluster, particularly a bare-metal OpenShift cluster. It is still in technology preview right now, but certainly on 4.2 you can go out and try it today, and we'd encourage people to do that. On Twitter I'm @xsgordon; I would love to hear feedback.
One of the projects that I'm really excited about the most is KubeVirt. Again, a lot of people who run OpenShift and Kubernetes run it in a virtualized environment. Obviously, what you heard this morning, and in some talks this afternoon, is that OpenShift runs great on bare metal; I think it's the best way to run Kubernetes. But when you run on bare metal, what do you do with the workloads that still run in VMs? Well, you bring those VMs to Kubernetes instead of bringing Kubernetes to the VMs.
So that's the idea behind KubeVirt: running a mix of container- and VM-based workloads, all managed with a Kubernetes-based control plane on the shared platform. It's very cool. It's in developer preview right now; it's called OpenShift Container-native Virtualization. So hopefully everybody here who's interested can go and try that out.
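To make "bringing VMs to Kubernetes" concrete: with KubeVirt a VM is just another API object next to your Deployments. A minimal sketch, assuming the Container-native Virtualization operator is installed; the demo container-disk image and the resource sizes are illustrative only:

```yaml
# Sketch only: a VirtualMachine managed by the Kubernetes control plane via KubeVirt.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true              # start the VM immediately
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: rootdisk
          containerDisk:     # boot disk shipped as a container image (demo image)
            image: kubevirt/cirros-container-disk-demo
```

Once applied (for example with `oc apply -f vm.yaml`), the VM shows up alongside pods and is managed with the same tooling as container workloads.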
So OpenShift, everything in OpenShift 4, has been open source the whole time. Our current thought is that pretty much everything that would be in OCP, in the core platform, would be part of OKD. There are a few components that don't run on anything except RHEL CoreOS right now, and some of the work around the bare-metal stuff; those are all roadmap items, to go make sure those work well with the Metal³ upstream, but it'll take us some time. For the number of operators above that layer, the community versions should work.
On top of that, we haven't quite gotten to that point yet. I think the focus for the next two to three months is going to be stabilizing: making sure we have a good, repeatable dev cycle, and making sure that folks in the community who want to contribute are able to do so. And then, I suspect, we'll start as we get into early next year.
There will be some bigger discussions in the community. We have a roadmap that was set out in August or so, where we're trying to break this down into chunks and make sure that we have a good, repeatable dev workflow. But there's no belief, I think, that we would hold anything back or change it. It would just be: are those components well integrated with OKD or not? And that's up to those upstream communities.
I think some people think of it as, well, isn't OKD the upstream for OpenShift? I mean, the upstream for OpenShift is ultimately Kubernetes, much like the upstream for RHEL is Linux itself. And so we do all of our work in Kubernetes first and then build stuff around that to manage the clusters. But OKD is important for many of our community members; it's sort of that pre-commercial distribution of OpenShift. We had some work to do, mainly on the Linux side.
We always planned to do it, but we needed to get a non-commercial version of RHEL CoreOS, which is Fedora CoreOS, that OKD is based on. It's out there today, and as it was in 3.x, it will iterate and also be the place where we try out some new things before they come into sort of a beta or dev preview in the commercial offer.
CSI: Kubernetes is going through this project of taking the in-tree storage providers and bringing them out to a Container Storage Interface implementation. So a lot of the work is taking the existing in-tree drivers, like iSCSI, Fibre Channel, Elastic Block Store, Google Compute Engine disks, all the storage that we support in Kubernetes, and re-instrumenting them on the CSI interface. A lot of the ISV vendors here are also porting to the CSI interface as well.
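What consuming a CSI driver looks like is largely unchanged from the in-tree experience: a StorageClass names the driver as its provisioner, and claims reference the class. This sketch assumes (as an illustrative choice, not from the talk) that the AWS EBS CSI driver, registered as `ebs.csi.aws.com`, is installed on the cluster; class names and sizes are placeholders:

```yaml
# Sketch only: a StorageClass backed by a CSI driver instead of an in-tree plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gp2
provisioner: ebs.csi.aws.com   # the CSI driver's registered name
parameters:
  type: gp2
---
# A claim against that class; the CSI external-provisioner creates the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gp2
  resources:
    requests:
      storage: 10Gi
```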
This is actually something I wanted to mention to everybody. One of the cool new things we introduced with OpenShift 4.x is the availability of something called OpenShift nightlies. OpenShift nightlies, once we start publishing them, are essentially early views of the next release. So right now OpenShift 4.2 is generally available to everybody and fully supported by Red Hat, but I think we already began publishing the OpenShift 4.3 nightlies.
This is actually a really important point for us. As part of the health monitoring program, when folks opt in to that service, you're actually sending back the versions. I've actually seen that quite a few people try out nightlies ahead of the GA version. We can do more to make that communication clearer.
So if you turn on CSI drivers and it causes your test cluster to crash, that's actually feedback that will come back through the health monitoring service, and that helps us prepare for the release. Because there's a lot of variety in customer and test configurations, we don't always have exact information about how you'd like to use things, and so participating in the nightlies, opting in to the health data monitoring, and sending us that usage data is incredibly valuable for us to support you better.
Just a quick question: I think there was a great discussion earlier this morning around digital transformation and how it's not just a technology problem. After all, all these platforms that they are talking about from an infrastructure standpoint lead to business delivery, right? If there is no business delivery, there is no meaning for these platforms. So is there any strategy around application development? I think there is work on accelerating the infrastructure lifecycle, but application development takes most of the time when it comes to business delivery.
I guess some of the work that we are doing with Knative, service mesh, and Tekton is all really addressing some of the problems that you're talking about. It's really bringing some of this application development lifecycle to a state where it's really easy to do on top of Kubernetes, which traditionally has been challenging for an average developer to get started with. This is something that, again, is already embedded in the platform, so things like the developer console that will ship now with 4.2, and that we continue to enhance in 4.3.
To add to that: you can start building applications using the console. You can deploy the application straight from the console without that much knowledge about the platform. Things like CodeReady Workspaces, so that the IDE is integrated with the API on the Kubernetes side; you can deploy from there, and you can run the IDE in the browser. Already, some of those things, I'd say, address some of the challenges that you are having.
I think the more important point is, from a business standpoint, in the enterprise, application development takes most of the cycle. Infrastructure, yes, is a foundational block, and I can accelerate ten or twenty percent of this, but business delivery is really dependent on the enterprise application releases. So what is it that we are talking about? I'm not just talking about the developer focus; I'm talking more from a business-focus standpoint. Is there any strategy to bring that into the conversation from an OpenShift standpoint?
Yeah, I think those are conversations that we seek to have with all our customers. I think when you come to conferences like this, OpenShift Commons gatherings or Summit, and you hear the customer talks, sure, they'll talk about OpenShift and some features and capabilities that they're excited about. But more than ever, what they talk about is the challenge of getting this working in their environment and showing business value to their customers.
I think the ExxonMobil team did an excellent job showing how they're providing real business value by enabling the collaboration between their data scientists and accelerating their ability to do machine learning and data science on the platform. So it's not ultimately about the platform, or even the developers who are building on it.
It's about the value that those things combined provide to the business. And can we do a better job of bringing that into the conversation? We're sure trying, and that's why we're bringing folks like Jabe and the panel this morning to Red Hat, to drive those non-product conversations. Because, ultimately, based on just what customers are talking about, the biggest challenges are the people, the process, and the cultural challenges.
Some comments, actually. I would say, starting out, I questioned some of the assertions that you made about how it's really about development. Because if you look at what the cost is of managing most of your portfolio, the cost of operations dwarfs the cost of development in all cases, because it's not free to run things; if you just run the clock, it's eventually going to cost more. That's the nature of a service.
The other thing I'd say, and this goes back to something that Clayton said on stage earlier, is that we're essentially, in some ways, part of your SRE, with the emphasis on the E. So you're going to have to bring people to fill in the other parts of that. But we're building this platform together with our customers so that they can provide this reliable underpinning, and I'd say nothing transforms the development possibilities for your organization more than improving your operations.
Just one last thought there: I agree, but the way to reduce time to market is to make doing the right thing the easy thing, so that you have these guardrails that allow your developers to make the right decisions in the business domain that they're working on, instead of trying to configure all the random things for their platform.
And I'm going to add that DevOps is ten years old this year, right? And we use the term in bad ways and good ways, but the point is, there are some things we've learned, some things about DevOps, metrics, and flow. And I think part of what the panel, and sort of spawning this team, is about is trying to figure out what the next ten years of DevOps look like. We've got all the patterns now, the books, the presentations. So what have we learned? What did we do right?
What did we do wrong? And I think that leads us into, you know, I like to say ideation is a big part of it. We're really good in DevOps at commit-to-production; we nailed that in ten years, we got that down. I'm not saying we're definitely going to do this, but that's part of what Jabe, myself, Kev, and Rob are about: trying to figure out what happened in DevOps in the last ten years, and what it looks like in the next ten years.
Wonderful program today, ladies and gentlemen; thank you very, very much. Scott Fulton with ZDNet. I sat in on a webinar a couple of months back that had to do with service mesh. It was not a Red Hat webinar, but there were several implementers in there, and one of the things they all agreed upon, which I thought was interesting, came up as they were implementing service mesh.
They got to the point where they all said, and they agreed on this, that they foresaw a time when, for the purposes of service discovery, they would no longer have to use DNS; they could completely rely on their service mesh to deliver service discovery. I'm wondering if you folks think that is a rational goal for service mesh, or whether you think there may be some danger in there.
I mean, there have been a couple of talks, even at KubeCon, about adding more service mesh capabilities at a lower level in Kubernetes, capabilities that exist in Istio today, because Kubernetes wasn't designed as "we're going to build the basic primitives in the lab and let everybody else solve the problems"; we want to work with the communities and find patterns that are generally applicable. I think the argument I would make would be about the DNS system, the Domain Name System ("the DNS" is always so awkward to say).
The DNS is so broadly applicable to so many environments that I question anyone who believes it will go away. Certainly a service mesh provides a ton of advantages when you think of it as an application interfacing layer. And if there's one trend I think we've seen, it's moving more and more of the things we talked about into guardrails. Service mesh is a great guardrail, because it allows you to use patterns and primitives that take common problems and make them the responsibility of mature operations.
Testing, one, two, three. The question was: can I describe the differences between Quay and Harbor? The number one difference right now is that Quay is a live service that serves over a billion images, has over a million repositories, and serves hundreds of thousands of requests a minute, and we know it works.
Now, since the community version of Quay will have code being merged in constantly, if you're running the community version you're not running a particular release; you'll also be testing it along with us. But if you're running Red Hat Quay, you have the benefit of that experience of running a registry at a scale that pretty much no one else, with the exception of the major cloud providers (Google, Amazon, etc.), has. And we know for a fact, as someone who's constantly on call for Quay.io, I can tell you, we know it works.
Otherwise I wouldn't sleep at all, and that's a huge benefit. On top of that, Quay is also the only registry product on the market, I mentioned this earlier, that has a guarantee of backwards compatibility. To this day we still support Docker API version 1. So if you took a Docker client, Docker 0.4, which came out in, what, 2013, and you were to try to push an image against a version of Quay with the right feature flag enabled, it would work right now. And then you could pull that image with the most modern version of Podman, and it would just work seamlessly. That's part of our commitment to the enterprise space that no other registry product has really committed to; whether or not they see the difficulty in doing so, I don't know. Beyond that:
We've demonstrated a consistent ability to innovate. We were, of course, the first private Docker registry available. We were the first one with security scanning; Clair is built by our team, so our integration is extremely efficient, as will be the new integration with the new version of Clair that's coming along. I don't want to speak ill of others, but some integrations have been less than well done.
In my experience, and moving forward, this level of innovation will continue for a long while, because we're continuing to be kind of a pioneer when it comes to the container registry space. I know that's a very high-level description; there are a lot of little subtle things too. Feature-wise we're kind of ahead of the curve in quite a few areas: sorting, support for squashed images, automated builds, which we had first, and our upcoming roadmap.
A big list, across a lot of different areas. Yeah, and it's also important to know that you can use whatever registry you want with OpenShift. If you want to use Harbor, or Docker Trusted Registry; some people pick Artifactory or Nexus because they're managing other types of artifacts, while we focus on containers. But if you want the best, most scalable, most feature-rich registry on the planet for containers, that's Quay, that's Red Hat Quay, so check it out.
On KubeVirt: KubeVirt currently is technology preview. We're not currently forecasting a GA for that. We are trying to build that roadmap based on customer feedback. So what we're really looking for is to establish what the right level of feature functionality is that we need to reach to support the customer workloads that people want to bring onto the platform. In the recent release we did add live migration, which will work with the upcoming release of OpenShift Container Storage.
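For reference, live migration in KubeVirt is itself driven through the API: you create a migration object naming the running VirtualMachineInstance. A minimal sketch, assuming a running VMI named `demo-vm` (a placeholder) and that the live-migration feature is enabled on the cluster:

```yaml
# Sketch only: asks KubeVirt to live-migrate the named VMI to another node.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
spec:
  vmiName: demo-vm   # the running VirtualMachineInstance to migrate
```

Shared storage (hence the tie-in with OpenShift Container Storage) is what allows the disk to be reachable from both the source and destination nodes during the migration.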
So we do have some of those traditional enterprise virtualization features in the platform. There are others that were touched on earlier, like snapshots and clones, that we'd like to bring in as well. But really, again, I can only encourage people to try the tech preview. We'd love to get feedback; we'd love to have a conversation about what features you would like to see in it for us to make that graduation to general availability.
Windows containers is on a trajectory probably around 4.4, like the CSI work. We're hoping this week, the end of this week or the beginning of next week, we'll open up our dev preview program, where Windows containers will run on the 4.x platform. Right now we have them running on the 3.x platform, and that will simplify people's usage and allow more people to try it out more rapidly.
And I'd say this is one of the most common questions we get as product managers: when is X, Y, or Z going to become generally available? I think everybody knows this, but that's a big deal for us. When we say something is generally available, we're saying you can run it in your mission-critical production systems, and we will support it not just now but for the next several years; we'll patch it and maintain it. In the case of KubeVirt, there's some maturity that we still need to see.
VM workloads are hard, right? And we're looking for you to tell us at which point you feel it's going to provide you value in production environments, even if it's not doing everything that a vSphere would do, or whatever. Same thing with Windows containers.
Frankly,
you
know
the
the
the
maturity
of
the
Windows
OS
container
itself
is
still
something
that's
you
know,
I
think
maturing
and
then
and
then
certainly
we
still
have
work
to
do
in
the
Windows
kubernetes
sig
to
to
make
ourselves
comfortable
that
it's
something
that
we're
gonna
say.
Yes,
this
is
ready
for
full-on
production
use.
We
do
expect
them
both
to
be
generally
available
in
the
upcoming
year
in
the
upcoming
calendar
year.