So I figured that I probably have the easier job here today, because I'm going to talk about the very nice features that are coming to OpenShift. The other speakers have helped me a lot by already covering some of them, which means I have less to say about those, and some of the features I'll only touch on briefly also have a dedicated session by a colleague of mine later on.
The objective of this session is to introduce you to a few new things that we are developing, and thinking about developing, in OpenShift over the next one, two, or three releases. We try to think ahead at most about a year, because we know that things change a lot. So that's what we do. My name is Diogenes; that's my Twitter handle.
I'm a product manager at Red Hat, responsible for OpenShift, and my main areas of responsibility in the product are essentially the things that run on the platform, let's say the application-services-related capabilities of OpenShift, which I'll talk a little bit about as well. And we're very happy about this: we all know that Red Hat has decided to acquire CoreOS.
Before the deal is actually finished and the transaction is completed, we cannot make any comments about it other than what's already stated in our blog, so maybe in a few weeks or so we'll be able to state our plans. Our objective is the technology, but we need the transaction to be completed before we can make any claims, so anything that you hear from anyone is probably not true until the transaction is finished. But again, it is a definitive agreement to acquire.
We can say that we're very, very happy with this decision, and to have awesome engineers who are going to continue to work together on technology that they really love. Now let's get into the interesting things: what we're doing in OpenShift to make OpenShift even more awesome. I know that probably a lot of you here are OpenShift users, OpenShift developers, OpenShift contributors.
So let's talk a little bit about the things that we're doing. You know that OpenShift has Kubernetes underneath, so I think it's valid to talk a little bit about what is in Kubernetes 1.9 that is going to come to OpenShift. Kubernetes 1.9 was really billed as a stabilization release, but there are lots of capabilities coming in there as well.
From the perspective of the community, it was seen as: let's get lots of bug fixes in, let's get some capabilities to a stable stage, let's migrate some capabilities from alpha to beta or from beta to GA. That was the main target for the 1.9 release. The workloads API is probably the big one, with capabilities moving from beta to GA, and I'm especially happy to see StatefulSet there. How many of you know what a StatefulSet is? Thank you.
We have about 25% of the room, so I think it's worth explaining a little bit. When Kubernetes was first created, a little over three years ago, the objective was to address mostly cloud-native workloads, mostly ephemeral workloads, things that can go away. You have a new container; if that container is bad, you kill that container and you wait for a new one to come up. You assume that the container is probably a cloud-native application that does not necessarily care much about its hostname,
A
Doesn't
care
much
about
his
identity
right.
So
the
objective
of
stateful
sets
is
to
assign
an
identity
to
a
running
container.
That
is
going
to
be
maintained
across
lifecycle
of
that
across
that
the
deployment
lifecycle
of
that
container.
So
if,
for
some
reason
that
container,
let's
say
dies
when
it
comes
back
up,
it
comes
back
up
with
the
exact
same
identity.
So
a
use
case
that
I
think
loves
that
databases,
databases,
love
their
hostname.
Databases,
love
their
IP.
Databases
love
their
storage.
They
are
attached
to
it.
So
that
is
a
very
important
feature.
It's coming to GA in Kubernetes 1.9, and to OpenShift as well. Most of the use cases that have a very warm relationship with an identity, with where they are running, can be run more successfully with StatefulSets. Another thing about StatefulSets is that they define an order for when things need to come up.
DaemonSets you probably know: a DaemonSet is a container that is going to run on every node. If you need something to run on every node, for example the log scrapers that have to run everywhere, that's a DaemonSet, and they are very popular; we already know them. Then there is Windows support, with lots of contribution from Microsoft; Red Hat is also involved.
Windows support is in beta for Kubernetes, and this is good because it allows you to have hybrid clusters: clusters where some nodes run Windows containers and some nodes run Linux containers. Remember, a Windows container can only run on Windows, and a Linux container can only run on Linux. So when we say Kubernetes support for Windows, that means you're going to have Windows-packaged containers.
You may know a project called KubeVirt, which is targeted at running virtual machines inside Kubernetes. These are all capabilities that we expect from virtualization solutions: being able to, let's say, pin a specific CPU, or to use the device plugins on the hosts. So that's an important capability as well, and there is lots more in the queue around this. You probably know that these features are going to come in OpenShift 3.9.
Now, from a community organizational perspective: the Kubernetes community is growing, and in order for it to grow in a healthy state, some processes have to be established, for example how you submit a proposal for a new capability. That has been established as a community norm, so if you have a new idea that you want implemented in Kubernetes, there is somewhat of a process that you have to follow for that idea. And we continue to be involved in the creation of more SIGs.
If there is one slide to take away, it would be this one. These are the things that we at Red Hat are investing in across the container ecosystem. This is not necessarily OpenShift, not only OpenShift; this is where Red Hat is putting its efforts, from a marketing and an engineering perspective, to make OpenShift most successful, and that means investments not only in OpenShift. I'm not going to go through all of them.
A
I'm
just
gonna
cover
a
few
of
them
that
are
especially
very
dear
to
my
heart,
which
is
cloud
native
work
runtimes
for
for
open
shift,
so
Red
Hat
is
providing
runtimes
to
run
on
applications
that
understand
kubernetes
environments.
So
you
want
to
configure
a
java
application
using
config
max
for
configure
Maps.
For
example,
we
have
libraries
that
will
understand
that
environment
also
service
cataloging
brokers,
so
for
more
I'm,
not
sure
he's
here
is
it
gonna,
be
here
soon
you're
gonna
have
session
dedicated
on
Khan
service
broker
and
catalog
not
going
to
address
that
again.
There are Windows containers, a data platform based on Spark, and another very nice technology that I'll say a little bit more about, called Istio. How many of you are following Istio, or service mesh? Okay, pretty good. It's good that you're following it, because we have one of the Istio community members here, who happens to be a Red Hat employee and who is going to be answering questions on Istio on the panel very soon, so thanks for coming. I think the big item for us was already mentioned,
and it is an item here called cluster operations. We at Red Hat manage a lot of clusters ourselves, let's say close to a hundred Kubernetes and OpenShift clusters, which we manage for various purposes. We have the OpenShift Dedicated business, where customers can acquire a managed cluster from us, and we have OpenShift Online, which is also many, many clusters, and it's in our interest to make that operation even more automatable. So there is a technology that we are going to invest in over the next year.
It's called the cluster operator, and it's going to follow the Kubernetes model. The objective of the cluster operator is that you define a desired state for your cluster; in the same way that in Kubernetes you define a desired state for your application and Kubernetes maintains that state, it's going to be the same for the cluster operator. You describe your cluster, and the cluster operator will always keep the cluster in that state.
It will also help you with automated upgrades, automated downgrades, and automated addition or removal of nodes. It's going to be a very big project for us, again to allow our customers to have a more automated operation of clusters and to allow Red Hat itself to become more capable of automating multiple clusters. Of course it will be open source, and of course it will be available to you all. Underneath, it will continue to use the Ansible playbooks that we already have, interacting with those playbooks.
To give a little more detail: one of the things that we do, of course, is focus on stability. I think that just between the Kubernetes release and our release we had to fix more than, I think, 180 bugs in Kubernetes that we thought were critical. Our work in the community is to chop wood and carry water: it's about fixing bugs, making Kubernetes stable, making Kubernetes consumable in the enterprise.
That is a lot of the work that we do. For 3.7, which has launched already, we also moved features to a more stable stage. As we learn a lot from running online clusters, we discovered things about some of the API calls and the data coming back from the APIs.
A
They
tend
to
be
a
very
let's
say,
large
payload,
and
we
learned
a
lot
I
think
it's
right
to
say
that
we
are
probably
running
one
of
the
most
diverse
lead
ants
clusters
of
OpenShift
out
there,
because
we
have
openshift
online
and
you
can
have
all
sorts
of
workloads
there
from
from
different
runtimes
from
different
types
of
application
from
Bitcoin
miners,
for
example,
they
are
using
it
which
we
try
to
shut
down.
We
do
shut
down
many
of
them
every
single
day.
You know, that's what happens when you put free compute capacity on the internet. But for the OpenShift community, that is the best thing that could happen, because we learn so much from that experience of eating our own dog food, of having OpenShift running there as part of the OpenShift development process. Things don't go into the product if they have not been baked in Online first.
We have fixed lots of security bugs, because we don't want people getting into an OpenShift Online node, stealing your AWS credentials, and going crazy with them. I would say that is a safeguard: the fact that we run it first, that we're willing to shoot ourselves in the foot before you do. And for OpenShift Pro, which is the paid offering for OpenShift Online,
we again made changes to make pulling content from the API server happen in a more organized way, which allowed us to scale. Literally hundreds of thousands of containers generate a lot of metadata, and we need to access that metadata to make smart decisions, for example about where to run those containers. So that is one of them; in that sense, the diversity and density of the cluster made us deliberate about this.
We all know that Prometheus is popular, so we're bringing Prometheus to OpenShift. Before, we had our focus on a technology called Hawkular, but I think we've been pretty good at joining successful communities, so we decided to join this very successful community called Prometheus as well. Prometheus is going to become the supported monitoring technology for OpenShift. We're already shipping it with the product, at a tech preview stage, and there is a two-step path that we chose for this.
First, we want to allow the cluster itself to be monitored using Prometheus, so that a cluster operator or a cluster administrator can see the state of a cluster with Prometheus. The next step will be for individual applications to use Prometheus to monitor the applications themselves. Customers and users have already been doing this; we want to provide a supportable and sustainable path for them to continue to do it. And with the intent to acquire CoreOS, we're also going to have some very bright engineers in this space.
Auto-scaling has been in OpenShift for quite some time; I think it was 1.1 or 1.2 when it first got in, and there have been lots of changes since. With the current version that we have (and this is one of the SIGs that we lead, SIG Autoscaling, led by a colleague of mine back in Boston), we have a custom metrics API to do custom-metrics-based auto-scaling, which is what makes sense.
You know, CPU works for maybe 80 to 90 percent of the use cases, but sometimes your application might require you to auto-scale based on a business-related metric: for example an SLA, an HTTP transaction response time, or other types of metrics that will trigger auto-scaling of your pods. This is available in Kubernetes already, and it's coming to OpenShift 3.9 through the HPA custom autoscaler.
Flex volumes, again, allow us to run other types of workloads on OpenShift as well. On the network side there is continued work on IPv6; it's interesting, some industries, especially the telco industry, require IPv6 a lot. I would almost say it's a showstopper for telcos, so that's why we invest there as well. And there is continued work on network policy.
How many of you here know what network policy is? Maybe five percent, so I think it's worth a little explanation. Network policy allows you to have fine-grained control over the network communications that happen inside your cluster. If you have two projects, two Kubernetes namespaces, you can say: I want a pod in one project to interface with a pod in another project, and only that, with no other network connection between these two pods.
There are other ways you can do that as well, but this is, let's say, a network-level protection, with granularity and control over what you can do with it. It was a great contribution from Tigera; they were the ones that first came up with this. It's just powerful to see the community helping everybody become more successful; I think they're a CNCF member, if I'm not mistaken. So you see that happening. Now, storage.
This has been long awaited by some of our customers. A request that I got in the early days was: I have my database, and I want my database to run on a node that has an SSD attached, and I want the database to keep running on that node forever, because that's what I want, but I want to run it in containers. So you kind of have storage-based pod scheduling.
You know: I want my application to land on nodes that have this specific type of storage, and that storage is locally available storage, because the application requires very low latency and fast storage. This is one of the capabilities that we are working on; it will mature soon, but it is still alpha. Again, it is the ability to do local-storage-based scheduling of applications.
Paul Morie is going to talk a little bit about this; Paul is here, he is the lead for the Service Catalog work in Kubernetes, and I know he has a very nice demo to show you. But first: how many of you have actually seen this, the container catalog or service broker? Good. Red Hat's objective with this is that we want your application catalog, of things that can run both inside and outside of the platform, to be consumable from the OpenShift catalog.
We have developed a broker API that allows you to publish applications, which again can run either inside or outside of OpenShift, and you can trigger the execution of these applications from the inside. If you saw the announcement we made when we launched 3.7 at the beginning of December, we announced the AWS service broker; actually, AWS announced an AWS service broker for OpenShift, and that is a way for you to consume AWS services from an OpenShift cluster. It doesn't matter where that OpenShift cluster is; say you have OpenShift running on premises,
A
You
want
access
to
an
RDS
database
or
you
want
access
to,
let's
say
an
s3
bucket
or
SNS
or
sqs.
You
can
consume
that
from
your
local
IP
openshift
cluster.
Of
course
the
service
is
always
running
on
the
cloud,
but
the
negotiation
and
creation
of
the
service
is
done
by
this
via
the
service
broker
and
interfacing
with,
in
that
case,
cloud
formations
templates
on
AWS,
and
you
don't
necessarily
have
to
to
to
to
to
know
this
right.
You just go to your OpenShift cluster and say: I want SNS, or I want SQS, for example. You can pre-configure AWS credentials, or you can provide your AWS credentials right at that moment, and then you have a representation of a queue or a topic or a database inside your local OpenShift cluster that your application can bind to, so that those SNS or SQS credentials and connection information are shared with your application. So this is powerful, and here is where I see this going.
We all know that organizations have policies that they need to enforce, and so far the services that are published in the catalog are available to anyone. But I know, having done this, that you don't necessarily want a production database exposed in a development environment. So the work we're doing upstream now is to create governance around the services that are exposed and available in the catalog. You would say: this user, or this namespace, can see the services related to production.
Exceptions apply and things might change, but we've been pretty stable across releases. This is on the governance side: if we assume that this catalog is going to become the enterprise catalog, we're automatically saying that we're going to have hundreds or thousands of services published in the catalog, used by different groups within the company. So we want governance around that.
On the automation side, we want to have the same easy experience that we had even in OpenShift 2: you could just take your application and say "connect to this database", and that triggered everything; you didn't need to do anything else, and all the credentials and connection information were shared. This will be, let's say, the first step towards that automation. To evolve this use case a little bit: say you have a Java application that needs to connect to an Oracle database.
What does your Java application need? A JDBC driver for Oracle. You can include that as part of your build process, in your image, but we're also going to invest in creating binding-based build and deployment triggers, so that we can notify your build process that a binding requires something that you might not necessarily have. If we're going to bind to an application that requires a specific library, we want the build process to be notified.
That means that if you're doing your builds in OpenShift, your build process will see, maybe, an image source or a mounted volume that contains the dependencies that the binding needs. This is the level of automation we want to reach: the platform knows a lot, and you shouldn't need to tell the platform things it already knows. That's where we're going to go. Next, install and upgrade.
In order to do that, it's not only a matter of automating; it's also a matter of creating artifacts that allow for easy deployment of nodes. So we're going to be creating golden images for OpenShift nodes, for example, so that you don't need to install RHEL, then install OpenShift, and then do something else: you just have this very nice image there.
Remember that we need this ourselves for our own Dedicated and Online businesses, and everything that we do there is again going to be available to anyone at the exact same time; it's going to be developed, of course, in the open. So again, the objective is to facilitate how you stand up OpenShift clusters, how you destroy them, how you add nodes, how you remove nodes, with golden images of OpenShift nodes. We also continue to work on the CloudForms tool, or ManageIQ, the tool that we use to manage OpenShift itself.
The product is called CloudForms, and the upstream open source project is named ManageIQ. We're also working on allowing you to have reports that show the consumption and usage of a specific image. So if you need, for example, to show your organization, or to charge a group in your organization, based on the usage of a specific image that has a licensed product in it, so that whoever is using that image, or their cost center, pays for it, that is going to be available in CloudForms.
Yes, it's going to be Origin only, and from the product perspective we're going to combine two releases. Now, saying we're skipping one sounds like a bad thing, so let's say we're going to combine the 3.8 and 3.9 releases. The good thing about this is that sometimes people complain that Red Hat is behind, that OpenShift is behind the Kubernetes releases. The day we launch OpenShift 3.9, Kubernetes 1.9 will be right there. So, okay, you were saying we're behind; not anymore.
There will always be servers; they're just not yours. We understand that there are many ways you can define an application today in Kubernetes. For example, you can use Helm charts to define an application; you can use Kompose if you're bringing things in from Docker; you can use OpenShift templates. I think a colleague actually did the research: there are 18 ways to define an application inside OpenShift or Kubernetes. And although each of these ways exists for a reason, we want to try, as with most standards,
to have Red Hat not reinventing the wheel, but really working with the community. So far there have been a lot of good discussions around the next version of Helm, Helm 3, so I would say we're trending towards Helm 3 at the moment, even though it only exists on paper and not yet as technology. But that's what we've been thinking about so far. And then there is service mesh: you'll have the opportunity to ask Christian Posta about service mesh,
and about microservices; he gets very nice questions about service mesh as well. The objective of a service mesh is to transfer to the platform capabilities that were once available in the language platform: for example circuit breaking, fault injection, A/B routing, or any specialized routing.
That means that the proxy knows where things are going and where things are coming from, and there is a control plane on top of that where you are able to say: okay, pod A can talk to pod B, and if someone else wants to talk, that's not allowed. Or, if I want to do, let's say, circuit breaking: okay, I tried contacting that application three times and I could not do it.
We are already investing in this, and the intention is to show this running in production at Red Hat Summit, which is in about 13 and a half weeks. The development work left is pretty tough, but we will get there. It will be available on OpenShift, as something that you install on top of OpenShift, and on Kubernetes as well; but the majority of our work
so far has been making sure that the capabilities and the components in Istio do not require you to escalate privileges in the container. We try to have a security-first mentality; it often involves hard work, but that's what we want to do. If you were to try Istio today, some of the capabilities require you to assign elevated privileges to containers, and we don't think that's a good thing.
We can't do that on OpenShift Online, because remember, we run it ourselves; we couldn't just let containers have root access on the host or the node. So these are the things that we're going to fix first, and then we'll continue evolving the others. If any of you here is interested in being part of an early adopter program for Istio on OpenShift, please come talk to me directly.
Cluster federation: is there any work on cluster federation? Yes. The cluster federation project took, let's say, a different strategy: they are working today on something smaller, with a smaller scope compared to federation, called the cluster registry, which is to first have a registry of clusters and then work on allowing resources to be distributed and deployed to multiple clusters. Let's say the federation team thought the original plan was too much of an undertaking, and they said: let's take a step back and have a more focused approach.