AMA Panel: Upstream Project Leads, Speakers, and Red Hat Data Scientists
Filmed on October 28th 2019 in San Francisco.
D
Yeah, so Knative, for those of you who don't know it, is part of the community effort to do serverless, and Red Hat, Google, and others have been some of the foundational members of Knative. Serverless is a core part of the OpenShift platform, and we have released this as a tech preview in 4.2, which is the most recent release, so we are very excited about that. There's also talk in the community about what to do about the foundation itself, you know, should it belong to the CNCF, so those discussions are ongoing. But to answer in short, Knative is really the foundational platform for our serverless ecosystem, and we're going to be fully behind it. As I said, it's in tech preview, and we're looking forward to GA'ing it in the upcoming releases.
E
From a model serving perspective, we, Seldon, have been part of a joint effort, a project called KFServing, which also includes Google, Microsoft, Bloomberg, and IBM; Red Hat may be involved to some degree as well. Basically, it's a model serving platform built on Knative. We're really excited about it from an autoscaling perspective, more efficient scaling and scale to zero. It's currently in an early 0.1 release, and it's an independent project that is part of the Kubeflow project.
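To make the KFServing idea concrete, a serving deployment is declared as a single Kubernetes resource, and Knative handles the autoscaling underneath. The sketch below is illustrative only: it follows the shape of the early v1alpha2 samples, and the kind, fields, and storage URI have changed across releases, so treat the names as assumptions rather than a stable API.

```yaml
# Illustrative sketch of an early (v1alpha2-era) KFServing resource.
# Field names and the API group have evolved in later releases.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: sklearn-iris          # hypothetical service name
spec:
  default:
    predictor:
      sklearn:
        # Model artifacts are pulled from object storage; the
        # platform wires up the server and scale-to-zero for you.
        storageUri: gs://kfserving-samples/models/sklearn/iris
```

The point of the abstraction is that the user declares a model location, and the scaling behavior the panel mentions (scale to zero, request-driven scale-up) comes from the Knative layer rather than from anything the data scientist writes.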
F
Given my background in data and security, and customers threatening to sue, I would like to see more with OperatorHub in terms of how we secure our data. We talk a lot about AI, and everyone understands that without data you don't have AI. So, more of a focus on things that will help us figure out that whole security aspect of it, and just make it as easy as it is to deploy a Kafka or a Seldon or a Prometheus.
D
Just to add to it, there's another angle if you think about the operator maturity model. We have the various levels, from simple install and configure up to what we call full lifecycle, and then there is a level beyond that. The question is, we talked about AIOps and intelligent operators, and that's some of the cutting edge that is still missing, but I'm sure people are working on it; I know of people working on it.
G
Actually, following up on that question about operators: one of the things I heard a lot about today was the complexity that a lot of these solutions require. There's this idea of a meta operator, an operator that deploys other components that are themselves operators. But today, as far as I understand, there's no real operator-to-operator awareness. Is that something you think will become part of, maybe, an intelligence layer within the OpenShift platform or within the Kubernetes layer, to have that operator-to-operator communication?
H
Actually
done
that
before
so
with
an
open
shift
in
the
operator,
lifecycle
management,
you
can
actually
create
subscriptions
and
I
know,
that's
a
really
confusing
word
and
completely
overloaded,
especially
by
Red
Hat.
But
the
idea
is
it's
bad
anyways,
so
you
can
create
basically
a
subscription
on
these
servers
that
creates
those
dependencies
to
then
install
what
you
need.
H
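The subscription mechanism described here is a real OLM resource: you subscribe a namespace to a package and channel from a catalog, and OLM resolves and installs the operator plus any operators it declares as dependencies. A minimal sketch, with placeholder names for the package and catalog:

```yaml
# Minimal OLM Subscription sketch. The package and catalog names
# below are placeholders; real values come from your cluster's
# catalog sources (e.g. via OperatorHub).
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable                      # update channel to track
  name: my-operator                    # package name in the catalog
  source: community-operators          # CatalogSource to resolve from
  sourceNamespace: openshift-marketplace
```

When this is applied, OLM's resolver walks the dependency requirements declared in the operator's ClusterServiceVersion and installs whatever required operators are not already present, which is the "operator knows its dependency" behavior mentioned later in the discussion.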
G
There's also this context of discovery, because, for instance, I need these operators to exist, so I'm going to deploy them. But what if those subscriptions already exist? How do I share keys or have some sort of operational knowledge between vendors? I mean, I'm from an ISV with a monitoring solution, and there are lots of database ISVs that have best practices about their databases. It'd be great.
I
There are a whole bunch of them, and we're exactly at that point now. We just had the problem: we want to install two of them, and of course they all install the same components, Kafka and everything. I think it's a good time right now to actually bring that up on the community side and define the best practice. So we should follow up and see how we loop you into that discussion.
D
I'll,
give
you
easy
one
that
this
one
is
more
I
mean,
so
if
also
if
operator
is
dependent
or
rather
operator
so
far
with
with
four
or
two,
it
knows
the
dependency
that
there
is
a
dependence
you
know
in
it
and
it
deploys
that
operator,
which
was
not
there
previously.
So
that's
just
another
way
of
operator
kind
of
being
cognizant
of
another
operator
being
there
previously
you'd
have
to
actually
do
it
yourself,
I
mean.
I
You know, I think the vision we want, and in many ways the Open Data Hub is how we are trying to get there, is having a reference architecture that can broker the relationships between multiple vendors. Our vision for the Open Data Hub is not that it becomes a vendor for an AI platform. What we want is that OpenShift is a platform on which customers can deploy flexible AI platforms and get an experience that's integrated across the whole ecosystem. At the end, right now you have the hyperscalers with their integration capabilities and centralization, which is just a very heavy model, and there's not much room for the rest of the software industry. But they set the standard of what customers expect from an experience and integration point of view, and with operators and Kubernetes we have the capability to counter that with a distributed integration model.
J
My question is about whether we are doing AI on online or offline data. How real-time is it in OpenShift per se? And also, for data we are collecting from, let's say, IoT, from various devices, where we are collating and putting the data in OpenShift and then doing AI/ML on top of that?
E
Okay,
yeah
so
first
one
before
the
IOT,
so
I
oto
on
the
IOT.
That's
that's
a
really
big
challenge
really,
ultimately,
and
one
of
the
solutions
to
this
will
be
to
compute
at
the
edge-
and
you
know,
use
the
kind
of
federated
learning
techniques
that
we
discussed
earlier
and
what
was
the
first
question
again,
so
you
missed
that
yeah
most
most
data,
most
models
are
batch
and
they're
trained
and
then
sort
of
released
and
then
and
then
retrained
and
used.
E
K
I want to add a little bit to this as well. When it comes to the IoT side, my argument on the IoT space and the edge space is that the infrastructure is very important; that's why everyone is here. The infrastructure when it comes to the edge is a whole new level, because we're now talking about infrastructure that lives in an edge data center or on an edge device. How do you connect to it? How do you make sure that what you're connecting to is what it says it is? How do you set up the policy behind it? And that's before you even start going after the technical sides of it, and before we even get to proper IoT, where you can literally have devices move and be anywhere. There are also issues with the radios that we have today. If you look at a standard 4G deployment, the density you're going to have per square mile is, I think, around 4,000 devices, which is why, when you go to a conference or a concert, your phone stops working. When you bring that to the 5G space, it's, I think, three or four million devices per square mile that we're able to have from a density perspective. Eventually we're going to have all of the ethical issues and the various other things that we spoke about piling on as well, as we start to develop these. So we have a whole lot of work that needs to be done there, and a lot of this stuff hasn't been defined yet, so there's a huge gap there.
I
Of course, people are doing data collection at the edge today, but you will progressively see this open source work happening to figure out how to do filtering and how to push decisions out to the edge. Ultimately, look at just OpenShift metrics itself.
I
We collect a lot of metrics in OpenShift if you don't opt out, and I recommend you don't, because it's actually really beneficial for you: we can tell you whether your cluster is broken or not, and we can predict certain things based on what we see. Red Hat's business is generating knowledge about open source software.
I
That's
a
cop,
a
customers
pay
us
because
we
know
things
or
we
can
figure
them
out,
and
then
we
can
fix
them
right
and
often
it
is
because
we
have
many
other
customers
and
we
probably
have
seen
the
issues
that
you
are
running
into
before
or
you
know
we
can
we
can
it's
like
a
herd,
immunity
thing
right.
If
you
send
us
your
data,
your
operational
metrics,
we
can
probably
identify
patterns
based
on
what
we
have
seen
somewhere
else.
The
problem
is
that
doesn't
scale.
I
We
can't
collect
all
the
data,
so
we'll
get
to
the
point
that
we
actually
have
to
push
decisions
out
to
individual
nodes
right
when
this
goes
deep.
All
right,
you
can
think
about
like
simple,
like
optimizations,
in
the
like
just
small
data
optimizations
in
the
kernel
or
in
the
tool
chain
in
Linux
right,
where
you
replace
static
heuristics
with
machine
learning.
That
totally
makes
sense
right.
I
You
can't
like
wait
like
you,
can't
wait
for
the
cloud
to
take
a
decision,
there's
not
only
when
I'm
in
a
self-driving
car
I
don't
want
to
wait
for
the
cloud
to
stop.
I,
don't
want
that
in
my
IT
and
we
can't
we
don't
have
the
bandwidth
to
send
all
the
data
and
even
if
we
could
send
it
from
the
bandwidth,
we
don't
want
to
store
it
all
right.
I
So
you
need
to
figure
out
how
to
take
push
decisions
out
and
basically
identifies
the
interesting
data
that
you
want
to
learn
from
right
and
then
figure
out
how
you
either
federates
learning
or
you
send
enough
data
up
to
be
useful,
but
not
too
much.
You
know
not
more
than
you
can
handle.
So
that's
I
think
it's
a
general
problem
that
everyone
has
and
that
being
there
are
solutions
today
and
I.
Think
you're
gonna
see
a
lot
of
innovation
in
that
space
and
open
source
over
the
next
couple
of
years.
Yeah.
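The federated learning idea the panel keeps returning to can be sketched in a few lines: each edge node trains on its own private data and sends only its model parameters and sample count upstream, and the server combines them with a weighted average. This is a minimal FedAvg-style sketch; the one-parameter linear model and the node data are made up for illustration and are not any real edge API.

```python
# Minimal sketch of federated averaging: edge nodes train locally and
# send only (weights, sample_count); the server never sees raw data.
# Model, data, and names here are illustrative assumptions.

def local_update(w, data, lr=0.1):
    """One local pass of gradient descent for a 1-D model y = w * x."""
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error
        w -= lr * grad
    return w

def federated_average(updates):
    """Average node models, weighted by how much data each trained on."""
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three edge nodes, each holding private samples drawn from y = 2 * x.
nodes = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)],
]

w_global = 0.0
for _ in range(50):               # communication rounds
    updates = [(local_update(w_global, data), len(data)) for data in nodes]
    w_global = federated_average(updates)

print(round(w_global, 2))         # converges toward 2.0
```

The bandwidth trade-off discussed above is visible in the sketch: per round, each node ships one number instead of its dataset, which is exactly the "send enough to be useful, but not more than you can handle" compromise.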
D
I agree, but having said that, there are edge use cases that people are using today already, so I don't want us to leave with the impression that this is all future. For example, I know of use cases where, at an airport, you are getting a camera feed from the gates, and there are companies out there that can analyze that camera feed using a model which has been trained somewhere else, maybe on Azure or maybe on AWS. That model is then deployed locally at the airport so that it can do real-time analytics of that image. We actually know of people who are using OpenShift to try to solve that problem. So it depends on how you look at edge.
L
One thing I wanted to add: yeah, that's a very good question, and as a customer you'd like to see reference architectures and solutions, what does the solution look like? So I think, as a vendor community and a partner community, we need to go in the direction of: can I provide you prescriptive guidance and architectures? Like, hey, this is how you do this kind of a solution, from the cloud to the data center to the edge, and the use cases, and so on.
B
So say I'm a customer and I have a specific use case. How do I plug into the whole Open Data Hub structure so that I can get my use case looked at?
F
We're actually trying to make that process simpler. I think the best thing is that we work a lot with the field, so really just get in touch with the field, get in touch with Tushar. He drives a lot of the use cases that we see from an AI/ML perspective, and he's probably a better person to take this question, but we spend a lot of time understanding the use cases from customers and really driving home what they are trying to do, what value they're trying to bring to their customers, and why it's relevant. But then we also apply those internally. You heard a lot about the Open Data Hub and how we use it internally at Red Hat. We have our own set of challenges and our own set of use cases that we bring to the table, and we bring them into an open environment that anyone can join. The community will be starting to have community meetings where people can just chime in and share their use cases with the people who are actually contributing to Open Data Hub, and make it more of a conversational piece: okay, what are you trying to solve? Is there a reference architecture we can provide? Is there more information about tooling that can be added to help out with the use case? And really just drive it from the top down.
D
So I was talking to our friends from Microsoft, I think at a coffee break or something, and they are also doing these kinds of reference architectures, so we were talking about them participating in that SIG, or Open Data Hub, or whatever; there are some plans to. But to answer Julio's question: I think there is more and more interest, and obviously at these Commons gatherings one of the things that Diane wants is more and more customers to come and talk about their use cases, because that is real validation; those are real-world problems. That's where we get these reference architectures.
K
Yeah, so when it comes to these SIGs, these things work the Kubernetes way. I spent a lot of time working with the CNCF and Linux Foundation Networking and similar, and a significant portion of the important decisions occur on the SIG calls and on the SIG mailing lists. So when they're saying join a SIG, these are some of the most effective ways to get involved in order to make a change. So please, whenever there's something that's important to you and you see that there's a SIG attached to it, whether it's a Red Hat organized one or a community one through the CNCF or a Kubernetes SIG, please get involved. Please say what your use cases are and where the gaps are, because they're not going to know what the gaps are unless you tell them.
K
Better yet, if you have the resources to join in and actually send some contributions, they're always hugely appreciated. There's a long tail of things that need to get fixed, and the more people who fix that long tail, the more the core engineers can get done, and fix your critical problems.
F
Another
good
question:
we
actually
have
I
guess
you
know
Ryan
King,
he
I
don't,
if
he's
still
in
the
audience,
but
he
helps
out
a
lot
with
the
the
partner
environment
and
just
you
know,
figuring
out.
How
do
we
have
a
relationship
and
and
how
do
we
you
know
kind
of
grow
together
but
I
think
reaching
it
I
guess?
What's
the
best
Channel
Daniel.
I
We can actually actively help, and that's also because, for us, the space is very customer driven. So if you're an end user and you are trying to do something, and you have specific tools or ISVs you want to use, you can also talk to Red Hat through your existing Red Hat channel, whether it's through the SIG or the OpenShift Commons SIG, and you help us prioritize which problems we solve and which partners, for example, we pull in.
E
The
process
is
obviously
very
efficient,
as
CEO
of
a
you
know,
scale
up
company
and
based
in
London
I
had
very
little
involvement
as
CEO,
but
what
I
did
see
is
that
members
of
the
small
number
of
team
members
so
I
think
it
was
one
engineer
and
one
of
our
business
development
people
coordinated
it
and-
and
there
was
a
very
efficient
certification
process
if
you
mentioned
that.
But
you
know
that
was
step
one
and
then
there
was
the
open
data
hub
opportunity
which
emerged
from
that.
L
Yeah
one
thing
I
want
to
add
is
say
yeah
if
you're
into
IT
right
so
be
friends
with
the
data
scientists
and
the
data
engineers
in
your
organization,
because
they
are
looking
at
all
kinds
of
tools
right
and
there.
At
the
end
of
the
day,
you
may
be
called
on
to
support
all
those
tools
on
your
infrastructure.
L
So
from
that
perspective,
if
you're
able
to
get
ahead
of
the
curve
and
find
out
all
the
tools
that
there
are
the
favorites
of
your
customers
like
as
in
the
scientists
area
engineers,
so
that
way
we
can
prioritize
those
as
well
in
terms
of
the
operators
integrations
by
county
or
the
sake
and
the
special
working
groups.
All.