Description
We'll be live streaming the Best of 2021's OpenShift Commons Gathering end user talks from KubeCon EU, Red Hat Summit, and the Gathering on Data Science. Running commentary will be provided by Red Hat's Diane Mueller and Chris Short, along with Q&A from the Red Hatters who help these end users with their deployments. We'll have a few other special end user guests along the way, adding in their insights and comments as well.
B
Well, today it's interesting: we're about halfway through the year 2021, and it has been a very interesting year, probably a very interesting year and a half, and we've been hearing from and holding lots of OpenShift Commons Gatherings. We've had three so far this year: one on data science, one at KubeCon, and one just recently at Summit, part two, and we had some amazing end user talks halfway through the year.
B
What we thought we would do is bring you some of the best of those to tempt you to watch the rest of them, tell you a little bit about the role of end users in the Commons and what the Commons is, and then, Netflix binge style, watch some of these together. I just think some of these were really amazing stories: journeys to OpenShift, new workloads that I hadn't seen. You'll hear some really interesting talks.
B
So I just wanted to really emphasize that these come from across multiple communities of people from different market sectors, whether they're telcos or banks. The first one you'll hear is from the Department of Agriculture, Food and the Marine in Ireland.
B
That's the Version 1 talk, but there are lots of different folks, and all of them are helping us make connections with each other and share stories that help Red Hat engineers and upstream project leads better understand what they need to put into the products and projects they're creating. Collaboration is happening all over the place.
B
That's really one of the things about this community: they've been really amazing at connecting across communities, and some of these stories really showcase that. So today we thought we'd grab a few of them, play them for you, play them with you, and take your questions in the chat wherever you are. Basically, what we're trying to do, as always, is promote peer-to-peer interactions, so here are some of your peers sharing their end user stories, their production use cases, and their workloads.
B
They'll talk about what they've integrated into their stacks, because it's not all about OpenShift. We do this in Commons briefings and at gatherings, you do this in working groups, SIGs, and CNCF TAGs, and we're always talking on Slack, but here's an opportunity to kind of sit back, relax, enjoy the show, and hear some of these stories, because what it all comes down to is that the Commons is really for end users and by end users.
B
If you haven't heard me rant about this before: a lot of the model of open source is changing quite a bit. There are a huge number of open source projects that our end users and customers have been pushing out and putting into the CNCF and other open source foundations. Just the other day we heard a briefing on Cruise Control, a project that LinkedIn has put out as open source on GitHub.
B
There's just a ton of things happening beyond the production use cases. You'll hear from a lot of folks; in the talk on Health OS from Anthem you'll hear about SPIFFE, SPIRE, Envoy, and all of the other projects they're participating in. It's just been a very interesting first half of 2021, and we encourage you to join OpenShift Commons, share your stories, and get introduced to your peers. So today we're going to kick it off; I've picked a few of the talks.
B
Some of these are my favorites, and you may have your own favorites; they're mostly all on YouTube. The last four are still in the Red Hat Summit session catalog and are available there on demand, and they will get uploaded to YouTube eventually. But we're going to run through them, and I just want to set up this first one a little bit. This is Version 1, which is a Red Hat partner, and Filippo Sassi is going to talk a lot about Version 1's work.
C
Okay, so thanks everyone for making the time to join this presentation today. During this session I'm going to introduce you to one of the applications we have implemented for one of our customers. The application makes use of text analytics and artificial intelligence to reduce the risk of GDPR breaches. But before diving into that, let me introduce myself. My name is Filippo Sassi. I am a senior software engineer, and I've been working in this industry for quite a few years now.
C
I've worked in companies like IBM, Concentrix, and obviously Version 1, which I joined in 2014. In my career I have covered a number of different roles: .NET web developer, scrum master, tech lead, and, since 2019, when I joined the Version 1 Innovation Labs, I am now one of its leaders. Version 1 is an IT consultancy firm driving customer success through over 20 years of market leadership and innovation in IT services. Version 1 believes in modernizing, innovating, and accelerating our customers' business transformation.
C
We believe that this is what makes Version 1 different, and, more importantly, our customers agree. On this slide are some stats about Version 1. The interesting thing, I suppose, is the quick growth rate in some of the figures, and I'm not going to lie to you: to create this deck I reused some of the slides from a previous presentation we ran in October 2020.
C
This slide at the time showed just over 1,300 employees; in just two quarters we're already reaching 1.5k. I think that, more than any other number, this demonstrates how Version 1 is growing while committing to our core values.
C
DAFM is the Irish government Department of Agriculture, Food and the Marine. DAFM's vision is to be an innovative and sustainable agri-food sector operating to the highest standards. DAFM is one of the oldest Version 1 customers, and Version 1 provides many teams dealing with the different DAFM schemes, applications, and more. One of these teams is the BPS team; BPS stands for Basic Payment Scheme.
C
The
team
handles
application
and
payments
of
pharma
grants
through
the
bps
application,
which
is
can
be
accessed
through
modern
digital
channels,
which
makes
the
customer
journey
easier
with
fewer
administrative
overhead.
C
In
the
last
couple
of
years,
dapham
has
invested
heavily
in
the
openshift
container
platform.
This
choice
was
primarily
justified
by
one
of
the
key
strategic
aims
for
the
department
to
provide
a
capability
for
fast,
flexible
application
deployment
and,
at
the
same
time,
to
be
responsive
to
changing
and
emerging
needs
over
time.
All
of
these,
while
focusing
on
small
products
that
can
be
designed
quickly,
iterated
and
released,
often
in
particular,
the
openshift
container
platform,
was
a
suitable
choice
for
the
product.
C
The
this
solution
reaffirmed
the
department
beliefs
that
the
investment
in
the
openshift
platform
would
provide
long-term
strategic
gains
in
line
with
the
public
service.
Ict
strategy.
Dapham
is
focused
on
digital
transformation,
including
both
front-end
and
back-office
transformation
to
deliver
services
for
citizens,
businesses
and
the
government
from
may
2018.
C
The
general
data
protection
gdpr
regulation
came
into
effect,
requiring
businesses
to
protect
the
personal
data
and
privacy
of
the
european
citizens
for
any
transaction
that
occurs
between
the
european
member
states
and
in
line
with
these
requirements.
With
this
regulation,
one
of
the
fan
priority
for
transformation
was
to
protect
the
personal
data
for
not
only
the
different
customers,
but
also
for
the
customers
of
the
public
service
as
a
whole,
and
in
particular
we
consider
the
following
use
case
and
to
receive
to
receive
grant
payments.
The
farmers
must
upload
various
documentation
through
the
department
website.
C
The department wanted to understand how technology could be applied to assist, and to answer this question DAFM's Version 1 on-site team contacted the Version 1 Innovation Labs.
C
The Labs is a value-added service that Version 1 provides to its customers to explore disruptive technologies. A couple of points to note here: it's for Version 1 customers. That means that whatever we do, we do it for clients who are already within the Version 1 customer base, and for them we are a value-added service, so we are free of charge. That doesn't mean that we are free of cost: indeed, we are expecting to use their data and their resources, and these in particular will have an impact on cost.
C
A proof of value is the same thing as a proof of concept, basically a fully working prototype; we just use that name to highlight that what we do actually brings value to the customers' businesses. So far we have implemented at least one PoV in all the technological areas shown on the slide, the only exception being IoT. Some of those PoVs were quite cool. I remember one of the first ones I worked on when I joined the Labs was a proof of value for a virtual reality application using an Oculus headset.
C
For the same customer, we immediately implemented another PoV, this time using augmented reality on an Android tablet, just to show them the different experiences. Both PoVs were very well received by the customers, but we understood that to push this forward, to move it into production and to provide the client with the wow factor they were looking for, we simply didn't have the capabilities within the company. That's because these technologies are quite new and they require very advanced graphical skills, especially 3D graphical skills, which are almost those required in the gaming industry.
C
The
innovation
engagement
process
with
dafam
was
exactly
the
same
standard
approach
that
any
version
one
customer
faces
when
engaging
with
the
labs.
The
process
is
the
following:
it
always
starts
from
ideation,
so
we
are
constantly
talking
with
our
customers
to
understand
if
they
are
facing
business
problems
which
are
not
solvable
by
standard
day-to-day
technology.
When
we
identify
one
of
those
problems
we
started
searching.
So
we
look
for
academical
or
industrial
resources.
C
We
run
brainstorming
and
design
thinking
sessions
until
we
found
the
technology
that
could
help
solving
the
problem
at
hand
and
when
we
identify
such
a
technology,
we
start
experimenting
with
it.
When
we're
happy
enough,
when
we
think
we
have
found
a
potential
solution,
we
formalize
it
into
an
innovation
canvas
the
canvas
acts
like
a
contract
between
us
and
the
and
the
in
the
customer,
and
the
document
contains
information
such
as
the
problem.
C
We
are
trying
to
solve
the
the
proposed
solution,
the
people
who
will
make
the
development
team
a
timeline
and
the
metrics
that
will
be
used
at
the
end
of
the
project
to
determine
its
success.
When
all
of
this
is
agreed
and
the
canvas
is
signed,
we
start
with
the
actual
implementation.
C
C
We
consider
we
have
proven
the
value
of
the
technology
we
got
in
touch
with
the
rest
of
the
verse,
1
delivery
teams
to
the
final
roadmap
for
moving
the
pov
live,
so
this
is
exactly
the
same
process
as
daf
and
follow
when
engaging
with
us
on
this
particular
use
case
and
the
outcome.
The
outcome
of
the
whole
process
is
smart
text,
so
using
best
of
breed
open
source
technology,
smart
text
provide
text,
analytic
capabilities
to
extract
meaningful
insights
from
unstructured
data,
so
documents
images,
pdfs,
etc.
C
These
insights
are
the
features
that
are
later
used
for
artificial
intelligence
modeling
to
ultimately
classify
if
the
document
contained
or
not
personally
sensitive
information.
Obviously
this
is
just
one
of
the
many
possible
applications.
Martex
could
be
used
by
many
other
scenarios
and
we
will
shortly
see
some
examples,
but
for
now
let
me
just
dive
a
little
bit
more
into
the
components
of
the
solution.
The
first
one
is
the
ocr
cr
stands
for
optical
character,
recognition,
and
this
component
extract
the
textual
content
from
the
unstructured
documents.
C
Each
of
these
components
is
exposed
as
a
separate
api,
ensuring
those
coupling
and
easily
combination
the
apis
use,
cutting
edge,
open
source
libraries
which,
with
appropriate
customization
for
these
and
other
use
cases,
as
example
of
customization.
We
are
currently
retraining
the
open
source
machine
learning
model,
with
specific
set
of
documents
for
making
the
models
domain
specific.
C
As
a
consequence,
we
wanted
that
smart
text
utilize
the
power
of
the
platform
to
demonstrate
its
value,
and
that
came
out
to
be
a
great
choice
as
the
openshift
platform
helped
us
solving
some
of
the
issues
that
we
could
have
faced.
Otherwise,
for
instance,
the
smart
text
solution
was
designed
to
take
advantage
of
the
python
machine
learning
libraries,
but
this
architecture
was
not
supported
in
the
data
infrastructure.
C
The
openshift
platform
allowed
for
secure
deployment
and
build
of
reddit
published
containers.
What
would
have
been
would
have
been
impossible.
Otherwise,
given
the
available
budget
and
time
and
likewise
building
our
tests
and
production
environments
for
the
pros,
it
would
have
normally
been
another
large
costs,
but
this
was
easily
overcome
with
openshift
animates
streams.
C
The
solution
is
currently
live
actively:
mitigating
gdpr
risk
for
farmers
and
agents
flagging
potential
errors
during
the
documents
upload.
This
has
enabled
the
department
to
switch
from
a
reactive
to
a
proactive
approach
of
identifying
data
breaches
and
isolating
them
and
preventing
them
from
occurring.
C
The
project
demonstrated
that
the
department
led
the
way
in
using
cartinage
open
source
technology
such
as
openshift
and
natural
language
processing
libraries.
From
for
what
concern
in
the
labs,
we
were
able
to
demonstrate
our
credibility
in
the
areas
of
text
analytics
and
machine
learning
and
artificial
intelligence.
C
The
smart
tech
solution
is
now
a
key
piece
of
our
smart
action.
Suite
that
we
are
developing.
We
will
shortly
talk
about
the
smart
action
suite
here.
I
just
would
like
to
say
that,
since
we
have
implemented
the
solution,
we
are
having
many
conversations
with
our
customers
and
smart
tech
storage
generated
really
interested.
C
We
immediately
understood
that
creating
an
ability
to
extract
available
insights
and
metadata
from
a
structured
document
being
them
forms
handwritten
letters.
Images
of
document
whatever
would
be
hugely
available
behind
the
initial
use
case,
for
instance
for
one
of
our
customers
in
the
uk,
we
have
been
recently
implementing
a
document
summarization
tool
and
the
the
goal
of
the
tool
is
to
provide
a
key
pieces
of
information
from
the
end
to
the
end
users
from
a
set
of
documents
without
the
user
having
to
read
any
of
those
documents
at
the
core
of
this
solution.
C
There
is
more
text
we
have
also
recently
demonstrated
it
to
many
other
clients,
both
in
ireland
and
in
the
in
the
uk.
All
in
all,
we
think
that
this
project
is
an
excellent
demonstration
of
how
open
source
technology
could
be
utilized
and
augmented
to
develop
solutions
which
which
are
comparable
to
the
major
cloud
vendors.
C
Indeed,
we
commissioned
a
report
to
compare
smart
tech
solution
with
similar
technologies
from
azure
and
aws,
and
this
report
showed
that
the
performance
from
the
are
very
much
comparable
to
those
of
microsoft,
computer
vision
and
cognitive
services
on
one
side
and
aws
extract
and
comprehend
on
the
other
within
daphne.
The
small
tech
solution
was
the
first
application
deployed
on
the
openshift
container
platform
and
I
started
ironed
out
all
the
user
technical
challenges
with
deploying
onto
a
new
platform.
C
However,
talking
with
one
of
the
main
developers,
I
found
particularly
interesting
that
one
of
the
weakest
point
of
the
original
implementation
was
the
central
role
of
the
orchestrator
components
in
in
the
regional
architecture
because
of
the
orchestrator.
That
architecture
was
highly
coupled
working
through
a
set
of
well-defined
steps
to
be
executed,
together
being
so,
the
orchestrator
needed
to
know
everything
about
anything
else,
making
it
the
single
point
of
failure,
that
is
their
case,
goes
down
everything
everything
that
goes
down
too.
So
we
look,
we
look
at
more
modern
architectural
approaches.
C
C
I
previously
mentioned
the
smart
action
suite
so
before
concluding
this
presentation.
Just
please
allow
me
to
quickly
introduce
it
to
you
and
before
we
look
at
the
standard
innovation
journey,
our
customers
are
facing
when
engaging
with
the
innovation
labs.
The
journey
goes
from
ideation
to
the
successful
implementation
of
the
of
the
apov.
C
However,
over
time
we
noticed
that
many
of
our
customers
were
facing
similar
problems.
So,
instead
of
reinventing
the
wheel
all
the
time,
we
have
decided
to
start
productizing
over
existing
povs
and
build
what
we
call
the
smart
action
suite.
This
is
a
suite
of
components
which
could
be
used
either
either
in
isolation
or
like
lego.
Bricks
could
be
combined
together
in
different
numbers
in
order
to
build
many
solutions
which
could
apply
to
different
use
cases
and
scenarios.
C
Some
of
the
components
like
smart
text
and
smart
data
capture
have
already
been
developed
developed
the
other
will
be
implemented
in
the
next
future.
The
overall
idea
here
is
to
provide
our
clients
with
a
hyper
automation
set
of
apps
which
empower
their
employees,
allowing
them
to
take
a
better
and
more
efficient
decision
in
a
shorter
time.
In
a
nutshell,
the
key
components
are
shown
on
the
slide.
We
already
talked
about
smart
text.
I
will
just
introduce
another
couple
of
them.
C
The
right
reference
from
the
documents
will
be
returned:
smart
automation,
the
best
of
breed
automation,
tools
to
develop
hyper
automation,
so
with
a
combination
of
rpa
and
ai
and
finally,
smart
process
advisor,
which
is
designed
designed
to
guide
staff
through
organizational
processes
advising
them
each
step
of
the
way,
and
that
was
all
I
wanted
to
share
with
you
today.
I
hope
you
find
it
interesting.
Thank
you
very
much
for
your
attention.
If
you
have
any
question,
you
can
answer
it
in
the
chat
below
well.
B
Well, hello everybody, and welcome back. I hope you enjoyed the talk we just heard from Filippo Sassi. Up next we have one of my favorite people, Joseph Myers from Rohde & Schwarz, who has been a long-time member of the OKD working group, the open source side of OpenShift. He gave a great talk at KubeCon EU about some of the benefits of working with the open source side of things, as well as talking through his journey, and Rohde & Schwarz's journey, to running OpenShift on the Azure platform. So without too much further ado.
B
Just a suggestion: if you're interested in working with OKD and joining the OKD working group, you can go to okd.io and join there, and you'll find all the links to join the Google group. We meet every Tuesday at 1600 UTC, and we'd be thrilled to have you join. But here, let Joseph tell you a little bit about his road to OKD. It may have been bumpy, but it was lots of fun to do it together and collaborate with the Rohde & Schwarz team. So let's kick it off.
E
That was only five months after the start of the program, and this was very tough for us, because we had experience with Docker but not with Kubernetes, and it was clear to us that we wanted to do it on Kubernetes. The first task of this MVP was to provide two Kubernetes clusters: one on premises for our developers, because we have a policy in my company that no source code is ever allowed to be in the public cloud.
E
So we had to create a cluster on premises for our developers, so they can access the source code and do builds of their artifacts, and the second cluster should be in the public cloud so our customers can access the applications, because we don't serve our software to the internet from our on-premises clusters; we have separate clusters for that.
E
We had a few requirements for that; there were at least three very important ones. The first one was: don't pay any license fees for the Kubernetes distribution, because we were just starting with our digital business and we didn't want to put the burden of license fees on it. The motto was: let the business grow first.
E
So
this
is
the
most
important
requirement
in
the
beginning.
For
us,
the
second
one
was:
the
system
must
be
stable,
that's
obvious
yeah,
but
we
learned
that
it's
not
so
easy
to
achieve.
We
must
take,
and
the
distribution
should
take
care
about
everything
that
you
don't
want
to
mess
around.
Normally,
with
with
networking
with
storage
and
a
few
more
things
yeah,
we
learned
a
lot
about
that.
It's
a
hard
way
that
it's
not
easy
to
maintain
these
things.
E
If
you
have
to
so
yeah,
and
also
if
you
look
back
it's
it's
one
of
the
biggest
and
most
important
requirements,
you
should
take
into
account
if
you
choose
kubernetes
distribution.
Also,
the
third
requirement
was
that
we
would
like
to
have
the
same
stack
on
premises
and
in
the
public
cloud
and
the
same
user
experience.
E
So
that
our
developers
don't
have
to
switch
around
in
their
minds
with
the
usage
of
the
tooling
independence
of
if
they
use
the
on-premise
cluster
or
the
public
cloud
cluster,
we
wanted
to
have
a
look
and
feel
that's
the
same
everywhere.
E
Then
we
went
into
an
evolution.
Phase.
Five
months
is
very
tough,
so
we
rushed
through
that
very
fast.
First,
we
tried
the
obvious
we
used
vanilla
kubernetes,
to
create
our
first
clusters
on
head.
Take
care
about
everything
on
our
own
storage.
Networking
usability
was
was
disastrous
in
the
beginning,
and
so
we
gave
up
very
soon.
That
was
not
the
way
we
wanted
to
work
and,
together
with
our
company,
though,
we
were
searching
for
something
better.
E
So
we
tried
out
several
community
driven
kubernetes
distributions.
I
don't
want
to
name
them,
but
we
had
mixed
experiences.
We
had
problems
with
stability.
I
I
remember
one
tool
that
had
an
automatic
installer
for
clusters
and
every
second
installation
failed
because
of
bugs
user
experience
was
not
so
good
on
the
others.
So
we
were
yeah.
We
had
no
good
feeling
that
we
are
on
the
right
track.
E
It
was
a
was
a
very
tough
time
for
us
during
the
civilization
phase
it
was.
It
was
a
pure
coincidence
that
we
attended
a
sales
presentation
for
openshift,
because
openshift
violated
our
most
important
requirement
that
we
don't
want
it
to
spend
money
for
our
kubernetes
cluster.
You
remember
we
did
not
want
to
have
the
burden
of
license
fees
on
our
digital
business,
but
yeah
it.
It
sounded
very
good
what
we
heard
here.
The
salesman
did
a
very
good
job
in
this
presentation
and
yeah.
He
told
us
about
that.
E
E
Regarding
the
features
it
took,
care
about,
storage
network
had
a
nice
ui
at
that
time,
and
and
great
dev
tools
took
care
of
our
builds.
Everything
was
integrated,
very
good
in
the
web
ui.
It
was
great
for
our
developers.
We
also
got
very,
very
good
feedback
from
them
and
the
third
one
was
set.
Okay
d3
we
could.
We
could
install
it
everywhere,
on-premise
on
our
vsphere
clusters
and
in
the
public
cloud
in
azure.
It
was
very
easy
to
get
the
clusters
running.
We
had
lots
of
configuration
options.
E
We
had
ansible
out
of
the
box
coming
with
okd
that
I
did
the
installation
yeah.
It
was
great.
We
tried
it
out.
E
We used it for our MVP in the end, and we successfully delivered our MVP for the trade show, running on OKD. Management was very happy with us, and it was a cool time. It was very stressful, but we learned lots of new things during this phase.
E
A year later, in 2019, we delivered even more cloud products, and we were the heroes because we enabled all of them with OKD; it's a great distribution. In 2019 everything was cool, OKD 3 was fine, we were very happy, and we didn't regret that we chose it. Also in 2019 we improved and automated our cloud ecosystem, because for the MVP we had taken lots of shortcuts and workarounds, since we were not so experienced with Kubernetes, and the next goal was to automate everything.
E
So we found lots of tools that helped us a lot in this phase. Ansible we had experience with before; I found Terraform, which is absolutely great for creating infrastructure with different providers. It's available for vSphere, Azure, AWS, for everything you can imagine. So we use Terraform to create the infrastructure and Ansible to install and configure OKD.
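The talk doesn't show the automation itself, so here is a minimal, hypothetical sketch of that two-step flow in Go: Terraform creates the infrastructure, then Ansible installs and configures OKD. The directory, inventory, and playbook names are invented for illustration.

```go
// Hypothetical glue (not Rohde & Schwarz's actual tooling) for the flow the
// talk describes: Terraform for infrastructure, then Ansible for OKD.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a CLI tool, streams its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", name, args, err)
	}
}

func main() {
	// Step 1: create VMs, networks, and load balancers with Terraform.
	run("terraform", "-chdir=infra/vsphere", "init")
	run("terraform", "-chdir=infra/vsphere", "apply", "-auto-approve")

	// Step 2: install and configure OKD on the new hosts with Ansible.
	run("ansible-playbook", "-i", "inventory/hosts.ini", "playbooks/okd-install.yml")
}
```

In practice this kind of glue usually lives in a CI pipeline rather than a standalone binary, but the ordering is the point: infrastructure first, then cluster installation and configuration.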
E
Then we created CI/CD pipelines. I liked a lot that OpenShift had, and still has, great support for Jenkins; everything is tightly integrated in the web UI, which is nice. We also created our first self-service portal. That's a tool running on our cluster that provides our developers with simple wizards in a web user interface, where you fill out a few fields and get tasks done on the cluster, like setting up a CI/CD environment with Jenkins with the proper secrets, everything set up completely automatically.
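As a rough illustration of what one such self-service task could look like behind the wizard (an assumption, not the portal's actual code), the sketch below uses client-go from inside the cluster to create a namespace into which a Jenkins-based CI/CD environment could then be provisioned. The namespace name and label are made up.

```go
// Hypothetical portal task: create a per-team namespace from inside the cluster.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// The portal runs on the cluster itself, so it can use its service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "team-alpha-cicd",
			Labels: map[string]string{"created-by": "self-service-portal"},
		},
	}
	if _, err := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("namespace created; Jenkins, secrets, and pipelines would be set up next")
}
```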
E
In those first months we learned that the last release of OKD 3 had occurred in autumn 2018 and no new version had come out since. At the beginning of 2019, I think, OpenShift 4 was released, but no OKD 4 was available, and over the whole year no OKD 4 was in sight. This was a problem for us, because more and more tools did not work on OKD 3, because the Kubernetes version, I think it was 1.11, got too old for lots of tools, and we had to choose wisely which tools we used. This was manageable, but we were waiting for something new, for OKD 4, and it did not come. So we started to learn what was blocking the release of OKD 4. I myself tried the OKD 4 alpha in November 2019.
E
I remember that because a colleague of mine was the master of our DNS server, and he spent a Saturday evening, or Saturday night, together with me in a Skype session to set up everything we needed for OKD 4. He helped me debug the first steps, and in the end it worked: I saw a web UI. I was so happy. I remember that this web UI was so much more advanced than what we already had with OKD 3.
E
It was so much better, but it was not easy to get there. I had to do lots of manual steps, hacking around in the OS, in the Linux console, to find out why the installation failed. It was an alpha, yes, but it worked on vSphere; if it ran, it ran pretty well, and I delved deeper into development.
E
Everyone who wants to help can attend this working group, so I did, and the goal was to help, to do my best to bring OKD 4 to life. That's what I did in 2020: I started helping with OKD 4, so I created a few fixes for the installer for Azure, for example, because Azure at this time was not supported by OKD at all. There were a few problems with Fedora CoreOS, which is used in OKD, in comparison to Red Hat CoreOS, which is used in OpenShift.
E
There were a few problems with that, no big ones, but this was my first attempt to create a pull request to the OKD 4 community GitHub repos, and my first PR was far too big, because I also patched the Terraform code, and Vadim Rutkovsky, one of the main maintainers of OKD, was refusing it.
E
I did lots of testing. I also organized a vSphere license; it's not a trial, it's called the VMware User Group program, I don't remember exactly the product name, and it's available for 150 euros, which is very affordable. I did that too because I wanted to get OKD 4 to life, and I reported lots of bugs and fixed several of them.
E
Not all bugs are so complicated to fix yourself, I found out. This was also a time when our team learned a lot about the internals of OKD 4, and that we can use its mechanics to solve almost any task we wanted to achieve. It's a great thing. I also did something that may sound a little bit crazy: I created a t-shirt for the working group video meetings.
E
I always attended them regularly, and the idea was to increase the release pressure if everyone always saw this "OKD 4 GA" on my shirt. It was more of a funny idea, and I promised not to change the shirt before the release had been made, but it took a few months. I have to admit that I changed the shirt in between; I never told anybody that.
E
This is also great: you have everything in Git, no scripts that run once and developers changing configuration where nobody knows afterwards who changed what, because everything in the cluster comes from Git; Git is the single source of truth. That's nice with Argo, especially in combination with OKD. We changed our self-service portal to use GitOps as well, and we also migrated all our on-premises apps from OKD 3 to OKD 4. We had to change the routes and a few other things.
E
Of course, a DNS name for OKD 4 contains, I think, a part that is called "apps" in the URL. That's a little bit annoying, but we had to change that for all our apps, and in the end it worked. Since July 2020 we have upgraded OKD 4 on premises very often; it almost always worked great. Between OpenShift 4.6 and 4.7 there were a few hiccups, but we could always fix them or find workarounds together with the community.
E
Since 2018 we have attracted many of our developers to start a Kubernetes journey and create digital business on our Kubernetes platform. That's great.
E
I counted last week that we have onboarded more than 50 projects, not only playgrounds but real projects, on our OKD clusters, and it's available to more than 2,000 developers in my company. It's running very stable, but we are moving more and more business-critical applications to our OKD clusters. We have a big manufacturing site, a few manufacturing sites to be more precise, that also want to use Kubernetes and cloud services, and that's why we decided to invest at this time in commercial support, because we have digital business running.
E
We have lots of interest in my company, we have business-critical applications, and we always said that this would be the time to invest in commercial support, and we did that. A few weeks ago we started creating an ARO cluster, that's the abbreviation for Azure Red Hat OpenShift, on Azure for our public cloud cluster.
E
That's the customer-facing one, and on premises we invested in OpenShift, or, what was the name of that, OKE, OpenShift Kubernetes Engine. It's not OKD, it's OKE; don't ask me why they have something so similar.
E
OKE is a version of OpenShift where you have support, in fact, but not for everything, and we are not using all the features of OpenShift at the moment in all our environments. Because of that, we chose OKE for some clusters and OpenShift, the full-fledged version, for the services where we need full support. For the moment we are very happy with this decision. To conclude what I told you in this presentation: I am absolutely thankful to have had OKD during our journey.
E
A few things are different regarding upgrades, because in OKD you only have a rolling distribution. This means that if something is fixed, it won't get backported; it's always going forward. In OpenShift you have several stable or fast channels. But if you don't need that, and in the beginning you don't need that, then, to be honest, it's a fair deal: you don't pay any fees and you get a full-fledged, great Kubernetes distribution. I can congratulate Red Hat on the decision to have a community version of OpenShift in their program, because, as I said, I think it's an epic door opener for their main product, OpenShift.
E
They were always very helpful, and Vadim especially seems to be online 24/7 on Slack; without these guys we would not have managed the first steps with OKD 4, so thank you all. This was our journey. It took us three years; now we are absolutely experienced in Kubernetes, I can compile some modules on my own, and we know the internals of OKD and Kubernetes very well.
E
We help in the community in several projects. Thank you for watching, and if you have questions I am available in the chat.
B
All right, well, I think that was one of the best endorsements for participating in OKD and in the working group and getting your feet wet with OpenShift through the open source side of things. The next talk that we're delivering here is about an initiative that's coming out of Anthem, around Health OS, and there are two folks: Bobby Samuel from Anthem and Frederick Kautz from Sharecare. Sharecare just recently acquired doc.ai, which is where Bobby met Frederick, and this is a really interesting project because it incorporates so many other upstream projects: SPIFFE, SPIRE, Envoy, Network Service Mesh, just to name a few. They're going to tell you a bit about this initiative and how Anthem is going about bringing together all of its end users, customers, and partners into this Health OS initiative.
F
Hi, my name is Bobby Samuel, and I've got Frederick Kautz here with me. We're going to talk to you today about Health OS and enabling standards-based healthcare interoperability using cloud native and zero trust. First of all, I'm Bobby, I work at Anthem, and I lead up Health OS development as well as Precision Insights. Frederick, would you like to introduce yourself?
F
What's the point of all this? Health OS is something that we've created internally here within Anthem. Payers are seen as the middleman pain point across the ecosystem, causing abrasion across various user segments, whether it's providers, members, or even other payers, but we also sit in a position where we have the richest longitudinal view of data, and that's whole-health data about the person.
F
So Health OS helps us operationalize our whole-health data to drive improved outcomes, reduce costs, and, overall, increase efficiency. We'll talk to you about how we do that, but at the foundation of it all, Health OS is a platform. It's a hub whose primary emphasis is interoperability and then driving world-class experiences, and it uses machine learning and AI to drive insights and also actions.
F
Just to talk about the business architecture and how the pieces fit together: at the bottom we've got the data layer, and that data layer focuses on integrations with EHRs; it's got payer and clinical data, and our data about members, our constituents, is based on FHIR, the FHIR standard. On top of that layer, and this is where we'll get into cloud native and zero trust, in the security layer and our platform layer, we've got a number of things that are running and happening.
F
Insights and action apps live here and are created here. We've got tool sets, IDEs and tool sets, to rapidly build, validate, and deploy health apps, and this is where we'll talk about where we're implementing zero trust to do workload identity management. On top of that, we've got the interaction layer.
F
The cool thing about Health OS, or one of the many cool things about Health OS, is that whether it's a UI/UX that Health OS manages or a UI/UX that someone else manages, whether it's another EHR or a homegrown app that we have, those all plug in and have the benefit of connecting back into all of these health apps and back into the place where we've got the rich data stores. So this is the ecosystem that we've been putting together, with client application endpoints to connect, as well as our SDKs to build and rapidly deploy apps.
F
So in our ecosystem, what are we trying to do this for? At Anthem we have a number of partners we work with, a number of partners that we connect with in various lines of business, but the big problem is they're not connected: Anthem is connected to them, but they're not connected to each other, and what this allows us to do is connect all the apps to each other.
F
Health OS allows us to connect to Anthem's data ocean. It allows health apps, insights, and actions to run, and it connects all these different apps. So we bring our digital ecosystem together, and we bring together the EMR systems that we connect with, as well as internal systems that exist within Anthem.
F
All of these things work together, focused on a better outcome for the member. So let me zoom back out to what our ecosystem is and where zero trust fits in. We've put Health OS in the center once again, with action apps and insight apps. An example of an insight app would be: what benefits are covered for Bobby, or does Bobby have this particular drug or treatment in his formulary? An action could be scheduling an appointment.
F
It could be one-click prescriptions, or painless prior auth, one-click prior authorizations. Those things run together using zero trust connections.
F
Epic, Cerner, Athenahealth, and all of these are connected together, working together, once again focused on our members' health and improving the health of humanity.
G
Thank you, Bobby. So before we jump into zero trust, let's talk a little bit about some security basics. Very often, when you speak with a security or information security person, you'll hear about the CIA triad. We actually look at four things now, but the first three, the CIA, are what people would traditionally look at. Those three are: confidentiality, is the information protected against unauthorized viewing or access? We look at integrity: has the information been modified in a way that was unauthorized, and how do we protect it from being modified? We also look at availability: is the information available when you need it to be? And there's a fourth thing that has been added in more recent times, called non-repudiation, which is: how can you ensure that an entity that has performed a transaction cannot back out of that transaction? There are multiple reasons for this, which could include, at the business layer, how do you prevent fraud?
G
How do you ensure that you can observe the system and know that that's what the state was likely to be? It can also be about making sure, when you're looking at security systems, that you know exactly who you're connecting with and that it hasn't been swapped out with someone else. So in general there are now four main categories that people tend to look at; there are a couple of others that people will bring in as well, but these are the main four that you tend to see. Using this particular framework, we then take a look at the business requirements: what is it that we're trying to protect, and what has changed?
G
So when we look at the zero trust space and why it's important, one of the things we want to look at is: what are the previous assumptions that we've made, and what is the reality that we're seeing today? What has changed? The differences between that assumption and reality can be seen in the form of cyber attacks, where people will perform data breaches, run ransomware or denial-of-service attacks, forge identities, and so on, and the policies that we tend to apply from a regulatory or policy perspective may also end up ossifying.
G
The traditional model is one where you have a trusted network, and in that network you have your services. If you need to connect to another network, you may have a firewall that you put in between them in order to protect entities in one network from entities in another network. But the problem is that if you end up with an attacker on one of these networks, then there's a lot that they can do, a lot of damage that can be done.
G
In the zero trust model, instead, what we say is: what if that network was not trusted? It's not implicitly trusted. That doesn't mean the firewalls go away; it doesn't mean that you're not trying to protect the network, but it means you're no longer saying this network is the implicit thing in which we base our trust. Once you no longer trust your network, you have to look at where you push the controls, and the controls end up being at the services themselves.
G
Identity is: what is it that identifies your service, your user, or your data? How do you know that what you're looking at is the thing you're looking for, and how do you attest that identity? Policy is: how do you develop the rules, apply those rules, and enforce those rules across those identities?
G
From the automation perspective: let's say you have a single system and you can put a person on that system to defend it. When you start to try to scale this out to a large number of systems, hundreds of systems, thousands of systems, tens of thousands of systems, you need to have automation in place that is able to help you assign the identity and enforce the policy, but also bring in things like observability.
G
That way you can audit what's going on and have controls over what the automation is capable of doing and what it's not able to do. So it ends up being three intertwined primary pillars that have to be put together in order to build a zero trust framework.
G
So we've been working on a reference implementation for this in the cloud native environment, and we focus on three primary things. If you notice, in the triangle I actually made them link up, so you can see: for identity we're using SPIFFE and SPIRE, for policy we're using Open Policy Agent, and for automation we're relying heavily on things like Network Service Mesh. Now, these aren't the only things in the infrastructure, but they're representative of the type of things that we're trying to accomplish, and we'll go over each of these in more detail soon.
G
We also build this on top of Kubernetes; we build it on top of systems like OpenShift, and we build in automation. On the infrastructure side we have GitOps-style processes that we're bringing in, and underpinning all of this you still need observability across the whole stack, you still need control over the whole stack. So it ends up becoming this model, which this particular diagram represents, that all works in coordination to deliver the infrastructure that is part of Health OS.
G
So what SPIFFE and SPIRE are: they provide identities to your workloads. Most people are familiar with user identity: you put in your password, you log into an online service, you have that user identity. In this scenario we're looking at workload identities, so every workload receives an X.509 certificate.
G
Normally the client validates the identity of the server, for example your bank, but simultaneously the server is capable of validating the identity of the client. So you have this two-way validation that occurs within a trust domain. We're able to create these identities that live within a trust domain and allow workloads to establish who they are, and these identities are constantly rotated out: by default, if you're using SPIFFE and SPIRE, they get rotated out every hour, and every time that you assign a new certificate, you perform a verifiable attestation.
G
What we mean by that is that the system will ask for an identity, and we will look at the properties of that system. You might have a TPM module that you're working with; you might have an identity document that is within AWS or within GCP or other similar systems that have some cryptographic material inside of them that helps prove something about that system. We are able to build our SPIFFE identities with attestation that is rooted in these cryptographic materials from these types of systems.
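As a concrete sketch of that workload-identity idea (not Anthem's actual code), the example below uses the go-spiffe v2 library to obtain an automatically rotated X.509 SVID from the SPIRE Workload API and serve mutual TLS, where the server also validates the client's identity. The trust domain and SPIFFE IDs are hypothetical.

```go
// Hypothetical zero-trust service: both sides of the connection present and
// verify SPIFFE identities issued and rotated by SPIRE.
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// The Workload API attests this process and returns a short-lived,
	// automatically rotated X.509 SVID plus the trust bundle.
	source, err := workloadapi.NewX509Source(ctx)
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Only accept clients that present this specific workload identity.
	clientID := spiffeid.RequireFromString("spiffe://healthos.example.org/insight-app")
	tlsCfg := tlsconfig.MTLSServerConfig(source, source, tlsconfig.AuthorizeID(clientID))

	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: tlsCfg,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from a zero-trust service\n"))
		}),
	}
	// Certificates come from the SVID source, so no key files are passed in.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```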
G
The fact that you're connecting with a specific identity is enough for the system to recognize what type of system it is and what type of policies need to be applied. In terms of policy, we're looking at things like Open Policy Agent. Open Policy Agent allows you to consume the identities that are produced by a system like SPIFFE and SPIRE and allows you to decide: what is this system allowed to do?
G
What are the capabilities it is able to fulfill? When we created these particular systems, the properties we were looking for that led us to Open Policy Agent were: it has to be something that's human readable, and it has to be something that meets the look and shape of common policy. In other words: how do you classify data? How do you classify workloads? How can you say this system has PHI and create defaults that say: don't allow it to connect to systems that don't have PHI, or vice versa? And then from there we can carve out patterns that the system is allowed to perform. In this example, which we took from openpolicyagent.org, it's one of the examples they have on their front web page, you can see a request that says pet owners are allowed, with a specific ID that is verified by the JWT, which is something that identifies the user cryptographically, to make a request against this API in a specific way, if and only if the request comes from, let's say this is in front of a database, if and only if the request comes from a client or a workload that we have identified.
G
So it gives us a lot of flexibility to define the exact shape of the policies that we want in a human-readable way. That also allows us to get this policy into Git. It allows us to have code reviews on these policies, to share them with other stakeholders so we can get their opinions on whether a policy meets their requirements or not, and it gives us that change over time, so we can see how a policy has changed and when it changed, because it's all checked into Git.
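To make that concrete, here is a small, hypothetical sketch of evaluating such a human-readable policy from Go with OPA's rego package, in the spirit of the pet-owners example mentioned above; the package name, input fields, and SPIFFE ID are invented for illustration (classic Rego syntax, as used at the time of this talk; OPA 1.x additionally requires the `if` keyword on rules).

```go
// Hypothetical policy check: only an identified workload may read from this API.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/open-policy-agent/opa/rego"
)

const policy = `
package healthos.authz

default allow = false

# Allow reads only from a workload identity we have explicitly approved.
allow {
	input.method == "GET"
	input.client_spiffe_id == "spiffe://healthos.example.org/insight-app"
}
`

func main() {
	ctx := context.Background()

	query, err := rego.New(
		rego.Query("data.healthos.authz.allow"),
		rego.Module("policy.rego", policy),
	).PrepareForEval(ctx)
	if err != nil {
		log.Fatal(err)
	}

	input := map[string]interface{}{
		"method":           "GET",
		"client_spiffe_id": "spiffe://healthos.example.org/insight-app",
	}
	results, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", results.Allowed()) // true for this input
}
```

Because the policy is plain text, it can live in Git next to the application, go through the same code review, and be diffed over time, which is exactly the property described above.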
G
We also rely on a new technology called Network Service Mesh. Network Service Mesh is another CNCF project that is looking to automate low-level networking. If you're familiar with the OSI model, we're looking at layer two and layer three, at frames in Ethernet, IP, and other similar levels, and what it does is facilitate the underlay to services. Typically, when you're running in Kubernetes, you'll often have multiple clusters that you want to connect together in some way, and when you connect them, the assumption is that there's already connectivity established between both systems. What Network Service Mesh allows you to do is acknowledge that there may not be a connection there, and that you may need certain things in place in order to make that connection work. So this allows the operator to say: in order for this connection to occur, I need it to have a firewall and an intrusion detection system, and it needs to go through a certain VPN gateway and a certain VPN concentrator.
G
Network Service Mesh allows you to automate these processes through a cloud-native API, with native support for SPIFFE, SPIRE, and Open Policy Agent, and it provides you cryptographic non-repudiation of that connection chain. In other words, in this example we have on the left a Health OS app going through a specific VPN gateway to a specific VPN concentrator to a specific health app. We can get the cryptographic identity of everything in between and see: what is the system connecting through? Is it connected to systems that we trust?
G
As for the process: you have a developer, the developer will make some form of a commit into the source code system such as Git, then the CI/CD system, your continuous integration system, will see those changes that have been put into Git and will render them into your test environments, into your staging environment, and into your production environment. Every change goes through source control, every change goes through Git, which gives us auditability over the changes we make. We also have control from the QA side.
G
In fact, when you're looking at regulatory concerns in this space, it's important that your developers are not allowed to push into production. You have to have a separate group of people, a separate team, that is able to look at what changes are there and decide whether or not those changes should hit production.
G
Please join these particular communities; there are a lot of things that you can work on in those spaces, and if you're interested in the type of things that we're working on, please reach out to either Bobby or me and we'll help you navigate the path, whether it's coming to work with us directly or trying to work in the same area in your own industry. So please come and join us. With that, we have time for questions, and thank you very much.
B
Yeah, well, we'll figure all the little details out here. I loved what this last talk was about; it was focused a lot on Health OS and the healthcare industry, and the next couple of talks are from two folks that I met via another industry initiative, albeit one for telcos.
B
There was a project, the Enterprise Neurosystem initiative, which is being spearheaded by América Móvil, Verizon Media, Ericsson Research, and a bunch of other folks, along with some Red Hat support, and out of that we decided to launch an OpenShift Commons Gathering earlier this year on data science. Two of the talks that came out of that were really cool. First of all, there probably isn't a talk here that doesn't mention AI or ML.
B
That's kind of the interesting thing. This first one is by Ganesh Harinath from Verizon Media, and if you don't know, Verizon a while back acquired Yahoo, so a lot of the folks from Yahoo were doing some of this work as well. He's going to talk a little bit about that, but he's going to talk about building an edge intelligence application and what it took to do that.
B
There are lots of little pieces and parts in there, and then that's followed by a really cool talk by Paul McLaughlin from the same group of folks, the Enterprise Neurosystem initiative and that data science gathering, and he's even going to throw a little VR and AR into it. So let's queue up this first one, Chris, and see what Ganesh has to say, and then follow it with Paul.
H
I'm Ganesh Harinath, I'm with Verizon Media. I've been doing data and AI for a very long time, over a decade, and there's an interesting paradigm shift that I've started to see.
H
Moving forward 5 to 10 years, robotic arm surgery is going to be very, very normal, and what that means is a doctor in New York can perform surgery on a patient in Los Angeles. To me this is fascinating, and, interestingly, when you take a closer look at what's required for all these things to happen: robotics is important, virtual reality is very important, and artificial intelligence is the foundation for this capability. Most importantly, we, being part of a telco, with 5G will be able to converge these technologies to make this capability a reality in years to come.
H
But when we ground ourselves and take a closer look at where we are today and what we are trying to do with ML and AI, a lot of applications really require massive data on the cloud. Applying AI to understand various aspects of the network was one of the areas we were very focused on, but looking forward, industrial automation, on the right, is a space where we are starting to understand and build capabilities and solutions. On the left, autonomous cars. I'm fascinated; there's a long way to go, but the autonomous car can look at the car in front, and what needs to happen is to be able to really connect to 5G capabilities and apply AI to plan the entire route, and that's in play as well. These are the fascinating changes that we are all living through, and, interestingly, the shift has been accelerated. The way I summarize my experience: any application that we actually touch, feel, or see will be powered by AI, but it's also equally important that aspects like AI bias are taken into account when designing these applications.
H
Now, to summarize how the application shift is happening: when you take a closer look at any machine learning application, I'm sure we all know there is an aspect of model training, which is very compute intensive, and there is an aspect of inferencing, and in today's world we very easily deploy both training and inferencing on the cloud and have this ML/AI experience served directly from the cloud.
H
To accommodate what's coming, we are starting to see a paradigm shift, and that is moving the inference capability very intelligently and seamlessly from the cloud to the location closest to where the need is. For some applications, if the inferencing needs to be of the order of 10 to 25 milliseconds, that's just an estimate, then ideally you deploy the inferencing onto the CDN edge; we have CDN edge presence in 160 locations.
H
We have to start moving inferencing to what I call a 2U box. Now, this is an important paradigm shift. When we go back and look at the evolution of the internet: in the very, very beginning it used to take fairly long for pages to download when we accessed yahoo.com from Sydney, but magically, capabilities like CDNs were enabled to cache content geographically in different locations, and this technology happened behind the scenes, where a sudden change in human experience happened in terms of using the internet.
H
So today, when we take a closer look at how we want to deploy applications, enabling the CDN edge to be able to deploy ML applications is very, very critical, and there's a transformation or change actually happening in this area as well.
H
Now, what are the applications that are really being discussed right now, why would we need inferencing to happen so near real time, and what exactly is the big problem?
H
There is another very important paradigm shift that I'm sure we have all started to notice. Up until now, a lot of ML applications were primarily driven by signals from sensors; they're very two-dimensional, they're records, and there are billions of records.
H
In fact, on the platforms that our team operates and builds applications on, we ingest 100 billion records every day, but it's relatively easy to operationalize platforms which can ingest and process 100 billion records, because you have the luxury of being deployed on the cloud, and, most importantly, the inferencing aspect is on a two-dimensional record. The shift is towards video content, from which we have to pick up intelligence and apply machine learning to surface insights and solve the problem.
H
There are also other sensory signals like temperature, current, and other things. So factory automation is a space or area where we are continuing to invest a lot in building applications, and, as I call it, a 2U box: we have to deploy a 2U box, and we need a platform like Leo.
H
We
need
applications
staying
closer
to
the
edge
that
way
we
have
that
reliability,
both
in
terms
of
high
volume
inferencing
and
also
ensure
that
it
is
seamless
and
it's
actually
working
in
a
factory
environment
and
5g
private
definitely
is
going
to
play
a
big
role
to
connect
all
these
different
sensors
cameras
and
so
on
and
route
signals
and
video
streams
to
a
platform's,
a
centralized
platform
which
can
ingest
and
apply
artificial
intelligence
and
start
to
surface
insights,
to
improve
efficiencies,
to
avoid
error
near
real
time
without
any
material
loss,
and
this
is
an
area
we
verizon
are
starting
to
heavily
invest.
H
H
Now,
knowing
verizon
has
tens
of
thousands
of
cell
towers
having
technologies
like
drone
and
computer
vision.
So
on
it's
it's
very
timely
that
we
we
start
to
build
applications
instead
of
people
climbing
on
the
side
tower
to
understand
issues
with
the
towers
and
connections
and
so
on,
fly
drones
to
understand
the
issues
around
those
cell
towers
one.
It
addresses
a
lot
of
safety
issues
too.
It
addresses
that
a
lot
of
sorry
there's
a
lot
of
cost
efficiencies
attributed
as
well
and,
most
importantly,
with
computer
vision.
H
You
really
see
a
lot
of
insights
where
you
can
take
corrective
actions
near
real
time
and
we're
continuing
to
invest,
and
this
is
kind
of
a
very
vertical
application.
Today
you
solve
it
for
central
hour
12
hours.
You
can
retrain
it
to
monitor
oil
pipelines,
buildings
and
bridges
and
then
so
on.
I
personally
am
very
very
fascinated
about
the
mission
that
we
embarked
on.
H
The
video
streams
coming
near
real
time,
inferencing
on
the
edge
and
then
being
able
to
provide
surface
in
sorry
being
able
to
surface
insights
to
the
person
who
is
really
conducting
this
survey
of
the
cell
tower
or
an
antenna.
H
Now, how can we solve all these things efficiently? There's a term that I would actually like to use when we...
D
H
Every application would have an aspect of machine learning attached to it, but the very interesting difference between the applications that are powered by machine learning and traditional applications is that the machine learning applications are not static. I can't say the release is complete, this is an awesome application, you guys go ahead and use it.
H
The ml-based applications can't be transactional. I can't say I've deployed the application and walk away. I need to provide tools and capabilities which can be used to ensure that these applications stay meaningful over a period of time, and that's very important on one side. On the other hand, you have to be able to distribute the workloads: the training workloads on the cloud and the inferencing workloads on the edge.
H
In simple terms, I call them the pink boxes and the blue boxes, and they all used to be deployed on the cloud. Now, eloquently, we have to separate these pink boxes out to the closest edge, which could be a cdn edge or a 2u box, which would empower you to build applications like drone vertical inspection, applications like factory automation and so on. So we are very heavily invested in operationalizing the capability of the platform, which empowers us to build edge applications seamlessly.
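To make the pink-box/blue-box split concrete, here is a minimal sketch of the pattern being described: train in the cloud, publish the serialized model to an S3-compatible object store, and have the edge site pull it and run inferencing locally. The endpoint, bucket names and credentials are placeholders, and the scikit-learn model stands in for whatever the real workload uses; this illustrates the general workflow, not Verizon's actual Leo implementation.

```python
# Cloud side: train the "blue box" workload and publish the model artifact.
import boto3
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.cloud.example.com",  # placeholder endpoint
    aws_access_key_id="CLOUD_KEY",
    aws_secret_access_key="CLOUD_SECRET",
)
s3.upload_file("model.joblib", "models", "factory-visual-inspect/v1/model.joblib")

# Edge side ("pink box", e.g. a 2U box or CDN edge node): pull the artifact
# and serve low-latency inferencing close to the sensors and cameras.
s3.download_file("models", "factory-visual-inspect/v1/model.joblib", "model.joblib")
edge_model = joblib.load("model.joblib")

def infer(features):
    """Run a single low-latency prediction at the edge."""
    return edge_model.predict([features])[0]
```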
H
Now we are talking about a distributed application, where the same drone inspection, the same factory automation has to be deployed in multiple locations, and in many cases it has to be integrated on the cloud to make it work very seamlessly. It's a fascinating time where the demand for infrastructure is changing and the security posture is changing.
H
It's micro clouds, and these micro clouds have to be connected to the parent cloud, primarily because your application loads are distributed on the edge and on the cloud with seamless interconnect. What you're seeing is a reflection of our view from about a year and a half ago, and today what you're seeing is real. So leo is a glue between various technology infrastructures, platforms and integrations between data, sensors and so on, which will enable and empower us to build different applications like drone inspection, factory automation and digital twin. That has been operationalized for verizon's own good within verizon, and I'm sure we all have our own strategies, but I'm very excited and encouraged to share the success that we are actually starting to see in understanding the needs of the edge platform and ironing out the capabilities that are actually needed on the edge.
H
So what that translates to is that it can be deployed on any edge platform, but, as I was mentioning, it's very important to have a seamless interconnect to the cloud, because the edge is only a portion of your application: a lot of the training needs to happen on the cloud, there could be compliance policies where you have to persist data on the cloud, and this data has to be shipped onto the cloud for various reasons. Most importantly, there is a fascinating approach to building models, called distributed model training, which can be consolidated on the cloud and approached through platforms like leo. Now, at a very high level for us, when you take a closer look at what capabilities we would need on the edge, data management is super important: being able to ingest data, all forms and kinds of data, at high throughput and so on. It should empower us to build end-to-end applications with a ui, very secure and so on. Most importantly, the security posture has changed, because you have a 2u box sitting somewhere, so physical security becomes important and application security becomes important. These things have to be factored in, and this goes beyond leo, but we need to have a strategy to address all aspects of security, and leo does address application security.
H
It's operationalized and we have been very successfully using it within verizon, and interestingly, though it's very early, leo has become the north star edge architecture for verizon media group as we speak. Now, to conclude, we are starting to see a new influx of applications. I call these next generation applications, and each one of them would be powered by ai.
H
How I would like to summarize a lot of the stories and experiences that I have explained: it's going to be very interesting as we move forward, primarily as you start to take a closer look at building ml and ai based applications.
H
We need to have a strategy and partnerships in place where we have control on the edge, and technologies like openshift definitely will put us in a very good situation to have a very controlled and manageable environment, taking into account that it's very distributed too. Most importantly, how are we going to build, test and deploy, and keep the environment very agile so that it's adaptive? So, taking all these things into account, we're very early on.
H
While we bring in what we know, primarily from an experience perspective, in terms of solving problems on the edge and building ml and ai applications for verizon, verizon media and other enterprise customers that we are starting to work with, we are here to learn as part of the ecosystem and become more and more efficient as we continue to build our next generation applications, which I envision would change the human experience and improve efficiencies, and, most importantly, I am excited about improving the security posture and also health and safety too.
B
Next up, we have paul mclaughlin from ericsson research, who was also part of the data science gathering earlier this year; he actually did the keynote for us, and it's quite an interesting talk, melding sustainability, machine learning, augmented reality, vr and 5g.
A
B
So really, one of the focuses paul has is about using ai for good, and I thought that was a great theme for this. So I'll let you queue that up, chris, and we'll move right into that.
A
J
Good afternoon, I'm paul mclaughlin. I'm a research leader and I'm part of ericsson research, based in santa clara, california. Today I'm going to be talking about how ericsson is using ai to help address sustainability and climate change, because we know that climate change is real and having devastating impacts. Humans have caused one degree centigrade of global warming above pre-industrial levels, and nasa and noaa say that 2020 was the second hottest year on record globally.
J
Climate change is causing extreme weather events, which are the most visible effects of climate change, and the frequency of extreme weather, like wildfires, droughts, hurricanes, tornadoes and thunderstorms, is increasing in the united states; in 2019, extreme weather cost 45 billion dollars in the united states alone. This also has pretty important societal impacts, because climate change damages hit low-income americans in the south, and minorities and people of color bear a disproportionate share of the climate change burden.
J
So by 2030, the information and communication technology sector can have a massive impact towards that goal. In 2020, 54 gigatons (a gigaton is a billion tons) of greenhouse gas emissions were produced. So, following the carbon law, to avoid catastrophe emissions needed to have peaked last year; between 2020 and 2030 we need a further 50 percent reduction in greenhouse gas emissions, and the same again for every decade following that until 2050.
J
At the same time, we also have to invest in carbon sinks like forests to help capture some of the carbon we've already emitted. Action is required right now; otherwise, the longer we delay, the bigger and faster the reduction required. Digitalization, though, is an exponential technology which will help us address this target even more quickly.
J
Ericsson research indicates that the ict sector can enable reductions in global greenhouse gas emissions by 15 percent, and this is based on existing ict technology. More opportunities to exceed that 15 percent will likely be enabled by technologies like 5g and machine learning and ai that ericsson is investing in heavily.
J
But the main point is that decarbonization solutions exist today; we don't need to wait for a silver bullet, and the estimated financial benefit of low carbon is 26 billion dollars by 2030, for reference. So we have an incredible opportunity ahead of us. Ericsson is leading the way: we are reducing the emissions and impact of our company's activities, our products and services, and this also has a dramatic impact on society.
J
And so our goal is to be carbon dioxide neutral by 2030, which speaks to our company's impact; this includes fleet vehicles and facilities. Our goal is also for 5g to be 10 times more efficient than 4g, which speaks to the impact of our products, because 30 percent of network opex today comes from energy consumption, and 90 percent of mobile network operator emissions are from network power.
J
We are pursuing leed gold and leed zero carbon certifications, and ninety percent of the materials for that factory will be diverted from landfill. We've installed 1600 solar modules, and we produce over a million kilowatt hours annually, which is enough to power 93 us homes for a year. We have water recapture tanks, so we can capture and reuse rain water, which is enough water for one u.s. home for 133 days.
J
This is an example of how ericsson is actually investing to ensure that our products are sustainable and helping to show how manufacturing can transition towards a low-carbon future.
J
The ict sector has decarbonization solutions that can get us there; they can help lead to a 50 percent emission reduction by 2030. Things like renewable electricity to power networks: the ict sector today is the largest purchaser of renewable power. And mobile network efficiency, where we can see ericsson's leadership role in innovation; but we worry that energy consumption will increase dramatically if 5g is deployed like 3g and 4g were.
J
This allows operators to decouple mobile data traffic growth from energy consumption and carbon emissions. We're also transforming transportation: transportation emissions constitute 60 percent of the global total, or 8.6 gigatons of co2 per year. Commercial transport powered by renewable electricity is critical for decarbonization, and a robust 5g innovation platform will be required for further development of this technology; a fully built out 5g network will be required to operate autonomous vehicles at massive scale.
J
So the challenge is: how do we provide affordable and safe transportation and reduce greenhouse gas emissions? An example solution is that ericsson, a swedish startup called einride and swedish mobile operator telia created an electric and autonomous transportation system that is safer and more sustainable. The impact is that einride says electric vehicles powered by renewables reduce the carbon emissions of a logistics network by up to 90 percent with autonomous, driverless commercial vehicles.
J
We also think the digital divide is a critical component of sustainability as well, because the digital divide is most pronounced in rural and minority communities. Today in the united states, 37 percent of rural students lack adequate connectivity, and this has really critical impacts as schools are closed during the covid-19 pandemic: if you lack connectivity, you cannot attend e-learning. According to deloitte, the digital divide currently costs the united states economy 130 million dollars a day. So, as an example of how ericsson is tackling this problem,
J
public schools delivered google chromebooks that have wireless connectivity, and this happened not in weeks or months but in less than 10 days, and homes in rutland now have wireless speeds well above 100 megabits per second, which enables students to access world-class education and e-learning opportunities. Ericsson is committed to this globally, so we are partnering with unicef to make this possible for students around the world to really bridge that digital divide.
J
The challenge for renewables is to scale up: there are a large number of power generators, multiple solar panels and wind farms, bi-directional energy distribution (power sold to and purchased from the grid) is needed, and we have fluctuations in power generation because renewables can sometimes be unpredictable; there may not be wind one day. The solution to this problem is smart grids.
J
More renewables means the distribution system operators need total control of power distribution networks, and distribution system operators need to respond rapidly to balance power production and load to avoid outages. So the role of 5g is that distribution system operators see digitalization and connectivity as key enablers in the transition to renewable power; distribution system operators recognize that cellular connectivity offers lower capex compared to cabling for grid communications, real-time power system management requires a low latency communication connection, and we can reduce interruptions by up to 75 percent with ict compared to today's level,
J
according to a swedish distribution system operator.
J
Digitalization is also critical for the industrial sector. The industrial sector currently accounts for 32 percent of global greenhouse gas emissions, and the challenge to decarbonizing it is that the industrial sector needs to meet consumer demand while cutting emissions by 50 percent by 2030. So business as usual is not sustainable, and we have to transition from linear to circular business models, which is what we think of as industry 4.0, and the role of connectivity in industrial process
J
optimization is vast. By 2024, 5g will cover 65 percent of the global population and we believe there will be 4.1 billion cellular iot connections, so that ubiquitous connectivity enables real-time measurement and real-time ai of industrial processes on a massive scale. The exponential roadmap shows that up to a 20 percent reduction in annual energy intensity is possible through real-time monitoring of processes, things like ai and energy use, and the ai itself will help us get to continual optimization of processes.
J
So ericsson is using connectivity in our smart factories today, in tallinn, estonia and in the united states, to implement use cases to increase efficiency and reduce our own carbon emissions. We're showing how this can be done today, but the role of connectivity is really critical in enabling this circular economy, because it increases the lifetime of products and enables reuse.
J
So I want to pivot and talk about some of my own research, because I was speaking to you a lot about how ericsson sees tackling this challenge across all the industries we partner with and how connectivity plays a role. The team I work on works on augmented and virtual reality, which are technologies that will help bring full experiences to people, and we are thinking of this as it relates to carbon emissions and sustainability.
J
Well, a lot of travel is incredibly important. It's something I personally love, because I love to have the sense of being in a place, the smell, the taste of the food, the sounds of the environment; but a lot of travel today is to take a tour of a factory, or look at a demo of a product, or shake a person's hand to conclude a business meeting.
J
I get goosebumps every time I see that video. So our vision at ericsson research is that by 2025 we will have advanced technology that will allow people to have full five-sensory immersive experiences across a mobile network, and our vision by 2030 is for people to be able to share things such as memories or thoughts using brain computer interfaces.
J
We also know what types of objects they are and what relationship the end user has with those objects, and this will really enable us to create that full five-sensory content and experience, because once we have that information, we can then generate overlays, and these overlays are critical uses for ar and vr. So here, as an example, is what you might see through your headset when you go to pick up your rental car in the future.
J
In order to place this overlay on top of your rental car, with your return date and the price per day, we have to understand the object, we have to understand the environment, and we have to do this incredibly rapidly, because users can experience what we call virtual reality motion sickness if there's any delay greater than about 40 to 50 milliseconds.
J
This is a challenge, though, because it also requires ai, it requires the mobile network, and it also requires headsets, and xr headsets for ar and vr are evolving rapidly today. Today there aren't any commercially available headsets that have embedded 5g chips inside of them, so that means that headsets and these experiences are not fully mobile yet, if you'll forgive the pun. Without 5g chips, ar and vr headsets cannot push connectivity and data processing over the network unless they're connected to wi-fi.
J
So in that example I just showed you, in the car rental pickup garage, the challenge will really be that without 5g or network connectivity we may not be able to calculate that overlay unless you're connected to wi-fi.
J
So, for example, one millisecond end-to-end latency is the standard for 5g, and that dramatically reduced latency means that headsets can work with real-time data. That means that as objects or the environment change in the end user's field of view, we can track objects and correctly track overlays, so that content and overlays in xr move with the environment and move with the end user. And 20 gigabits per second down speed and 10 gigabits per second up speed means we may not have to compress content or video as much.
J
So not only will you have content that reacts in real time, it will look real as well, because we may not have to compress it as significantly. This will also really help with spatial computing, because it will improve the accuracy and precision of environmental understanding algorithms like simultaneous localization and mapping.
J
We are also really excited about the possibilities of edge computing for spatial computing. Pushing data processing to the edge of the network really will enable rich, immersive experiences that are mobile as well.
J
With edge computing, data travels at the speed of light, so one millisecond means that an edge computing facility can be located upwards of 50 miles from the end user; but we're also thinking about how to make smaller edge facilities that can be located even closer to the end user, which will really help us address that latency challenge for machine learning and ai. So if we can, for example, think about how to distribute where data is processed, that will really help us reach that latency ceiling.
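As a rough check on that 50-mile figure, here is the back-of-envelope propagation budget, assuming the one millisecond is a round trip and that the signal travels through fiber at roughly two-thirds the speed of light; those assumptions are ours, not the speaker's.

```latex
\[
d \;\le\; v_{\text{fiber}} \cdot \frac{t_{\text{RTT}}}{2}
  \;\approx\; \left(\tfrac{2}{3}\cdot 3\times 10^{8}\,\tfrac{\text{m}}{\text{s}}\right)\cdot\frac{1\,\text{ms}}{2}
  \;=\; 100\,\text{km} \;\approx\; 62\ \text{miles}
\]
```

Leaving part of that millisecond for switching and processing brings the practical radius down to roughly the 50 miles mentioned in the talk.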
J
Once 5g radios are inside these headsets, we'll be able to process and experience ar and vr content outside of the home that updates in real time, with that incredible latency and speed from 5g. Once we push processing into the edge of the network as well, we believe we will see longer battery life, because we will probably need fewer chips on the actual headset.
J
So we don't need to have asics that consume quite a lot of battery, and we will see people be able to wear their headsets all day long, like they use their cell phone today. The key piece, and I think the most exciting for me, is around collaboration, because without connectivity, without 5g and frankly without ai as well, people have a really difficult time collaborating.
J
If we wanted to have a business meeting in person or look at a product demo together, it will be a challenge to make sure that we are seeing the same thing at the same time and to interact with it, so we can change things and collaborate together, play games together, watch entertainment together. That's what the latency from 5g and mobile network connectivity will enable: that collaboration. And just to give you a couple of examples, this is the lenovo a3.
J
These are headsets that are commercially available today; we're already starting to see a dramatic change in the physical form factors, and this is an nreal, so we are seeing headsets for ar and vr that are starting to look a lot like the glasses I'm wearing today. Our vision is that the internet of senses is coming, and our vision, as I said, is to have the technology in place by 2025 to enable the full sensory internet and connectivity. And so, as you can see in this image, we may tackle sustainability by removing the need to travel to meet in person.
J
So here we see a person having a business meeting with someone as a hologram, and because of the placement, because of the connectivity and latency from 5g, that hologram is able to travel with the person. You can share a secret and whisper, and you can shake that hologram's hand and feel the weight of their hand.
J
So I really want to thank you for your time and for listening to me. The message I really want to impart to you is that climate change is real, it is critical that we address it, and every day that we wait the problem gets a little bit harder to solve; but solving climate change is something ericsson takes very seriously, and it is a problem that has solutions using existing technology.
B
Well, all right, and I love that, and that means I'm probably gonna have to upgrade my oculus rift yet again to get the internet of senses there and to get vr with sensory things. Mostly I feel sensory deprived right now when I'm in my vr headset.
B
Nausea, because you're flying over stuff, and there's one game, I can't think of it, but that gives me that. But it's really interesting to listen to, because that talk really didn't go deep diving into what the infrastructure was underneath it, or the kubernetes or the openshift.
B
But for me, what's interesting, what keeps running through, is all the ai and ml workloads that are running on openshift, and the thread of how people are leveraging the red hat technologies that we're enabling, so that's really cool. And the next talk that we're queuing up came from the most recent red hat summit, part 2, in june, and isbank, which is out of turkey, did a wonderful talk about enabling gpu usage for machine learning with openshift and also talked about their ceph storage stuff.
B
I wanted to give a huge shout out to them, because they went to massive lengths to record this talk during the covid epidemic and everything else, and I really appreciate that, and I think it might be the first time that they were on stage anywhere at red hat as well, the isbank folks, so really cool. They talk about ai/ml, some big data, data management and analytics, and they had already been doing a lot with ci/cd pipelines and using lots of third-party products.
B
But this talk really talks about how they brought all that together, and I'm not gonna steal their thunder, but I'm going to let you queue it up, chris. Then we'll have one more talk after this, because I think we're running up to our time limit at noon, and then we'll cue up the remainders at a later date. So thanks everybody for hanging in with us today while we figured out this platform and how to use it properly for all this stuff. So thanks again, there you go, chris.
A
K
My name is inar. I am responsible for the container platform at isbank. I will give some brief information about our openshift journey, but first let me give some brief information about isbank. Isbank is the largest private bank in turkey; we have 20 million customers, 1250 branches and approximately 25 000 employees in turkey. For the it department,
K
We installed version 3.11 and integrated this openshift 3.11 with our existing devops toolchain. For the devops toolchain we are using azure devops, xebialabs release and deploy products, sonatype nexus, the elastic stack, our custom in-house build architecture tools named power and genome, and in-house monitoring systems.
K
In january of this year we started studying openshift version 4. We used a bare metal installation with a restricted network in both version 3 and version 4; we are using bare metal servers for ai and machine learning workflows and virtual machines for other workflows. For the openshift version 4 migration, red hat offered a csa engagement; csa means cloud success architect.
K
And at last, this slide shows what we have gained from openshift. Self-service provisioning of compute and storage saved us a lot of time: before openshift it was taking days or weeks to get the required components, but now it takes seconds to deploy all of the application components. Second, it was very easy to integrate with our custom devops tools. Third, our application development speed and deployments increased by 15 to 20 percent. And at last, openshift provided us secure environments by default.
L
So the s3 protocol is an amazon protocol which is kind of a de facto standard nowadays, and not only the public cloud but also on-premise private cloud environments require an s3 endpoint. The second biggest requirement for us, like my colleague mentioned, is that the openshift container platforms utilize openshift container storage for their persistent volume needs, and the third requirement was the multi-site configuration.
L
So all the data, all the objects which are written into ceph, are replicated bidirectionally between the two sites, and also the storage infrastructure needs to be redundant, available and sustainable at all times. You don't have a chance to, you know, put the storage infrastructure down and provide maintenance during that, so ceph being redundant, available and sustainable at all times allows you to do such maintenance jobs. And the other requirement was bucket notification.
L
So, as of today, ceph allows you to use amqp (advanced message queuing protocol), an http endpoint, and kafka for bucket notifications; I will get into the details of that. And auditing: whenever or whoever accesses the objects within the storage environment, you need to audit all these access requests.
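For readers who want to see what the kafka option looks like in practice, here is a minimal sketch using the S3-compatible bucket notification API that Ceph RGW exposes: create a topic whose push endpoint is a kafka broker, then attach a notification configuration to the bucket. The endpoint URL, broker address, credentials, bucket and topic names are all placeholders, not isbank's actual configuration.

```python
import boto3

endpoint = "https://rgw.internal.example.com"  # placeholder RGW endpoint
creds = dict(aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

# 1. Create a topic on the RGW (SNS-compatible API); the push-endpoint
#    attribute tells RGW to deliver events to a kafka broker.
sns = boto3.client("sns", endpoint_url=endpoint, region_name="default", **creds)
topic_arn = sns.create_topic(
    Name="model-output-events",
    Attributes={"push-endpoint": "kafka://kafka.internal.example.com:9092"},
)["TopicArn"]

# 2. Attach a notification configuration to the bucket so that every new
#    object creation publishes an event to that topic.
s3 = boto3.client("s3", endpoint_url=endpoint, **creds)
s3.put_bucket_notification_configuration(
    Bucket="model-outputs",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "notify-on-create",
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```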
L
Last but not least is bucket lifecycle management. You need to tier down or tier up objects within the cluster so that you manage the cost and manage the performance in the required way. So these are the main business requirements for us to put object storage and software-defined storage within the bank.
L
So, just after the business requirements, I would like to summarize the ceph architecture on each site. This is a brief summary of the topology. As my colleague mentioned, we have two data centers and two sites. For the tier four data center we are using three different rooms, and we are replicating the data with a 3x replication factor for each object.
L
Within these rooms, thanks to the crush hierarchy, we have placed all the servers which construct the openshift container storage in a different rack in each room, and you will see all the services which are running on top of those; all the services like mons, managers and mds daemons are running containerized, by the way, and they are all running in docker containers on top of these servers.
L
You will also see the public and cluster networks. Just after we introduced ssd disks into the cluster, the public and cluster network utilization increased very significantly. You need to keep an eye on those, because the cluster network, which distributes the data across all these nodes, is heavily utilized just after introducing ssds. We are using jumbo frames, by the way, which is quite critical for us.
L
The maximum transmission unit (mtu) we use is 9000 as of today, which gives us additional performance benefits. As I mentioned, this is the architecture for each site, and since we have two data centers, we have identical ceph cluster installations on each site.
L
We have two different domain names running just under the f5 load balancers. The red one primarily serves the internal requirements, where the applications that need to access, write or retrieve data within the ceph storage access the cluster from a different namespace, and the green one which you see is for applications which need to access the ceph storage from outside of the bank.
L
As I mentioned, there are two different data centers located in two different cities, within six rooms, 18 commodity servers with 432 osds, excluding the block dbs or ssds which are, you know, working for bluestore, and all rados gateways are containerized.
L
The multi-site, dmz and production rados gateways run on their own, serving two different workload needs, and in total within the cluster we are managing more than 400 million objects within 15 pools and approximately 4 000 placement groups, with three crush rules. With the help of these crush rules, as we discussed, we are using them for bucket lifecycle management, so that you can create custom rules to move objects across different pools in order to get the cost benefit out of it.
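As an illustration of that kind of rule, here is a minimal sketch of an S3 lifecycle configuration against a Ceph RGW endpoint: after 30 days objects under a prefix transition to a colder storage class (which on the RGW side must exist and map to a different placement pool), and after a year they expire. The bucket name, prefix, storage class name and endpoint are placeholder assumptions, not the bank's actual values.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.internal.example.com",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="document-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Filter": {"Prefix": "statements/"},
                "Status": "Enabled",
                # Move objects to a colder storage class after 30 days;
                # "COLD_HDD" is assumed to be defined in the RGW zone placement.
                "Transitions": [{"Days": 30, "StorageClass": "COLD_HDD"}],
                # Remove the objects entirely after one year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```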
L
I would like to give you some brief information about the use cases. We have integrated openshift container storage with openshift container platform, and this container storage is being utilized by eight different openshift clusters.
L
The reason why we are using it this way is that we have many openshift clusters, and if you do not use that and instead use it in internal mode, the hyperconverged mode, you need to maintain and manage, for example, eight different openshift container storage installations. With external mode, you are only using the operator to communicate with the external, outside container storage, and all the persistent volume claims are served by the openshift container storage.
L
The next use case is the notification application. We have a mobile banking application; as my colleague mentioned, approximately 85 percent of the transactions come through this mobile banking application, and whenever a customer gets a new notification that a new document is available, a document like a bank deposit or a credit card deposit, they would like to access that document.
L
They go and get a token from the authentication server, which is red hat sso as the identity provider, and once they get the token they come to the ceph storage, thanks to the secure token service (sts), which has the same name as amazon's, and ceph validates the token offline.
L
The second use case I would like to mention is access management and auditing for the ceph object storage cluster. Any user which has the access key and secret key for the user that owns all the objects underneath is able to access the documents, and this is not a secure way of accessing the documents, so we have integrated a workflow to grant access to users that need to access the objects within the object storage cluster.
L
Whenever a user tries to access a document, they go and get a token from the identity provider, which is red hat sso (the upstream name is keycloak).
L
Keycloak is integrated with the internal active directory of the bank, and if they are part of the right active directory group, they are able to create a new token for that particular need from the realm created within red hat sso. Once they get the token, they go and authenticate with the ceph storage, creating a session with a session policy, a role name and a duration.
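Here is a minimal sketch of that token exchange, using the S3-compatible STS endpoint that Ceph RGW provides: exchange the keycloak/red hat sso OIDC token for temporary credentials scoped by a role and session policy, then use those credentials for the actual object access. The token endpoint, role ARN, policy, bucket and endpoints are illustrative placeholders, not the bank's real configuration.

```python
import json
import boto3
import requests

# 1. Obtain an OIDC access token from the identity provider (red hat sso /
#    keycloak); realm, client and user details are placeholders.
token = requests.post(
    "https://sso.internal.example.com/auth/realms/storage/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "ceph-sts",
        "username": "jane.doe",
        "password": "example-password",
    },
).json()["access_token"]

# 2. Exchange the token for temporary credentials at the RGW STS endpoint,
#    restricted by a session policy and a limited duration.
sts = boto3.client("sts", endpoint_url="https://rgw.internal.example.com")
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::customer-documents/jane.doe/*"],
    }],
}
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam:::role/DocumentReader",
    RoleSessionName="jane.doe-session",
    WebIdentityToken=token,
    DurationSeconds=900,
    Policy=json.dumps(session_policy),
)["Credentials"]

# 3. Use the temporary credentials for the actual object access.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.internal.example.com",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
doc = s3.get_object(Bucket="customer-documents", Key="jane.doe/statement.pdf")
```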
L
The other use case that I would like to mention is monitoring and alerting. Ceph is a really critical part of our devops pipeline, and you need to monitor it: really critical
L
applications are running on top of this, and we need to monitor and create alerts out of this storage cluster. We are using the embedded prometheus engine for that, and we are creating dashboards out of the prometheus data which comes from the prometheus engine, and all the alerting is managed by the prometheus alertmanager. We have integrated the prometheus alertmanager with our internal ticketing system, and all the error-level alerts create a critical ticket for the monitoring team, like whenever a node goes down, slow operations are introduced, or there are any scrub errors.
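One common way to wire alertmanager into an internal ticketing system is a small webhook receiver; here is a hedged sketch of that pattern. The `create_ticket` helper is hypothetical, standing in for whatever API the bank's ticketing system actually exposes, while the payload fields used are the standard alertmanager webhook fields.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def create_ticket(summary: str, description: str, priority: str) -> None:
    """Hypothetical helper: call the internal ticketing system's API here."""
    print(f"[{priority}] {summary}\n{description}")

@app.route("/alertmanager-webhook", methods=["POST"])
def alertmanager_webhook():
    payload = request.get_json(force=True)
    # Alertmanager posts a JSON document with an "alerts" list; each alert
    # carries its labels (alertname, severity, ...) and annotations.
    for alert in payload.get("alerts", []):
        labels = alert.get("labels", {})
        annotations = alert.get("annotations", {})
        if alert.get("status") == "firing" and labels.get("severity") in ("critical", "error"):
            create_ticket(
                summary=f"Ceph alert: {labels.get('alertname', 'unknown')}",
                description=annotations.get("description", ""),
                priority="critical",
            )
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```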
L
We are getting alerts and creating tickets out of that. The final use case that I would like to mention is the artificial intelligence use case. My colleague chala will get into the details, but ceph sits right in the middle of the artificial intelligence pipeline. It starts with collecting all the raw data within the big data platform; just after we have the data within the big data platform, the model inputs that will be introduced to model
L
training are stored in the corresponding bucket within the ceph storage. The training runs on the openshift container platform, and once the model is completed and the model output has been produced, they put the outputs into the corresponding bucket in the ceph object storage cluster. Once the objects have been created, we fire a notification that the new object is there within the corresponding bucket, and you can keep going with the next stop in the devops pipeline.
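To show how that final notification can drive the next step of the pipeline, here is a small sketch of a consumer reading the bucket-notification events from kafka and kicking off the follow-on stage. The topic name, broker address and the `start_next_stage` helper are hypothetical; the event structure follows the S3-style records that Ceph RGW publishes.

```python
import json
from kafka import KafkaConsumer  # kafka-python

def start_next_stage(bucket: str, key: str) -> None:
    """Hypothetical hook: trigger the next pipeline step (e.g. a batch scoring
    or deployment job) for the newly written model output."""
    print(f"new model output: s3://{bucket}/{key}")

consumer = KafkaConsumer(
    "model-output-events",                         # placeholder topic
    bootstrap_servers="kafka.internal.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each notification contains S3-style records describing the object event.
    for record in message.value.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if "ObjectCreated" in record.get("eventName", ""):
            start_next_stage(bucket, key)
```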
M
I'm chala, and I'm working as the ai architecture and development chapter lead at isbank. My colleague presented our openshift and ceph infrastructure in detail, and now I will talk about how we run ai workflows on these platforms.
M
First, I want to start by briefly describing our ai application development life cycle.
M
First, we start with a business analysis, as usual, and after the business analysis is finished, we prepare the relevant data to use in that business case, and the prepared data is used in model development. It's an iterative process: the model is developed, experimented with, changed and re-experimented, and when the model is ready and the performance is as expected, it is deployed to the target environment. Then we start monitoring our model's performance, and if there's a need, we re-analyze it and make changes to our model.
M
After preparing the data, we put the data into the openshift cluster and we start processing the data and developing the model. In processing the data, we detect feature types, impute missing values, and encode and scale the features. Then we choose the best performing algorithm.
M
The deployment starts with a pilot phase, or there may be some a/b testing, and when the model is in its final state it is used in production. Then, in the final stage, we monitor the performance of the deployed model and make changes to the model if needed.
M
And if I go into the architecture on which we run these ai workloads, there are two pipelines: first the data pipeline and then the model pipeline. In this slide I will tell you about which technologies and platforms we use, and in the next slide I will go into the details.
M
In the data pipeline, we collect data from kafka and also from our data warehouse, and we aggregate this data in our big data hadoop cluster.
M
To go into the details of our ai architecture: in kafka we have banking events, so we collect banking events from kafka and store them in our big data cluster, and we also collect our core banking data from our data warehouse into the big data cluster. We process this data and prepare the master data to be used in our machine learning applications, and when the data is ready in the hadoop cluster, we export it to our ceph object storage as model input.
M
Our predictions may be batch or real-time predictions, depending on the use case. Batch predictions produce model outputs as a file, and we put this file also in ceph object storage; if we have a real-time prediction use case, we expose rest apis which are used by our banking applications. As a result of training, we have a model file, and it's serialized and stored also in our object storage. In our openshift cluster we also run the jupyter notebooks used by our data science team.
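For the real-time case, here is a minimal sketch of what such a rest api can look like: at startup the service pulls the serialized model file from the ceph bucket, and each request returns a prediction. The endpoint, bucket, object key and input format are placeholders, and joblib/Flask stand in for whatever serialization and framework the team actually uses.

```python
import boto3
import joblib
from flask import Flask, request, jsonify

# Pull the serialized model from the ceph object storage bucket at startup.
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.internal.example.com",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.download_file("model-registry", "next-product/v3/model.joblib", "model.joblib")
model = joblib.load("model.joblib")

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [..numeric feature vector..]}.
    features = request.get_json(force=True)["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```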
M
We also have some management uis in the openshift cluster, which are used to handle and set the parameters for the models in mongodb.
M
We have model metadata, and we also keep a registry of our models as well as tracking of the experiments. Finally, our kafka platform is also a part of our openshift infrastructure, and for internal messaging between pods we use kafka.
M
We have 30 servers with more than 50 gpus, we have a cpu farm of more than 30 000 vcores, and we have more than 50 terabytes of memory. As storage, we use 50 terabytes of ceph storage at the moment and more than 30 terabytes of object storage. We have batch applications, apis and uis, and we have jupyter notebooks, more than 30 pods running concurrently, and 10 automl pods are also running on gpus in the openshift cluster.
M
I want to finish my part by giving some example applications that we develop in our ai team. We have pricing applications, including retail loan, term deposit and fixed pricing, and we have a next product to buy application.
B
So it's muted and re-muted, and I've undone that, but you know, kareem and everybody, yes, thanks.
B
I muted myself there, but I really love the isbank story a lot, because it's a huge ceph deployment, but also because of how they threaded in their ai story and their ai workloads, and how they're making it all work, taking some of their legacy stuff over and just making it work. It's a testament to their persistence and to the ability of openshift to take on a variety of tasks, from ci/cd to the workloads that they're running.
B
Looking at this next talk, which I also loved; I love a lot of talks, so I'm very biased towards end user talks, especially those that advocate for change, like the sustainability one from ericsson. But this next one is from the southern coalition for social justice, and with all the stuff that's going on in the world these days, it's wonderful to see the collaboration that happened between some red hatters in raleigh, north carolina. Clarence clayton and christopher tate are part of this presentation, along with tyler wittenberg, who's
B
the chief counsel for justice reform for the southern coalition for social justice. They're a non-profit based in durham, north carolina, and red hat's been working with them to see how they could help facilitate making a greater impact on some work
B
they do around racial equity report cards. They leveraged a little bit of red hat ansible automation and openshift and really saved a lot of time and energy, so that they could focus not on the technical aspects but on getting stuff done and making a difference and making a change. So without any further ado.
D
I
Hello, everyone, and welcome to this session, entitled using open source and open data to address educational disparities. We look forward to sharing more about this wonderful work and partnership during our time together today. My name is clarence clayton. I manage the data privacy team at red hat and have been with the company since 2013. I'm also honored to serve as the chair of the build community, which stands for blacks united in leadership and diversity, and it's in that capacity that I am with you today; build is one of red hat's diversity and inclusion communities.
I
Now, with that in mind, the story of today's session really began in may of 2020. The death of george floyd and the resulting protests and demonstrations hit very close to home, for me personally as well as for red hat as a company. There were protests in downtown raleigh right outside red hat's headquarters, and everything that was happening compelled our company to take action. You'll see here a statement from our ceo paul cormier, letting it be known that red hat stood in solidarity with the black community in the fight for social justice.
I
The social innovation program connects the talent, skills and expertise of red hatters to causes that matter to them and allows them to make a difference in the world outside of red hat. So I thought it was a perfect opportunity to connect her with tyler and ryan, so we met and quickly identified some technical challenges and inefficiencies that the southern coalition was facing, and we thought that red hat could help address them. Alexandra then brought in kevin ritter and christopher tate, who you'll meet in a few moments, to get that work underway.
N
Thank you for that introduction, clarence. Yes, my name is tyler wittenberg. I work with the southern coalition for social justice, which partners with communities of color and economically disadvantaged communities throughout the south to defend and advance their political, social and economic rights. We do this primarily through what we call community or movement lawyering, where we provide legal and policy analysis, communication support and strategic research, as well as support for organizing efforts, and one big issue that we work on is the school-to-prison pipeline.
N
So what is the school-to-prison pipeline? The school-to-prison pipeline is really a web that consists of policies, practices and a systemic lack of investment in schools, and certain practices that we know particularly impact students of color. Part of that has to do with lack of investment in things that we know support students academically.
N
So we look at academic achievement. We also look at the use of exclusionary discipline, that being suspensions and expulsions, because we know that students who are suspended or expelled are more likely to enter the justice system, and then we also look at the direct funneling of youth into the justice system, which is school-based referrals to law enforcement.
N
We identify disparities within all these areas, and we do so by county, using the racial equity report cards. These report cards are important because they really give a temperature check on what the school-to-prison pipeline looks like in any particular school district in north carolina, and there are 115 of them, so it's a lot of work to put these together.
N
There is a lot of data that is inputted one at a time, which is why it used to take us three months, a few attorneys and a lot of interns to get this done, and we were also kind of static in the process, because we were not able to really maneuver if there were any revisions that needed to be done. So with that I'll pass it to christopher tate, so he can explain exactly what technology you all provided.
O
Now here I get access to everything. Let's go to the state of north carolina, because that's where we have data available. So this is the state of north carolina, and you'll see that it's related to many different school districts or agencies. Here, let's go to alexander county, for example. So this is the record for alexander county, and you'll see that there are two report cards available for alexander county, the 2018 school year and the 2019 school year. Let's go to the 2018 school year.
O
For this county, scroll down to the graph and you'll see that the percentages in all the groups are very even. We can figure out solutions: where one county is doing really well, what can we do in other counties that can make a difference? Tyler, how has the new site helped your team achieve its goals for the racial equity report cards?
N
It went from being a three-month project to a three-day project. Now it takes about three of us, no interns, three days, which means we're able to have more partnerships with community members. It means that the racial equity report cards themselves no longer become this larger burden that ends up being its own project; now it really is a tool to advance the work as we work with our community in various ways. Also, sometimes we misspell stuff, sometimes we get one data point wrong.
N
Sometimes the data is updated and changed, and we have to be able to react to that, because we're posting this information. As you know, it is publicly available data, but we're posting our analysis, and we want everyone to know that we are responsive to it.
N
We are accountable to it, so now, when there's a change of any kind, we're able to either go in there ourselves and change it in real time, right then and there, or simply reach out to chris and get either advice on how to make the change or support in changing it right away. It makes us far more responsive than we were prior to this relationship with red hat, and the timing could not have been better.
N
It is good to hear the story of how red hat came to this work in responding to the uprising around the murder of george floyd. We were also at a time where we needed to be extremely available to our community
N
while we also had the obligations of the racial equity report cards, so we were able to be just as responsive as we needed to be, while also doing what we said we were going to do and getting these reports out, and doing so actually in a much more timely manner than we did last time. So I speak for all of the southern coalition for social justice in saying we are immensely appreciative of the support from red hat, and we look forward to this collaboration continuing.
O
Strengthening our children, our families and our communities is the most important work we can do. This work with scsj shows that open collaboration to create a shared solution, leveraging each other's expertise, can solve a common problem. Red hat will continue to support scsj through technology so that they can continue to make a difference in the world.
O
D
B
All right, well, thank you, chris, for producing today and working through all the kinks. One of the things, especially with this last talk,
B
is that I was really appreciative of the work that tyler and chris and clarence had done to make this happen, and these are the kinds of stories that really make us thrilled to be part of these collaborations. We're immensely happy to be part of it, as well as for all the work that the folks at the southern coalition for social justice are doing, and we look forward to doing more collaboration with you.
B
You get to see the immense variety of the work that people are doing that is leveraging red hat technologies, and not just openshift but ceph and all kinds of other ansible pieces and parts of our different product suites. So we really love that everybody has stepped up today and shared their stories with us and allowed us to share them with you, and we look forward to doing it again sometime soon. So thanks again to chris and to bobby kessler and the other folks at openshift.tv for producing this session.