From YouTube: OpenShift Coffee Break: Red Hat Hackfest Office Hours
Description
Get your espresso ready for the EMEA OpenShift Coffee Break as we kick off the Red Hat Hackfest Office Hours! Join the monthly meeting with experts from Red Hat and Red Hat partners around Edge, RHEL and OpenShift, as part of the Red Hat Hackfest.
A
Hey, welcome back to the OpenShift TV Coffee Break after our summer break. We are back here, live on OpenShift TV, every Wednesday morning, 10 a.m. CEST here in EMEA. Today I'm very happy to have our special guests on the show for the Red Hat Hackfest Office Hour. There are our, you know, permanent DJs, like Andrea and Mattia. Andrea, Mattia, are you there?
B
Yeah, sure. I came to the screen, and maybe I'm dragging in Chris, the other guest. So Chris is a colleague of mine, a team colleague from my region; he's the best principal solutions architect that you can find around the world, and we are really working together to enable the customer. Today we're going to show one of the approaches we were using at the customer for enablement, so they can use the power of the OpenShift platform in a nice way.
A
Great, great. And Andrea, do you have some words like those for Michele?
D
First of all, I think it's our fourth session today, and I wanted to invite Michele because Michele is one of the esteemed members of the global team that covers the validated patterns for edge, and this is a fantastic opportunity to start our collaboration. So whatever the Hackfest team produces is validated and then promoted at the global level through Michele's team. So Michele, if you would like to say a few words about yourself, the stage is yours.
E
Thanks. Yeah, I'm Michele, I live in northern Italy. I've been in engineering for seven years now, and at Red Hat for 11, and yeah, these days I've been working on these validated patterns, and we'll talk about that in a little bit, I think. Thanks.
A
Great, thank you for the introduction. As Andrea said, this is the fourth Red Hat Hackfest Office Hour, but it's worth it, because maybe people are coming back from the holidays, or maybe people have just joined from Twitch or YouTube; it's worth reminding people what the Red Hat Hackfest Office Hour is. And I want to say to all the people: if you have any question about the topics we talk about today, and it's all about OpenShift, edge computing, open source and streaming, please send your questions to the chat.
A
Feel free to ask questions; we will bring them to the audience, let's make it interactive. But Andrea, what is the Red Hat Hackfest Office Hour?
D
Thanks, Natalie. So the Hackfest Office Hour is a kind offering from our friends of the OpenShift Coffee Break, hosted by OpenShift TV, to spread news around what the Hackfest technical team and the community are working on. So the Red Hat Hackfest is an event; you know, there is a community behind the Red Hat Hackfest event.
A
Thanks, Andrea, for the quick introduction and the context for the people that might join, you know, later in this Hackfest. Mattia, what is the topic of today?
D
So we wanted to... we have three topics, as usual, for this Hackfest Office Hour session: a quick overview of what we are doing, and actually we usually run a call to action for all the people who could be interested in collaborating or contributing to our work. Then Chris will talk to us about the lightning talks, giving us an overview of what they are and how to use them, and then we'll have Michele giving us a technical overview of what the validated patterns are and what the value is of collaborating with them, or using them, adopting them for a customer project, for example. The call to action for today is quite interesting: in collaboration with Microsoft, potentially, and Intel and other technology vendors, we are trying...
D
We are thinking of an FSI edge-related business case, so we're starting the implementation of a potential FSI prototype, a financial services prototype that could be used in several business cases. That's going to include, as usual, all the edge-related portfolio from Red Hat, but also cloud, and also hardware with medium and small footprint. So we will, as usual, adopt RHEL for Edge, but we are also discussing the adoption of MicroShift. We will work on Single Node OpenShift and several technologies that sit on top of our extended platform. An additional discussion we are having today is in the community, and by the way, thanks to the links Natalie shared with you, you can go to the Hackfest project website.
D
It is actually the community version of our Red Hat Hackfest event: you can connect and join our Slack channel for free, and also join our weekly community meetings that happen every Thursday at 4 p.m. Central European Summer Time. Another potential adoption is related to the edge management software as a service from Red Hat, which is something a lot of customers and partners are asking for; with the edge management, which we'll probably discuss in the next session, we have a great software-as-a-service tool to manage patches and updates on our remote RHEL for Edge instances. That's something that is really interesting, a no-brainer for each and every technical, DevOps or operations manager who is keen to understand the technology. So I kindly encourage all of you to join our community meetings.
C
Perfect. So, as Andrea mentioned, we are going to talk about more of a soft topic today, not really a deep technical topic: we are going to talk about why lightning talks can be a really good enablement format, and how we have used them in a very concrete scenario, with an organization we've been working with, approximately two years ago. Next slide, Mattia.
C
Approximately two years ago, when working with a really large organization, we faced a pretty typical situation: OpenShift as a container platform had been established, so it was in place; the first set of applications had been onboarded onto the platform and was running in production; and then the organization decided they wanted to leverage and utilize the benefits of containers and Kubernetes for a far broader set of their applications.

So we are talking about an organization with hundreds of applications, with thousands of software developers, and with a certain complexity associated: it's a global organization, globally distributed, people are sitting in different geos and different time zones, speaking different languages, and there is also a very heterogeneous set of technologies used within the company, from really legacy to top-notch, modern cloud-native applications. You would almost find each and every technology stack and programming language in that company. And so we were discussing how to enable people, and especially developers in that organization, to get started with containers and with Kubernetes.
C
Even if people are coming from a completely different background. We had been discussing different formats and things we could do next to the usual trainings, and we chose to do a series of lightning talks, with which we basically wanted to achieve a certain set of objectives. In the end, it should be a series of enablement for the people, so technical enablement on technical topics, to educate them and also to generate awareness about new topics: how other people within the organization have solved certain issues, and with that, to also foster and drive collaboration within the organization, because oftentimes you see in these big organizations that a lot of different teams are solving the same issue in slightly different ways, and it's usually just a matter of knowing about each other, a matter of communication.
C
Besides that, we were also trying to gather feedback from the field, from the people: what do they need? Where do they stand? What information do they need? What are the problems they have? So, really, feedback from the developers and application teams on what they are interested in. We consolidated that feedback and fed it back to the OpenShift platform team, the team which is engineering and operating the OpenShift environment, and like this we could really influence the roadmap of the features delivered at the platform level, depending on the needs of the actual customers.
C
So it's not just an enablement thing, but also trying to introduce a feedback loop between consumers and producers of such new technologies and platforms. And in the end, and with large organizations it's pretty much always the same, sometimes it's hard to get started with new stuff; there are a lot of boundaries, and we wanted to prove that it can really be easy, and also quick, to get started with new stuff.

So we formed a small kind of steering committee, consisting of people from Red Hat as well as from the organization we were working with, and this was basically the group overseeing the topics we were presenting, as well as the content. We basically had ways for people to formulate requests or propose topics we should take up in the lightning talks, and then we were discussing them and selecting topics based on the number of requests.
C
We were selecting topics and then preparing the respective session, always from the Red Hat side, so we were always providing parts of the session, but always contextualized to the business specifics of that organization, because out of the box it sometimes just doesn't work in each and every organization, right? Therefore, we always made sure we had it working in the associated context of the organization. And then, basically, it was up to the organization to internally communicate about these sessions and lightning talks, so to promote them and make sure we had the right audience in the sessions; and then we also made these things available in internal learning centers, as replays and everything, so that people who joined the party later could still make use of all the materials we produced throughout these lightning talk sessions. As for the concrete format of such a session and how we shaped it: Mattia, can you take us through that?
B
Yes, thanks Chris. So the format, I think, was the key to these lightning talks; we divided them into three parts. The first part was the introduction of the session and an overview of what we were going to show and how you could technically achieve it, for instance talking about a design pattern, how you architect it, how you can solve that problem. The second part is really showing the demo of how you can implement such a pattern or architecture or integration with the specific ecosystem of the customer, on the current platform. Last but not least, of course, the Q&A, where they ask questions about the solution or the architecture that we propose; and then the last piece is a survey, because we want to gather feedback about how the talk went and how we can improve further around this lightning talk series.
B
So, just to give you an example of which talks were presented at the customer: as you can see, the first topic is about really basic functionality, because you're starting with this platform, so you're going to start from the basics. If you understand the basics, then it will be easier in the subsequent talks to understand the technology and how you can work with the technology itself. So: a basic introduction to containers, Kubernetes and the twelve-factor app, and then a deep dive into Java in containers.
B
The reason to start with Java was that, on the customer side, most of the applications were in Java; but this was not strictly related to Java applications. And then, a hands-on demo of building and deploying, touching the components across this flow, like container registries; the pipelines, which type of pipeline you can use; and then, of course, how you can deploy. Going even further on the platform, you start by just applying manifests, with simple management, and then you want to really scale with Helm.
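To make the "Java in containers" build step above concrete, here is a minimal, hypothetical multi-stage Containerfile sketch. The base images and paths are illustrative assumptions, not the customer's actual setup; the script only writes the file locally so it can be inspected without a build environment.

```shell
# Illustrative only: write a minimal multi-stage Containerfile for a Java app
# into a scratch directory (nothing is built or pushed here).
mkdir -p java-container-demo
cat > java-container-demo/Containerfile <<'EOF'
# Build stage: compile the application with Maven inside the builder image
FROM registry.access.redhat.com/ubi9/openjdk-17 AS build
COPY . /app
WORKDIR /app
RUN mvn -q package

# Runtime stage: copy only the built jar into a slimmer runtime-only image
FROM registry.access.redhat.com/ubi9/openjdk-17-runtime
COPY --from=build /app/target/app.jar /deployments/app.jar
CMD ["java", "-jar", "/deployments/app.jar"]
EOF
cat java-container-demo/Containerfile
```

A pipeline (Tekton, Jenkins, or similar) would then build this Containerfile, push the image to a container registry, and deploy it, which is the flow touched on in the talk.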
B
We like to create a real package for Kubernetes, and then we go even further, with complex topics like how you can start moving from a legacy application based on a monolithic approach to microservices: what's the approach, what tools you can use to migrate. And then you are on the platform itself, and then: how do you make sure your application is healthy in production, how can you protect it with disaster recovery, and then, even further, difficult topics like distributed transactions, because now you are not on a monolith anymore.
B
And of course you have several thousand services, and then you want to have zero trust between these services, so you introduce certificate management on top of the platform: how can you achieve that, right? So you prepare these lightning talks to enable the developers on several topics, from basic to advanced, and then you prepare examples they can start working on, and make it reproducible. And just talking about numbers: the people who attended these sessions were around 150 on average, and offline there were even more, so it's very nice to see how this grew in viewership, and to see how it helped people to use the platform and work in a better way. So that's the storyline: you start from scratch, and then you increasingly leverage the platform, and the talks also helped them to integrate this platform with the existing ecosystem of the customer, because the ecosystem of the customer can be wide and broad: it could be their own monitoring solution, their own logging solution, how to expose an application outside the platform with their own load balancer solution, as well as their PKI infrastructure (which CA they use, how they can integrate it, how they can manage secrets). So it's quite broad.
B
If you go to the customer side, it's not just about installing the platform; it's about how you integrate the platform itself. So, some considerations from running these lightning talks: you know, it's a process you learn along the way, and when you start, you learn as well from the feedback. So, the flow we chose, from basic to advanced: it's better to have a specific session for a specific topic. The audience is also important, because you need to identify which audience will be attending the meeting; if you go into a deep dive, but the session is attended by management, or people that are not really working day by day on the code, then maybe it doesn't make any sense.
B
And of course, if you do such a series, you need to promote it, because if you don't, the audience will be low and then it will not work. And what I think is the most important thing: showcase functionality that is currently available on the platform itself, because if you prepare the demo, you prepare the lightning talk, and then the developer or the application owner cannot use it, it is kind of useless. Because yes, it's cool, really fancy, I love it; but then, what can I do, right? And of course, one important topic you want to cover when you work with microservices, or on a container platform: you want to provide a simple way to transform your application and keep it always up to date. So you want to show the migration process, and how you can implement new greenfield applications as well, applications with third-party providers, and so on and so forth.
A
No, we don't have a question from the chat yet; if you have any questions about what Chris, Andrea and Mattia said, please send them in the chat. It looks very interesting to me. I'm interested in the modernization part: you mentioned the Java modernization. For that part, are you using any framework, any tool, or is it just following the patterns?
B
Well, we are leveraging the Konveyor Tackle tool to create an assessment of the applications, wherever they are, so as to define their status: whether an application is ready to be containerized or not, whether it already follows, for instance, the right patterns, or is in a completely unknown state. And based on this, we can then define how difficult it is, or how long it takes, to transform the application and move it onto the platform itself.
C
And then we were also certainly talking about the patterns themselves to get from a monolith to microservices, so how to strangle a monolith, and what approaches you have there. All these conceptual things are equally important, of course.
A
Thank you. And I also like the lightning talk format; Chris, we've been together in some of those, and I think they are very effective and successful, and I'm happy to see the same approach here as well. This is my opinion, of course, but I think it's a very good format for, you know, managing some of these topics. Yeah, those are the questions that I had.

I don't know what we have in the flow now. Michele, do you want to add something on that, or do you have any presentation or something?
E
All right, then I'll get started. I'll talk a bit about validated patterns: what they are and why they can be useful.
E
So, from a very high-level point of view, validated patterns are really the next level of reference architectures, in a way. We have a proven design that has already been implemented somewhere, at some customer site. We take that and we make it into a pattern by making sure it's fully deployed via GitOps, so we try to be as declarative as possible in the whole pattern. We do have some ways to run imperative bits, with Ansible mainly, but the key here is really the whole GitOps approach.
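The declarative GitOps idea described here can be sketched with an Argo CD Application manifest that points a cluster at a pattern's Git repository. The repo URL and paths below are placeholders, not the actual validated-patterns layout; the script only writes the manifest so it can be inspected.

```shell
# Illustrative Argo CD Application: the cluster continuously reconciles
# itself against whatever is committed in the Git repository.
cat > application.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-pattern-app
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-pattern-fork   # placeholder fork
    targetRevision: main
    path: charts/all/config-demo                          # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: config-demo
  syncPolicy:
    automated:
      selfHeal: true   # drift in the cluster is reverted back to Git state
      prune: true      # resources removed from Git are removed from the cluster
EOF
echo "wrote application.yaml"
```

Applying a manifest like this with oc on a cluster running OpenShift GitOps is what makes the pattern "fully deployed via GitOps": the desired state lives in Git, and the controller keeps the cluster matching it.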
E
At this point in time, we're mainly doing CI on released stuff. So when there's a new OpenShift version, we test with that version; when a new operator version is officially released, we test with those versions, and the bonus is that we do the testing as a whole: all the operators and the OCP versions are tested as one unit, with the whole pattern. Our next step is that we're striving to do CI on unreleased stuff as well, so the next version of an upcoming operator.
E
We test it to make sure that it keeps working no matter what cloud you're using. The other benefit is that it's open for collaboration. It's also a benefit because it's a lot faster to go to production: you start a PoC from a pattern, and you are already a long way ahead compared to starting from scratch and re-implementing a bunch of designs, architectures, or ways to communicate across different components.
E
So yeah, too long, didn't read: they're a natural progression from the somewhat different, earlier reference architectures; we built on top of them; for the most part we're focused on GitOps; and we really only do stuff that exists out in the real world.
E
Yeah, the other benefit is that it's possible to show customers and folks what is possible. The way we see validated patterns is that they are a bunch of Lego bricks, and you can build things; we just give you a way to make things simpler, to start off, to be faster, to play with stuff and try things out. The feedback that we get back from people, from customers, then flows back into the pattern, and then we improve on it and change it over time to make it better. As I said, we test it in CI, so on a daily basis we see what breaks and what doesn't, and we provide any feedback that we get back to the teams that are in charge of the different components. We make sure that it's in CI and gets tested, and then we provide some feedback to the architects that helped build this architecture, and we sort of work in this loop.
D
A question, please, as we are talking about life cycle, and you mentioned architecture here. I would love to underline to our audience that reference architectures are of course based on our platform, but they use different products from our portfolio and from third parties, like we are doing together with the Hackfest PoCs. But what about maintaining these validated patterns? An example jumped into my mind, and that's actually the reason why I'm asking this question.
D
We worked one year ago on a manufacturing use case that has been formalized in a white paper, which is under validation thanks to you and your team. But we started working on that with OpenShift (Mattia, correct me if I'm wrong, that was OpenShift 4.8 or 4.9), which means we had to use specific operator versions and specific functionalities from the OpenShift platform. What if I would like to have an updated version of that, once it's been adopted?
E
So what we do is we try it with the newer versions, say 4.11, which just came out and changed a few things with tokens and service accounts; so things did break, a couple of things did break, and what happens then depends a bit on how much change is necessary.
E
We look at it: this is the issue, these are the ways to solve the problem, how do we go about it, and so on. So far we were lucky (of course, now I've jinxed it, and the next OCP version will bring in more fun), but yeah, for the most part, I would recap it as: for the small bits, we'll take care of the minor nits to just get it working; but if an architectural change is needed, for whatever reason, then we would have to sit down and have a longer discussion as to how to go about it.
E
So far we have three validated patterns. We have a bunch of community patterns that are in our upstream repos, which I will share at the end; those might progress to the status of validated patterns once they fit certain requirements. So far we have Industrial Edge, which is a distributed architecture (maybe it's the one you were mentioning before): it does monitoring of a production line, it's distributed across clusters, it has an AI model to detect anomalies and report back, it has some visualizations, and it's all running with GitOps.
E
Medical Diagnosis is another pattern: it uploads images that get scanned by an AI/ML model, which looks for certain patterns to help doctors guide diagnosis, to detect certain anomalies or things like that. It's based on uploading a number of images to the cloud, which will then work on them, do the analysis and the scanning, and report back. And then we have Multicloud GitOps, which is a pattern that showcases a few things; I'll talk a bit more about this one today and I'll show how to deploy it. It's basically how to run GitOps across multiple clusters or multiple clouds.
E
So I'll go a bit more in depth on this Multicloud GitOps pattern. The components that make up this pattern: we have one hub cluster and multiple regional clusters, satellite clusters. In the hub cluster we have HashiCorp Vault, which acts as storage for secrets. We configure it on the hub in the pattern, and we load the secrets into the Vault out of band; I'll show you a bit how it's done. We do it with Ansible; it can be done in a number of ways, but we do it out of band because we want to make sure that no secrets end up in Git, which is a large problem.
E
These days we use the External Secrets Operator, which basically talks to Vault, reads from Vault, and creates Kubernetes secrets on the hub. The reason we chose the External Secrets Operator is to be a little bit independent of the secret storage: whichever secret storage the External Secrets Operator supports, we can support fairly easily, with little effort. So the External Secrets Operator reads Vault, generates the corresponding secrets inside the hub, and then ACM takes those secrets on the hub and pushes them across the regional clusters, which can then make use of them.
E
We do require ACM 2.5 here, because the way secrets are pushed around is secure thanks to a feature that only showed up in 2.5. This is, very roughly, the high-level diagram of how this pattern is put together. On the left you have the hub; the installation happens with a command, or you can also use an operator that we wrote to deploy a pattern in a simpler way.
E
The simple approach, which I will show, is to run make install from your laptop or your machine. This will talk to the hub cluster, make sure that the GitOps operator is installed, and point it to the Git repo of your pattern, of your fork. It will set up the External Secrets Operator, which talks to the Vault, and then the automated installation, the make install you ran, waits for the Vault namespace; once that is set up and everything is working correctly, it will start loading the secrets into the Vault out of band. Then, as soon as you're ready to join clusters, you use ACM to import your remote cluster, and as soon as that happens, and it has the correct labels on it, ACM will make sure that OpenShift GitOps is installed on the regional cluster. At that point Argo and GitOps will take care of deploying all the applications that you assign to that regional cluster, and ACM will also copy the secrets that you want into the remote namespace on the regional cluster.
E
The repository structure of a pattern is described a little bit here. You have a common folder, where we have a bunch of Helm charts and a couple of scripts that are common across all of our patterns; then, in the charts folder, you have all the Helm charts that you're going to use in your different clusters, as I'll show you later. In the values files, you have a values-global which gets applied to all clusters, be it the hub or the regional clusters, and then you have the other values files, one for the hub and one for the regional clusters, and you can split up the imported clusters into multiple regions. Here we just did region one, a single region. Every values file will have the list of the applications that need to be installed, and the Makefile is the entry point: make install is the command that is used to deploy such a pattern.
E
So, just to recap: on the hub we have GitOps, ACM, the Vault and the External Secrets Operator. Here in this setup I have the config-demo application, which is just a small web server that will show you the secret that you loaded into the Vault, and it runs both on the hub and on the regional cluster, with the same code. The regional cluster you import into ACM, and it basically just has the GitOps instances that make sure that the applications for that cluster are installed. All right, now we'll quickly go through a demo; let's see if the demo gods are on my side. So I cloned... can you see? Is it big enough? Should I make it bigger? I made a fork of the patterns repo that is in our hybrid cloud patterns GitHub org, and I pushed it into my private GitHub, so that's the origin.
E
There are really two things to do: make sure that you have a valid kubeconfig file, and then... there is a template here for the secrets; you take this, you copy it into your home directory, choose the secret that you want for this app, and just save the file. I'll show you the files, maybe. So this is the file that defines what runs on the hub cluster, and you list the namespaces and the subscriptions, where you can define the versions that you want.
E
These imperative bits run with a specific interval, 10 minutes by default, which can be changed, and they're just Ansible playbooks that get run in a container inside the cluster, and you can do whatever you want with them. So once you've copied the secrets template and edited it, it's really just a make install away: a couple of checks will be run, making sure that your repo exists with the branch that you're on, and then it just starts installing the entry point and takes care of everything else.
E
This now is a bit of Ansible that waits for the Vault to be installed and configured. While this runs, I prepared another cluster where I can show you what's going on. The little silly demo app is called config-demo, and it has a single route; it's just a web server that shows the secret once you click on it. If you go on the hub, it's config-demo, and this is the secret that I configured before: test one two three. If you change the secret, it will obviously be reflected on the hub and, later on, on the regional cluster; I think the sync is every few minutes. And all the applications are managed via Argo.

It's taking some time... there it is. So here you can see which apps are being configured for the hub and in what state they are. The same goes for the regional cluster, the one that was imported before via ACM: it has the config-demo app, and if we open it, it will tell us that this is mcg1, as opposed to the hub, and it has the same secret that was synced over with ACM. This one here also has an Argo instance, which will only have the config-demo app, because the regional cluster in this case has a lot less stuff.
A
We have a place where we put all the slides after the session; if you want to give me the link, I can add it there, or we make it public, but we can do it that way. All the links I already shared in the chat. Basically, so yeah, that's great.
D
Yeah, we also create a blog post on our community website with the content from the Hackfest.
E
And yeah, that's really all I had. We're very open to any form of participation and collaboration, so the links will be provided; feel free to get in touch.
A
Demo,
because
there
are,
there
are
lots
of
components
involved,
for
this
is
the
music
cloud.
You
know
key
tops
validate
and
pattern.
There
are
many
pieces,
I
think
it's
very.
It
was
a
powerful
demo.
Thank
you
give
an
idea
of
how
you
can
manage
those.
You
know
complex
use
case,
but
at
ease
with
a
pattern,
so
there's
a
there's
a
way
to
make
it
reproducible
and
make
it
stable,
and
I
think
this
is
the
overall
idea
around
validated
pattern.
A
I
don't
have
additional
question:
do
you
folks,
do
you
have
any
questions
for
michelle.
D
No. Actually, Michele, they are implementing these patterns in our region, which is the one that Chris, Mattia and myself belong to as well. I wanted to ask you, actually, Mattia: is there already a current adoption of these patterns in a project for any of your customers (without mentioning the customers, of course) that you're already working on?
B
Yeah, let's say this pattern is coming up on the customer side, with ACM to manage multiple clusters, because customers want to have the functionality of this platform worldwide. It's a distributed customer, so they need a way to manage all the clusters in an easy way, and in a secure way as well. They also use the External Secrets Operator: they have Vault, but they also have other secret providers, and that's why the External Secrets Operator is a nice feature there. But yes, it is coming up on the customer side.
A
Oh yeah, folks, it was a really great session. Thank you, Chris and Michele, for joining us today; it was a pleasure to hear from you. And thank you, Andrea and Mattia, for coming back for the Hackfest Office Hour; it's always great to hear from you and the news from the Hackfest. Next OpenShift TV Coffee Break appointment: come back next Wednesday, 10 a.m., with Young Lawson, talking about serverless and Kafka and how to combine those two patterns together, data streaming and serverless. And other than that, folks.