Description
Get your espresso ready for this new iteration of the OpenShift.TV Coffee Break as we celebrate the winners of the fourth edition of the Red Hat Hackfest in EMEA! Join us to discover how Red Hat partners built solutions for the retail industry bringing together Edge computing, Quarkus and Azure Red Hat OpenShift to create an innovative and measurable solution to a real-world problem based on cloud-native technologies.
A
Welcome back to the OpenShift.TV Coffee Break. Sorry, I had an issue with my microphone, but now it's fixed, and I'm very happy to be here with lots of people. As you can see, these are the winners of the Red Hat Hackfest for Retail 2023. Together with the organizers, I'm pleased to have with me today Andrea, James, and all the participants of the Hackfest. So before starting, let's give a little bit of context. You are on the OpenShift.TV Coffee Break.

B
Yes, of course. We are very, very thrilled to have here today the best three teams, who actually won the Hackfest 2023. So we have Dominik today from viadee, we have Christian from Atos, and Tassos from Uni Systems. We will give you an overview of what was done during the Hackfest and why they were awarded the top three positions at the end of the event.
A
Sounds great, sounds great. So, Andrea, I'm putting up some captions so that people can see what this is: this is the Red Hat Hackfest. Oh, that ticker looks like an HTML marquee from the 90s, I don't like it! Let me change it; this caption looks more like the Red Hat Hackfest. And while we go: this is Andrea. Is this the third or the fourth edition of the Hackfest?
B
We've been running this Hackfest together for a long time. I remember we started working on the first pilot and the first use case three years ago, when we got locked down because of Covid, and we started with something around smart cities, I remember, back in the days. Then the Hackfest became more of an enterprise go-to-market tool two years ago, with the first sponsorship by Intel and IBM. Then we also cooperated with and got sponsored by Intel and AWS in Asia Pacific, and now, with this latest retail Hackfest in EMEA, we decided to focus on retail, and we worked together with Intel and Microsoft. And that's important, right?
B
So sponsorship is not just about money. It's all about what kind of value technology can add to customer business and to prototypes, because the Hackfest is all about prototyping and working together in the partner ecosystem on solutions that help customers solve their technical challenges, but also grow the business and expand into specific verticals. That's why, for example, the Hackfest focuses on a specific region and a specific vertical in each and every business motion.
B
So again, to give some context to our audience: we decided to focus on retail because retail is a very, very big landscape. We will hear from our special guests, the winners, today which part of the retail vertical they focused on. It could be logistics, it could be point of sale, it could be warehousing. All of them are important, and all of them require special workloads, or business logic technologies, to develop a proper scalable prototype. But I would also turn to James, if you can hear us, who has been the technical heart of this business motion, if I may.
E
Anyway, I would just like to say thanks, everybody, for joining the stream today to listen to the Hackfest and to the winners here. It was a fun event, right? It took many, many months to put together, and it ran for over five weeks or so, with a broad technical stack. I'm sure you'll hear a lot about it today: Azure Red Hat OpenShift, edge devices, Red Hat's middleware portfolio, and lots of other pieces in there as well.
E
So we had a lot of fun with a lot of teams building things over these many weeks. But the reason for doing it, really, is not just having fun, of course; it's to build skills and promote partners who can adopt this technology in a meaningful way and use it to solve business problems, in this case in the retail sector. That was our focus, and it's what we'll hear more and more about. But I think what's interesting...
A
Yes, thank you for giving more context, James. And at this point, yeah, I'd like to introduce Dominik, Christian, and Tassos.

E
They beat many other teams, so there is a ranking, but they're all here on their success today, right?
B
On
yeah
and
so
I
would
start
I'm
very,
very
thrilled
to
have
here
that
sauce
from
uni
systems,
we
got
impressive
by
the
way,
the
guys,
the
teams
they
have
to
pass
two
rounds,
and
so
they
have
to
be
evaluated
by
two
juries,
One
Technical
and
one
business.
It's
not.
A
B
About
technology
right,
it's
also
again
as
I
shared
already
is
about
what
volume
business
value
the
solution
Beach
presents,
developed
developed
by
by
the
team
can
add
to
customer
challenges,
so
third
place
in
the
ranking
goes
to
UNI
systems
is
here
with
us?
Would
you
like
to
ask
us
to
talk
about
your
your
Team,
your
company
and
your
solution?.
D
Yes, hello, it's a pleasure to be here, and it was a wonderful experience participating in the Hackfest. I'm a principal solutions architect at Uni Systems. Uni Systems is a systems integrator that focuses on both on-prem and cloud-based solutions. We are based in Athens, Greece, but we also have a presence in many places in Europe, so we work both with customers and organizations in our local region and with European agencies. As for what we did in the Hackfest, I think we can talk about that later on, right?
B
That's fine by me. So, second place in the ranking: viadee, with Dominik. viadee focused on aspects that, to quote Ian Boyle, our Chief Architect for retail, our retail customers usually believe they have full control over: their stock and their internal processes. But it's usually not exactly like that, and there is always space for improvement. So, Dominik, please go ahead.
F
Yeah, hi, I'm Dominik Winter, thanks for having me. I'm a senior consultant and tech lead for middleware solutions at viadee. We are a relatively small consulting company.
B
I'm also very proud, because I've been working with viadee for many, many years, enabling and supporting the company in several customer projects. So very well done, very well done, thank you. And last but not least, actually the first place in the ranking: we have Christian here, from formerly Atos. Christian, tell us more.
C
Correct, thank you. Yeah, so we started as team Atos, and during the Hackfest, with a lot of pre-work of course, the company went through a split, and the part of the former Atos business I'm in is now under the new brand Eviden, where we have the cloud business, big data, security, everything around digital. There I'm working as a senior cloud DevOps engineer, basically in the Red Hat department.
A
That's great to hear, thanks Christian. So we've heard from the three winners; now let's go into the technical details. I'm sure our audience is really keen to hear from you what you learned from this experience and the technical details of what you did. I think Tassos was going to start, so please. And James, do you want to show the diagram before we go into the technical details?
E
So that's slide 12, I think; it might be helpful for Tassos's description. It has a couple of screenshots from each of the team's workloads, so Tassos can choose to speak to it if he wishes, yeah.
A
So let me start by quickly sharing the slides that give a little bit of context on what we're doing here. I hope you can see it; let me check... yeah, it should be visible. Maybe it's more visible like this. James, do you want to take it from here?
E
Yeah, certainly. So, like I say, all of the teams for the Hackfest were given the same environment. They had an Azure Red Hat OpenShift environment running as the data center, they had a second OpenShift environment to simulate a store, and they also had two bits of hardware: a fitlet and an Intel NUC. The NUC is an x86-powered machine that would simulate what a retailer would likely have in a store, in their shops, and then the fitlet is a lower-powered x86 machine, kind of simulating a point of sale. We can send the link in a second, so you can see all the technical details. I think what really impressed us about Uni Systems is that they put together a live working demo.
E
The demo covers product updates and product promotions and spans all of these environments. I don't want to take too much away from Tassos's explanation; Tassos, hopefully that diagram is useful for you, but you choose how you wish to explain the rest of it. Like I say, we were really impressed by how you used all those environments together, with an architecture working across all three of them.
D
Yeah. Before I start presenting the environment, one thing I want to point out that was really nice, and that we enjoyed about the Hackfest, was that we put together a team with various profiles. It ranged all the way from a Java programmer to a Linux administrator; we had a DevOps person, we had an OpenShift expert.
D
We also had someone who was proficient in Drools, so we were able to combine all these technologies to come up with the result that you see. To go over very quickly what we implemented: the idea was to be able to hand out personalized promotions based on the stock that a retailer has. So, for example, the end of season is approaching, there are new items coming in, so you need to get rid of the old items.
D
Or there's a new line of products superseding the existing ones, or, for whatever other reason, even products nearing their expiration date: you need to be able to sell them as fast as possible. From a technology perspective, we had three pillars. We have the headquarters data center, where we have a database, and this data center was based on Azure Red Hat OpenShift.
D
There was a cluster there, and there's another cluster at the store level. These two clusters may be connected, but may also be disconnected, so we had to be sure we could handle disconnected scenarios. And of course we have the POS device. The POS device was a physical device, a fitlet, that would be able to handle the POS transactions.
D
These events contained the promotions that were applicable to customers based on their profile. One important thing we implemented in our solution was that we wanted to make sure the promotion was only sent to a customer once we were sure the actual promotion information had reached the store. So we implemented bi-directional communication between the two clusters.
D
Once the promotion reached the store, we would send back an acknowledgment, and only when the acknowledgment actually reached the headquarters (what you see here as steps 3b and 4) would we actually notify the customer that there is a promotion waiting for them at the store. Then the customers would be able to go to the store.
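The acknowledgment-gated flow described here can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the team's actual code (which used Kafka topics mirrored between the two clusters):

```python
# Sketch: only notify the customer after the store has acknowledged
# receiving the promotion. Class and method names are illustrative.

class PromotionFlow:
    def __init__(self):
        self.pending = {}      # promo_id -> customer awaiting an ack
        self.notified = []     # customers actually notified

    def send_promotion(self, promo_id, customer):
        # Headquarters publishes the promotion toward the store.
        self.pending[promo_id] = customer

    def on_store_ack(self, promo_id):
        # The store's acknowledgment reached headquarters (steps 3b/4);
        # only now is the customer notified.
        customer = self.pending.pop(promo_id, None)
        if customer is not None:
            self.notified.append(customer)

flow = PromotionFlow()
flow.send_promotion("promo-1", "alice")
assert flow.notified == []          # no ack yet, so no notification
flow.on_store_ack("promo-1")
assert flow.notified == ["alice"]   # ack received, customer notified
```

The point of the gate is that in a disconnected scenario the notification simply waits until the mirror link delivers the acknowledgment.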
D
They would need to have their loyalty card with them so that they could actually claim the promotion that had been sent to them, and once they bought the items covered by the promotion, they would get the respective discount, or whatever was there. So, from a technical perspective, it was a very good chance for us to work with new things. We had worked with Kafka, but we hadn't worked with Debezium or MirrorMaker. We had experience with Drools, but we had to work with Kogito. And of course we hadn't worked with physical devices, being in a world of cloud. So it was a really nice experience for us to get acquainted with these technologies and actually implement something that solves this problem.
B
Of course, you want to automate each and every process, including the connection of those devices with the near-edge cluster that was mentioned. So having this kind of natural flow, and seeing the whole environment working as one component, was very impressive, and this is actually what we expect from technology, and what our customers expect from technology.
E
I have to say, plus one to that: there's nothing better than doing the demo live, right? And that's what we got. Every technologist always worries, right, when you do something live, that something's going to break. But it was a cool demo, and very, very good to see.
A
Thank you, yes. And thanks, James. I think we have a link to look at the source code, so people can really see the implementation and the technical details. You know, this is a technical show, right? So let's give our audience technical details. I put in the chat the link to the repository where you can find how Uni Systems implemented this edition of the Hackfest. That's great, thank you for sharing it. Please have a look; in the slides you'll also find the architectural diagram.
A
Right, that was really interesting. And yeah, let's go on to Dominik. Dominik?
F
Yeah. For a little bit of context: we are a very development-heavy team, and our one infrastructure guy fell sick, so we were pretty much left handling everything that had to do with infrastructure ourselves. So our use case was also focused more on the application side and on data, but we also went in the direction of inventory management for the stores.
F
Reality shows that's not the case, so our solution was to bring in sensor data to live-track inventory stock, and also to use the data we get from the sensors to maybe reorder products that are going out of stock, or to derive even more metadata. Maybe there are some stores in Germany that sell a lot of beer every Sunday, so we want to stock them up with beer so that they can make their customers happy. So: collecting data.
F
Then, aggregating the data collected in the stores in the data center, we had all the data in hand and could do the data science from there. On the left side, a little bit simplified, is the architecture of our solution. Basically, all teams had the same base architecture, like you already heard: we had a fitlet as an edge device, a single-node OpenShift cluster on the NUC for the store side, and then a data center.
F
So
it
was
on
the
hosted
on
Azure
and
we
had
to
sync
those-
and
that
was
a
quite
the
experience
for
only
developers
to
sync
up
to
Kafka
clusters
which,
where
the
developers
have
never
really
worked
with
Kafka.
So
we
learned
really
a
lot
in
in
the
SEC
Fest
and
also
we
kind
of
learned
how
we
integrate
different
Technologies.
So
we
have
the
sensors
that
are
writing
into
mqtt
topics
technically
so
that
basically,
the
industry
standard
that
iot
iot
devices,
mostly
right
into
mqtt
topics,
I
think
my
headset
went
off.
F
Can
you
still
hear
me?
Okay,
just
basically
into
the
industry
startup
and
we
used
Kafka
connect
or
we
try
to
use
Kafka
connect
to
to
translate
basically
for
mqtt3
into
Kafka
topics.
So
we
had
that
side
and
we
could
use
the
redhead
product.
It
was
a
part
of
mq
streams
and
then
we,
the
market
data,
that
we
that
we
collected
the
sensor
data.
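The translation step described here, bridging MQTT messages into Kafka records with Kafka Connect, can be sketched roughly as below. The topic layout and payload fields are illustrative assumptions, not the team's actual schema:

```python
import json

# Sketch of the bridge step Kafka Connect performs in this setup: take an
# MQTT message (topic + JSON payload) and turn it into a Kafka record.
# Topic names and payload fields are made up for illustration.

def mqtt_to_kafka_record(mqtt_topic: str, payload: bytes) -> dict:
    # e.g. MQTT topic "store/berlin-1/shelf-3" becomes a record on the
    # Kafka topic "sensor-data", keyed by store so all readings from one
    # store stay ordered within a partition.
    _, store_id, shelf_id = mqtt_topic.split("/")
    reading = json.loads(payload)
    return {
        "topic": "sensor-data",
        "key": store_id,
        "value": {"shelf": shelf_id, "sku": reading["sku"],
                  "count": reading["count"]},
    }

record = mqtt_to_kafka_record("store/berlin-1/shelf-3",
                              b'{"sku": "beer-0.5l", "count": 42}')
assert record["key"] == "berlin-1"
assert record["value"]["count"] == 42
```

In the real deployment this mapping is configuration on a Kafka Connect source connector rather than hand-written code; the sketch only shows the shape of the transformation.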
F
We pushed that via MirrorMaker 2 into the data center, and in the data center we had an application that we called the Virtual Market. In the Virtual Market we can select the different markets that are collecting data and pushing it into the data center, and see a live representation of the stock that the sensors are reporting. I can show a mock application; it should be running right now. I'm sharing my screen in a second; it might be dangerous, but let's try.
F
That was the prototype we presented in the Hackfest judging session. Right now there is unfortunately mock data behind it again, because we don't have access anymore to the clusters where we developed our application.
F
But that was our vision and our working prototype: we collected the sensor data on the edge device, pushed it into the SNO (single-node OpenShift), replicated it from the SNO into the data center, and built a little mock application where we can see that the stock of cake is 42 right now, so we can react to those stock levels. That was our idea.
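That live stock view boils down to folding a stream of sensor readings into current counts; a minimal sketch, with illustrative event fields:

```python
from collections import defaultdict

# Sketch: fold a stream of shelf-sensor readings into the live per-product
# stock the "Virtual Market" view displays. Field names are illustrative.

def live_stock(events):
    stock = defaultdict(int)
    for event in events:
        # Each reading is the absolute count a shelf sensor reports, so
        # the latest reading per store and SKU wins.
        stock[(event["store"], event["sku"])] = event["count"]
    return dict(stock)

events = [
    {"store": "berlin-1", "sku": "cake", "count": 45},
    {"store": "berlin-1", "sku": "cake", "count": 42},  # three cakes sold
]
assert live_stock(events)[("berlin-1", "cake")] == 42
```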
F
We had some more plans to go into data science, to analyze how the inventory behaves over time so that we could react to different situations, but we didn't have quite that much time, because we were pretty much engulfed in infrastructure challenges, right?
E
But it was really cool to see that you built a new visualization, a new front end, connected back to the OpenShift clusters, and did that during the Hackfest. That was really cool to see. But I think, particularly, like I say, the teams were judged from a technical and a business standpoint, and it really stood out that the viadee team here listened to a key problem in retail: believe it or not, in 2023, real-time stock control...
E
Real-Time
stock
visualization
is
still
a
hard
problem
and,
although
it's
a
prototype,
a
proof
of
concept
yeah,
this
was
a
an
interesting
way
to
to
be
tracking
those
stock
levels
and
visualizing
them
so
cool
to
see
and
your
live
demo
work,
which
is
always
nice.
But
yes,
unless
you
didn't
click
on
anything,
but
that's
okay,.
A
That's great: we had the live demo, and we also have the repository, so you can look at the chat, go to GitHub, and check out the source code of this implementation.
E
That's a good segue now, I think, over to Christian, representing the first-place winners of the Hackfest. You guys blew us away with those results.
C
In the first week we were planning what we were going to do, and then I was sick for more than a week, and then not much happened because our guys were trying to figure out how to build the IoT side of things with the RHEL for Edge image. And then, after basically two weeks were gone, we were looking at the remaining phases and saying: can we actually make this, or do we have to give up? Because we had planned so much stuff.
C
And I don't know how we won, but nevertheless... So yeah, all our results are on GitHub. I'm still hoping that at some point in time we can also put this under an open source license, which it currently isn't. Can I share my screen, or my tab? Please, yes.
C
So we came up with this fictitious company called Taylor Swift; one of our team members suggested the name, which I think is the greatest thing ever. And the other idea was: why not go for a fashion company? So Taylor Swift is a fashion company, and the inspiration for that was the crazy red socks we received as merchandise along with the IoT devices. So, basically, we have our architecture overview here, and I hope...
C
This
is
big
enough,
so
you
will
see,
there's
quite
resemblance
to
the
reference
architecture
from
redhead,
because
we
basically
said
yeah
that
looks
good.
Let's
basically
do
everything
which
was
a
bit
ambitious,
so
we
couldn't
dive
so
deep
into
the
particular
topics.
C
How can we make sure that these IoT devices work in a secure way, allowing us to put them any place in shops as a kind of decentralized point-of-sale architecture? One way to achieve this is secure boot device initialization, where the device registers itself and gets a PKI certificate.
C
We
use
the
sort
manager
operator
for
that
to
to
identify
itself
then
later
on
at
runtime
our
iot.
C
Run
in
like
a
kiosk
mode,
where
it
yeah
where
it
can
run
either
in
a
basically
in
a
customer
mode
where
you
can
see
all
of
our
products,
you
can
see
our
stocks
and
the
prices
you
can
scan
an
article
see
the
price
you
can
check.
Is
this
article
in
stock
in
a
different
color
or
a
different
size?
C
If
you
don't
find
it
right
now,
and
all
of
this
information
should
be
secured
in
a
way
and
to
do
this,
all
these
calls
that
come
from
this
a
single
page
web
application
go
through
an
edge
proxy
on
the
device
where
it
basically
adds
crypto
headers,
using
this
certificate
that
it
acquired
during
the
initialization
process,
and
these
crypto
headers
are
then
validated
on
the
back
office.
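The idea behind those crypto headers can be sketched as follows. The team's setup used PKI certificates issued via cert-manager; this sketch substitutes a shared HMAC key purely to stay self-contained, and all names are illustrative:

```python
import hashlib
import hmac

# Sketch: the edge proxy signs each request with key material tied to the
# device's enrollment, and the back office verifies the signature. The
# real setup used per-device PKI certificates from cert-manager; an HMAC
# key stands in here so the sketch runs on its own.

DEVICE_KEY = b"key-issued-at-device-initialization"  # illustrative

def sign_request(device_id: str, body: bytes) -> dict:
    mac = hmac.new(DEVICE_KEY, device_id.encode() + body,
                   hashlib.sha256).hexdigest()
    return {"X-Device-Id": device_id, "X-Signature": mac, "body": body}

def back_office_accepts(request: dict) -> bool:
    expected = hmac.new(DEVICE_KEY,
                        request["X-Device-Id"].encode() + request["body"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request["X-Signature"])

req = sign_request("kiosk-7", b'{"action": "price-check", "sku": "sock-red"}')
assert back_office_accepts(req)        # untouched request is accepted
req["body"] = b"tampered"
assert not back_office_accepts(req)    # tampered body is rejected
```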
C
That's basically how we ensured authentication, or trust, on the devices. The shop device also has a kind of sales mode: if an employee wants to make a sales transaction, they can scan their NFC badge, and then they're logged in and can do a reclamation or a sale or whatever.
C
That was just the device part, but we went on from there. The reference architecture from Red Hat had this cool idea with Debezium and MirrorMaker on Kafka; we heard about it from the other two projects already. So this is change data capture: whenever we make a change, we can replicate it in one direction.
C
So
from
the
data
center
to
the
back
office,
for
example,
we
use
this
to
replicate
our
Master
data,
so
we
have
a
list
of
customers
and
products
and
base
prices
in
a
central
data
center,
where
you
in
theory
could
also
run,
for
example,
some
self-service
registration
for
your
customers
or
whatever
to
get
maybe
discounts
for
people
who
registered,
and
but
we
said
why
stock
here,
why
just
go
into
one
direction
of
replication
when
we
can
use
it
in
both
directions
and
what
we
actually
did,
then
was
to
say,
hey.
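Consuming such change-data-capture events on the receiving side can be sketched like this; the event shape is simplified from Debezium's actual envelope ("op" is c=create, u=update, d=delete), purely for illustration:

```python
# Sketch: applying change-data-capture events to a local replica of the
# master data, e.g. base prices flowing from the data center to a store.
# The event shape is a simplified stand-in for Debezium's envelope.

def apply_change(replica: dict, event: dict) -> None:
    key = event["key"]
    if event["op"] in ("c", "u"):
        replica[key] = event["after"]     # insert or update the row copy
    elif event["op"] == "d":
        replica.pop(key, None)            # row was deleted upstream

store_prices = {}
apply_change(store_prices, {"op": "c", "key": "sock-red",
                            "after": {"price": 9.99}})
apply_change(store_prices, {"op": "u", "key": "sock-red",
                            "after": {"price": 7.99}})
assert store_prices["sock-red"]["price"] == 7.99
apply_change(store_prices, {"op": "d", "key": "sock-red", "after": None})
assert "sock-red" not in store_prices
```

Running the same mechanism in both directions, as described here, just means each side also publishes its own changes as events for the other side to apply.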
C
The idea was that we can make use of machine learning in the data center, where we have all the data available, and we can use Red Hat OpenShift Data Science for that, with Jupyter notebooks, and we also made some Grafana visualizations on top of that. The devices themselves, since they are not in a large data center, only have limited capabilities; they only have the data that they need in the local store. That's basically the overall architecture.
C
We used a lot of the available operators, and everything you see here is built with infrastructure as code, so in theory you can replicate it. Since we were a remote team, we didn't use the IoT devices, because then we would have had to somehow share the screen on them; we used virtual machines instead. So you can rebuild this anywhere if you want; everything is documented.
C
The actual services we built with Quarkus, which is, by the way, a lot of fun. I'm a Java developer with a lot of Spring experience, and I can say Quarkus is a magnificent tool for this, especially the hot-reloading parts, which made my life easy. The services aren't that complex, so look into them if you're interested; each is usually not more than about 500 lines of code, pretty straightforward.
B
In the data center, yes. So the platform is always the standard starting point for each and every stretched or distributed architecture in the Red Hat Hackfest, right?

A
Thank you. James just shared with me the architectural diagram that really helps us understand how it works. Let me share it quickly, because I think it's really cool to see how all the teams implemented this edition of the Hackfest. I hope you can see it, yeah.
E
I mean, I think, as you said, what's really interesting is that each team got the same template architecture. It's not a reference architecture, but it was like a template architecture, right? That means we provisioned Azure Red Hat OpenShift for the teams, and with the provisioning you get a bunch of operators, like the web terminal and Jaeger and Prometheus and all this kind of stuff, but everything was left unconfigured. You log into OpenShift and you've got a blank environment; there's nothing there.
E
We'd written a little bit of code, really a tiny bit of code to be honest, to get them started, but each of these teams you heard from just now took this and thought: okay, well, we like some of these bits, we don't like some other bits, and we're going to change and adapt what we've got here.
E
So
what
makes
sense
for
the
business
problem
we're
trying
to
solve
so
I
think.
What's
really
cool
as
well
was
you're,
probably
looking
at
the
diagrams
in
a
working
get
all
the
bits.
These
teams
worked
out
all
these
bits.
They
worked
out
the
bits
in
the
architecture
when
they're
not
working
on
this
full-time.
You
know
they're
working
on
this
during
the
let's
say
one
day
a
week
or
so
over
four
or
five
weeks
and
they've
produced
some
incredible
things
as
you've.
Just
seen
right,
you've
taken
different
approach
and
produced
incredible
things.
E
One
thing:
maybe
if
we
just
leave
the
diagram
up
here
for
now
for
for
a
minute
or
two,
maybe
worthwhile
going
around
the
teams,
something
that
I
heard
from
each
team
was
we
never
used
the
museum
before
we
never
use
chain
data
capture
right,
another
team
said
I
think
we've
never
used
Kafka
or
at
least
never
used
it
in
this
configuration
before
other
teams,
say
again,
we
didn't
never
use
half
the
stuff
in
this
architecture.
E
Maybe
if
we
could
go
around
the
teams,
perhaps
we'd
go
to
tesos,
then
first
so
going
in
that
same
order.
We
had
last
time
I
mean:
did
you
find
perhaps
having
a
technology
from
Red
Hat?
Maybe
helped
you
go
a
little
bit
faster
than
if
you're
trying
to
figure
this
out
yourself
or?
How
is
that
learning
experience
for
you?
E
D
We really enjoyed having to learn and use some new technologies. We understood what change data capture is and why we would need to use it, but we had never actually tried to deploy it, and that was one thing we really enjoyed. Another thing we really enjoyed about the Hackfest was that the level of complexity, of difficulty, was exactly where it should be; it was challenging enough for us. We had to learn Debezium.
D
We
had
to
figure
out
how
to
configure
mirror
maker
and
I
mentioned
the
kochito.
So
we
have
experience
with
rules,
but
you
know
having
to
use
now
the
quarkus
and
cultural.
That
was
something
new
for
us,
and
we
were
very
glad
that
we
were
able
to
use
that
and
from
a
more
system
perspective
realm
for
Edge
was
something
that
was
completely
new
for
us
and
we
took
the
decision
to
actually
go
for
us.
D
As Andrea said, we went for the actual device: we set it up to have a Docker container automatically start once it's booted, because we wanted to understand what it's like in a disconnected world, where a device may not even have a proper UI or monitor to show what's going on, and you have to be sure it can start up and hook up to your network properly.
F
Just to add on the RHEL for Edge thing: it was exactly the same for us. We had some struggles installing it, but in the end, when we power-cycled the thing and it just booted up with the UI, that was a really good feeling. Again, our challenges were mostly because we are a development team and we had to face much of the infrastructure problems head on. Like you mentioned, the Kafka operator is installed, but it's not configured, right? So we had to look into it.
F
We had to figure out how to configure everything, which in hindsight isn't really that hard, but we didn't know how, right? So we learned a lot of stuff in that direction. RHEL for Edge was completely new for us, and Debezium: in theory we knew how it would work, but we had never set it up, so that was quite a nice experience too. Yeah, we just learned a lot, and it was quite intimidating in the beginning, because all these operators are installed: do we need those?
E
Definitely. And I think what we're hopefully trying to simulate a little bit is the real world, right? How many times has everyone on this call talked to customers: hey, I've got this thing over here and this thing over there, we're trying to get this bit working; and you're trying to peel away all the complexity and work out what matters.
What's the real business problem they're trying to solve? Do they need all this technology, or which bits do they need to solve that problem? So again, that was a challenge all the teams were faced with. And last, but certainly not least, Christian from the Atos team: again, I think, lots of technology you'd not used before, but you found you could make it productive really quickly, right?
C
Yeah, that's what I saw too. I hadn't used Kafka before, and I'm not sure anybody in the team had; certainly nobody had set it up. Event-driven architectures were new to us, so in setting up Debezium and Kafka, the biggest challenge was actually getting all the configuration right and reading all the docs. And actually, Debezium broke in the process.
C
It was working at some point, and then it just stopped; even with you guys we couldn't fix it. And then we had this trouble with the Crunchy Postgres operator refusing to pull the image anymore. So we said: okay, we'd rather not change too much right now, otherwise the whole demo will fail.
C
That was a funny experience. My learning was this: operators are great if you already know what you're doing, but they don't lift you away from the complexity of understanding the technology underneath. Because as soon as you run into problems, and if you run in production you will run into problems at some point, you will have to understand the technology anyway.
C
So
it's
a
good
starting
point
for
a
hackvest.
If
you
know
it
and
it
can
help
you
getting
started
easy,
but
you
still
have
you
don't
save
the
time
on
learning
technology
with
rail
at
Edge?
We
also
had
our
problems
mostly
I.
Think,
maybe
because
we
use
the
the
virtual
machine
and
we
had
troubles
getting
any
output
there
at
the
beginning,
because
I
don't
know
some
issues
with
BNC
that
are
probably
not
even
related
to
the
red
Edge
image
control
to
the
VM
that
we
could
solve
eventually.
C
But we lost a lot of time there. I think the idea behind RHEL for Edge and FIDO Device Onboarding is amazing technology, but the docs right now are really scattered around the web, and we had to dive into the Rust code of an implementation that is, I think, from the Fedora project; I'm not actually sure things like which port it listens on, or how you talk to it, were documented.
C
Anyway, we were following some blog posts and we got the applications started, but we couldn't integrate it into our solution. But I think that's the thing I would really like to see working, because it would solve the problem of device initialization in a safe way, and then you put on top a way to communicate securely afterwards, and that combination, I think, is a big deal.
C
I've been working on projects before, and there are a lot of companies out there right now that are in piloting phases with their devices, where everything works great; but every time you set up a new device, a developer puts their hands on it, and it takes an hour or two, and that doesn't scale. If you do this for a thousand or ten thousand devices, it's not going to work, and this is where FIDO Device Onboarding can really help.
E
Everything always needs better documentation, that's right, a universal truth, definitely. But I thought it was interesting, again, that all the teams used the Red Hat technology to quickly build something, in a Hackfest-style way for sure; we know we're not building for production, but we did some pretty interesting things this way. And actually I just need to call out one last point before I hand back to Natalia here: the Atos team.
E
They freaked us out at one point, because they said: hey, we want to deploy OpenShift Data Science on top of ARO, right? Well, great, that's not supported, and also we need another 100 gigabytes of RAM and 16 cores in the cluster. I was like: okay, well, I guess as the ops team we'll go ahead and build that up and make it available. But again, talk about trying new things and testing the ideas out, really innovating.
E
You know, really, really cool to see. So it really surprised us from that side; we had to scale the cluster quickly, and we did it. Actually, in the end it was pretty quick, and we were scrambling behind the scenes to keep everything running and answer questions, but it was really great to see how different teams interpreted this in different ways and built, you know, interesting stuff, even if it's not supported sometimes. That's cool to see.
A
Yeah, that's great to hear this feedback, James. So there was this request to use OpenShift Data Science on the ARO part, right?
A
Well, I'm curious why you thought you needed something like OpenShift Data Science, and which part would be a good fit for an AI/ML workload.
C
Yeah, from the overall architecture it just belongs there, right? We're doing a lot of work to get all the data into one place, and why would you do this if not to run machine learning and data science on it? And from our perspective, I wasn't even aware that there are support limits; it was just: okay, there's Red Hat OpenShift Data Science, we are running OpenShift, can we have it?
A
That was cool. Oh, that's great to hear, right? So you could also use those generative AI outcomes in your Hackfest. Wow, that's a lot of stuff. Well, I'm impressed by the technical level of this edition and what you implemented. I think it's really cool, you know: having OpenShift at the edge with Single Node OpenShift, and having OpenShift in the cloud with ARO, the same experience, hybrid.
A
This is really, you know, a hybrid cloud view, and you implemented the whole stack. I like how you did your connection with Kafka MirrorMaker, and how you used Debezium.
A
So what kind of databases did you use there?
C
On our end, we decided to use the same database schema in the back office and in the data center, because it really saves a lot of time in our scenario. I'm not sure this is a good approach for production scenarios, yeah.
C
I don't know, I like this change data capture approach in general for solving issues where you already have a SQL database, for example because you have an ERP system running that you want to connect to, and it also solves the two-phase commit issue that you sometimes have. We faced this in previous projects, where you make a transaction and you save it to the database and then send it to the message broker: what happens if the message broker isn't available?
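The dual-write worry voiced here (the database commit succeeds but the broker is unreachable for the follow-up message) is what the transactional outbox pattern plus change data capture addresses. A minimal sketch of the idea, using an in-memory SQLite database in place of the real store and a plain callback in place of Kafka; the table names and the `place_order`/`relay` helpers are illustrative, not from the teams' solution:

```python
import sqlite3

# Transactional outbox sketch: the business write and the outgoing event
# are committed in ONE database transaction, so they can never diverge.
# A CDC tool such as Debezium would tail the outbox table and publish
# each row to the broker, retrying until the broker is reachable.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(item: str) -> None:
    # One transaction covers both the state change and the event.
    with conn:
        conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                     (f'{{"event": "order_created", "item": "{item}"}}',))

def relay(publish) -> int:
    # Stand-in for the CDC/relay process: drain unpublished outbox rows.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)  # if this raises, the row stays unpublished
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

place_order("espresso machine")
sent = []
relay(sent.append)
```

Even if the broker is down when `place_order` runs, the event sits in the outbox and is delivered by a later `relay` pass; tailing the table with Debezium against a real SQL database gives the same guarantee without the polling loop.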
B
Here, that's also the beauty of the Hackfest, right? In the past we used MongoDB and InfluxDB for the time-series database, so teams used different technologies to achieve the same result. Now they are using Crunchy Data, and we also always used cert-manager as a certificate technology on our projects to secure the entire environment, creating delegated certificate authorities. So it's not just the product platform as the baseline: the solutions developed by our partners, our solution providers and system integrators, also include technologies from technology partners and ISVs that actually match, generalist ones, as we are generalists, like a SQL or NoSQL database or cert-manager, but also some specialist technologies and ISVs for specific use cases. So I'm very impressed by our entire ecosystem and the ability they have to achieve the same results using different technologies.
A
Yeah, I totally agree with you, Andrea, this is great. Also, you know, the purpose of the Hackfest is to make this combination of ideas, technologies and implementations, maybe starting with new stuff from the Hackfest. And this is something I would like to ask each of you again: what is the outcome you have after this Hackfest? What did the Hackfest bring to you? What do you think is the added value that you now have after the Hackfest? Starting with Tassos again.
D
Okay, before I talk about the outcome, very briefly I'd like to also mention the challenge that we faced when we started this endeavor: it was to think of something that would actually be useful and also original from a business perspective, given that our team was made up of engineers and didn't have any analysts. So we worked our way bottom-up, you know, as opposed to having a customer asking you to do something.
D
So, as far as outcomes are concerned, we feel much, much more confident in how we can use these technologies in real-world cases with customers, and we think that we have a much better grasp of the IoT challenges: what these challenges are, how to manage devices, what it means to work in a disconnected world. And the approach that we used for asynchronous guaranteed replication between the headquarters and the store is something we think can prove to be quite useful, perhaps, at Uni Systems.
F
The whole configuration to get the applications to talk to Kafka, and to get MirrorMaker to talk to the other Kafka cluster, and the certificates, were a real pain point there. It's probably because we weren't so familiar with cert-manager, but that cost us a lot of time. Maybe you can get it going faster if you know how cert-manager works; we couldn't get to that.
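For reference, the certificate wiring that caused the pain here usually boils down to a handful of client properties. Below is a hedged sketch of a typical TLS client configuration, written as librdkafka-style property names, for a Kafka cluster whose listener is secured with a cert-manager-issued CA; the bootstrap address and all file paths are hypothetical, and the `validate` helper is just an illustration:

```python
# Typical SSL settings a Kafka client needs to reach a TLS listener.
# The CA file must contain the authority that signed the broker
# certificates (with cert-manager, the Issuer/ClusterIssuer CA); the
# client certificate and key are only needed when the listener also
# requires mutual-TLS authentication.
kafka_client_config = {
    "bootstrap.servers": "my-cluster-kafka-bootstrap:9093",   # hypothetical
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/kafka/certs/ca.crt",             # hypothetical path
    # Only for mutual TLS:
    "ssl.certificate.location": "/etc/kafka/certs/user.crt",  # hypothetical path
    "ssl.key.location": "/etc/kafka/certs/user.key",          # hypothetical path
}

def validate(config: dict) -> list:
    """Return the required keys that are missing for an SSL connection."""
    required = {"bootstrap.servers", "security.protocol", "ssl.ca.location"}
    return sorted(required - config.keys())

missing = validate(kafka_client_config)
```

A client that fails to connect usually has one of these three required properties wrong or missing, which is why a quick sanity check like `validate` saves debugging time.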
F
So in the end it is enablement, right? We had a pretty large, developer-focused team, and most of the team didn't have any experience with Kafka, didn't have any experience with Debezium or with OpenShift, and we had a really good time just doing our know-how transfers. One guy had already worked with OpenShift.
F
One
guy
has
already
worked
with
Kafka
and
we
now
more
people
in
our
teams
are
able
to
do
this
stuff,
not
maybe
even
not
of
production
level
right
but
you've
heard
the
things.
So
what
is
the
mirror
maker?
How
does
the
communication
work,
how
do
I
replicate
across
clusters
and
in
the
end,
we
have
more
people
that
can,
even
when
we
talk
to
our
customers
right,
they
have
in
the
back
of
the
head.
F
Oh, we have the solution that we built at the Hackfest to replicate between the data center and your stores, for example. And now we can do that; we know how to use those products. So: enablement, I would say.
C
There are some troubles that you can run into, for example, if you don't have the naming schemes right and you do both-way replication, you can end up in an endless circle of messages passing around. Also, if you don't know how Debezium works, like us at the beginning: if you have, for example, underscores in your table names, then the Red Hat AMQ Streams operator tries to build up the topics as CRDs in OpenShift, but since underscores cannot be represented in those resource names, they get a different name.
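The endless circle of messages in both-way replication is exactly what MirrorMaker 2's default replication policy guards against: a mirrored topic is renamed with the source cluster's alias as a prefix, so a topic is never mirrored back to the cluster it originally came from. Below is a deliberately simplified simulation of that naming rule (the `hq` and `store` aliases are made up, and real MirrorMaker 2 also handles longer multi-hop chains):

```python
# MirrorMaker 2's DefaultReplicationPolicy names remote topics
# "<sourceAlias>.<topic>", and that prefix is what breaks replication loops.

def mirror_name(source_alias: str, topic: str) -> str:
    # Remote topics carry the alias of the cluster they came from.
    return f"{source_alias}.{topic}"

def should_mirror(topic: str, target_alias: str) -> bool:
    # Skip topics that originated on the target cluster: on this cluster
    # their name already starts with the target's alias, so mirroring
    # them back would create the endless circle.
    return not topic.startswith(f"{target_alias}.")

def replicate(src: set, src_alias: str, dst: set, dst_alias: str) -> None:
    for topic in list(src):
        if should_mirror(topic, dst_alias):
            dst.add(mirror_name(src_alias, topic))

hq = {"orders"}    # topic local to the headquarters cluster
store = {"sales"}  # topic local to the store cluster

# Bidirectional replication, run repeatedly: the topic sets stabilize
# instead of growing forever (hq.store.hq.store... would be the loop).
for _ in range(3):
    replicate(hq, "hq", store, "store")
    replicate(store, "store", hq, "hq")
```

Without the `should_mirror` check, each pass would wrap the already-mirrored topics in another alias prefix and re-copy them, which is the runaway behavior described above.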
C
I thought that was the name that was going to appear in the cluster, but it's not the correct name, and if you don't know that, if you don't just look into the cluster, you don't find it, and yeah, we lost a lot of time on that. But the biggest achievement we have, or the biggest learning, is of course learning a lot of technology, because we also had a few juniors on the team, but also getting more knowledge of this IoT topic, especially device onboarding, but also this replication of data between clusters.
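The underscore surprise described above comes from the gap between Kafka topic names (which allow `_`) and Kubernetes resource names (which don't), so a topic operator has to derive a different, legal resource name. The sketch below is only an illustration of that kind of mapping, not Strimzi's exact algorithm: normalize the name and append a hash of the original so distinct topics can't collide.

```python
import hashlib
import re

# Kafka allows underscores in topic names ("my_table"), but Kubernetes
# resource names do not, so the derived resource gets a different name.
# Illustrative sketch: lowercase, replace illegal characters, and append
# a short hash of the original topic name for uniqueness.

def k8s_resource_name(topic: str) -> str:
    normalized = re.sub(r"[^a-z0-9.-]", "-", topic.lower()).strip("-.")
    if normalized == topic:
        return topic  # already a legal resource name, keep it as-is
    digest = hashlib.sha1(topic.encode()).hexdigest()[:9]
    return f"{normalized}---{digest}"

# A Debezium-style topic name with an underscore in the table part:
name = k8s_resource_name("server1.inventory.my_table")
```

This is why searching the cluster for the literal topic name `server1.inventory.my_table` turns up nothing: the resource exists, but under the sanitized name, which is easy to miss if you don't know the mapping is happening.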
A
Thank you, thank you for sharing your feedback. I think it's really good; Andrea and James have the feedback to make another great edition of the Hackfest. And let's close this session, Andrea, James, with some final thoughts and words.
E
Well, obviously I'd like to thank our winning teams again for their time today and their participation during the Hackfest, incredible work, but also thanks to all the other teams that participated in the Hackfest. Again, the winners you've got here beat a lot of other teams; a lot of other people participated in this Hackfest. Biggest thanks as well to our sponsors, Microsoft and Intel, for sponsoring the Hackfest and making this possible. It obviously wouldn't have been possible without them: for hardware support, for Azure credits, for their technical support.
B
I would also like to thank James; he's an awesome technical person, and his sales documents are fantastic, James is amazing. And also Camila, I want to mention her because she didn't want to show up in this episode, but she's been the marketing and project management person that organized everything, so, Camila, you are great. And the whole team, right: we got contributions from BUs and sales teams and consulting teams, so they are fantastic.
B
What's next? We will focus more on AI/ML workloads on OpenShift, still Edge computing, probably focusing on robotics, something new and exciting, and the latest and greatest for manufacturing, maybe something complex again, something that involves security but also scalability and data flow and management. So we are still brainstorming; we will get back to you on the Red Hat Hackfest landing page on redhat.com with more news in the near future. Thanks.
A
Well, it's a pleasure, Andrea. I want to share the website, so I think it's this one, I put it in the chat, and let me also put up a caption, so the people interested in the Hackfest can go to this website, and if there's a new edition upcoming, they will hear about it from here, right?
A
Yes, that's right, that's great! So thanks, and congratulations to the winners, Tassos, Dominik, Christian; congratulations to Uni Systems, VIADA and Atos for this edition. The implementation is super, super cool and technical, and I really enjoyed this session. And hey folks, the recording of this session is still here at the same YouTube link or Twitch link.
A
So
if
you
want
to
re-watch
the
session,
just
use
the
same
link
and
you
will
find
the
recording
also
in
our
YouTube
openshift
TV
coffee
break
playlist
was
a
pleasure
to
have
all
you
was
a
pleasure
to
have
you
Andrea
James,
and
we
come
back
on
May
the
31
with
another
Edition
we'll
have
Andrea
again,
if
I,
if
I'm,
not
wrong
right,
Andrea.
A
We
are
super
super
technical,
I'm
really
really
happy
for
it.
Thank
you,
sir.
Thank
you.
Thank
you.
Folks,
see
you
the
next
time
ciao.