From YouTube: Managing Kong Routes in Multi-Cloud Environments
Description
Join Frederik Nakstad and Adelina Simion from Form3 to learn how they use the Kong Ingress Controller to manage service routes on Form3’s multi-cloud platform. You’ll get an overview of Form3’s platform and environment configuration and how they moved from a legacy ECS solution to running Kong in a Kubernetes-native way with the Kong Ingress Controller. This approach provides useful abstractions and makes it easier for the Form3 service teams to configure routes.
A
Hello everyone! Good morning, good afternoon, good evening from wherever you're joining us around the globe. I'm very happy to have all of you here for our July user call. My name is Dalia and I work as a community manager here at Kong. Today we have a very special presentation for you coming from Form3. I will let them introduce themselves and tell you more about the company. We will talk about how they manage Kong routes in multi-cloud environments. We have Adelina and Frederik here with us.

A
Let's take all of the questions at the end, so please use the Q&A function that we have here on the bottom. Please put all the questions there, and as soon as they're done presenting, we will answer them. With that, please take it away.
B
Hi everyone! Yeah, we're really excited to be here. Kong is absolutely crucial technology for our multi-cloud platform, so we're super excited to share our use case and teach you more about how we run Kong in a Kubernetes-native way on our multi-cloud platform. Hopefully there will be some lessons learned for everyone, and you can take these lessons back and implement them in your own projects, regardless of whether you run a multi-cloud platform or not.
B
We want to introduce ourselves a little bit more. I'm Adelina, and I'm the tech evangelist at Form3. As I mentioned in the chat, I'm based in London, sunny London, and I've been at Form3 since late 2021, so time flies when you're having fun, I suppose. I've got a background as a Go engineer, and my current focus is to share knowledge, engineering practices and solutions at wonderful events such as this one. I'm here with the wonderful Frederik, so introduce yourself as well, please, Frederik.
C
Yeah, sure. Hello everyone, my name is Frederik and I'm a staff engineer at Form3. I've been here for a little bit over two and a half years, and one of the great things about Form3 is that it's fully remote. So even though the HQ is in London, I'm able to work out of the countryside outside Trondheim in Norway. I have an application developer background.
B
Yeah, I'm not jealous at all of the awesome views that Frederik must have in the countryside in Norway. So now that you know us a little bit, we'd like to get into the meat of today's call, which is to tell you about Form3 and our use case.
B
We build and run a real-time payments processing platform; we're going to look at what that means a little bit more in the following slides. Our clients are major financial institutions that integrate with our platform and send payment instructions from their customers to us. We then reliably and securely process these payment instructions for them, and this makes our customers' lives easier, as we take care of the operational concerns of this highly regulated financial industry.
B
So imagine that you have two friends, and they have accounts at different banks; here we've said Bank A and Bank B. Imagine that the first customer, the customer at Bank A, would like to transfer some money via bank transfer to their friend at Bank B. How do they go about doing that?
B
Well, we can greatly (and I really underline the word "greatly") simplify the process to four steps. First, the initiating customer at Bank A sends an instruction to their own bank about which account at Bank B they want to send money to. Next, Bank A runs validations on the payment instruction. These could be simple verifications, like whether the account exists, or more complex fraud verifications; we won't go into that detail for now. So what happens if all the verifications pass?
B
Well then, Bank A goes forward with transferring the funds electronically to Bank B. Finally, in step four, Bank B puts the amount in the account of the intended customer, the friend of customer A, and the money transfer between friends is complete. It all seems simple and straightforward, right? Just move money from one place to another. Well, let's think about this process at scale.
B
Instead of integrating with each other directly, as my previous diagram might have suggested, banks rely on external payments infrastructure to specify the integrations and standards between banks. On this diagram you see some of the important payment schemes, as they are called, within Europe and the UK, and our platform supports these payment schemes and some others as well. We won't go into the details of each payment scheme, but it's important to understand that they each have their own message formats and implementation details.
B
Okay, so now that we have a basic understanding of the payments ecosystem, we can turn our attention to the Form3 multi-cloud payments platform. But Form3 wasn't always multi-cloud, so I'd like to show you what we call the legacy platform. Form3 was founded in 2016 and was built in the cloud from the very beginning. The founding engineers wanted to build a scalable platform as quickly as possible, and they believed that a cloud architecture was the best way to achieve this.
B
Our payment services were hosted on ECS and used SQS and SNS to send messages at scale between the payment services and the payment validator and translator services. These validator and translator services do the heavy lifting of communicating with the payment schemes that I mentioned earlier. We had to run the payment services that integrate with the FPS, or Faster Payments, scheme separately in data centers due to regulatory concerns. I've included these in this diagram for completeness, but this won't make a lot of difference to what we're discussing today.
B
The star of today's show, Kong, sits in front of the payment services, acting as our API gateway and directing cloud traffic to our payment services. So it sits between our customers and our payment services, playing a very vital role, and Frederik will tell you a little bit more about that as well.
B
Let's have a look at how the Kong configuration was managed in this legacy AWS-hosted platform. When we talk about route management, routes describe the traffic into our environments, which can become complex for multi-tenant platforms such as ours. Engineers committed route configuration changes to a single GitHub repository owned by the platform team.
B
Once the change is committed, the environment-specific Terraform workspace monitors for changes and plans the change. The Terraform workspace then applies the change to the Kong admin API, which in turn stores the routes in Amazon RDS. This approach is in line with our GitOps development practices: our GitHub repos are the single source of truth for the Kong configuration, as well as for everything else that we work on.
B
But as Form3 grew, we needed to scale the platform past operating in a single cloud. Running critical payment services means that you are very sensitive to outages and vendor lock-in. A re-platforming effort was kicked off to address these issues and give our customers the option to run in different clouds.
B
Two options for undertaking this work were considered. First, we could perform a full rewrite of our platform in another cloud provider, such as GCP, for example. The second approach would be to convert our existing services to use cloud-agnostic technologies instead of the AWS-specific technologies that I showed you earlier.
B
Okay. Here you can see an overview of the technologies that we built our multi-cloud payments platform with. First, we see that we are relying on Kubernetes, CockroachDB and NATS across three clouds: AWS, GCP and Azure. Customers choose which cloud they want to connect to and then pass their payment instructions to that cloud.
B
All of our services are now running in Kubernetes, and we make use of the managed Kubernetes offerings of each cloud provider; this is what the diagram is illustrating. Kong is still our route management solution, but it is now also running in Kubernetes in each cloud. The data centers connect to the clouds using highly available, secure connections.
C
As you can see here, as new resources are added to the Kubernetes API, the Ingress controller gets notified of events for changes to the objects it's interested in, and it will in turn take care of configuring the Kong proxy based on the data it read from those resources.
C
Another thing to note here is that all the configuration is stored as Kubernetes-native resources, so it's backed by the etcd store of Kubernetes, and this gives you the opportunity to run in a DB-less fashion. Whereas previously we had to have a running database instance that we connected Kong to in order to store the routes, we don't need that anymore; we can get rid of a little bit of management overhead for this database and run DB-less.
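As a sketch of what DB-less mode amounts to, Kong's database is switched off with a single environment variable on the proxy container, after which the Ingress Controller pushes declarative configuration to the proxy in memory. This is an illustrative fragment with assumed names and image tag, not Form3's actual manifest:

```yaml
# Fragment of a Kong proxy Deployment spec (hypothetical names/version).
# With KONG_DATABASE=off, Kong keeps its configuration in memory and the
# Kong Ingress Controller pushes declarative config into it, so no
# RDS/Postgres instance is needed.
containers:
  - name: proxy
    image: kong:3.4
    env:
      - name: KONG_DATABASE
        value: "off"
```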
C
I mentioned a few of these Kubernetes-native resources, like Ingress and so on. In addition to those, the Kong Ingress Controller provides a few custom resource definitions that we can use to configure Kong-native things such as plugins, consumers and so on, and these custom resources can then be enabled on the native Kubernetes Ingress or Service objects in order to modify their behavior.
C
In this example, we see a KongPlugin. It's based on the rate-limiting plugin, and it is configured to enforce a rate limit of five requests per minute.
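A KongPlugin along those lines might look roughly like this; the resource name and namespace are illustrative, not taken from the slides:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute   # hypothetical name
  namespace: payments             # KongPlugin is a namespaced resource
plugin: rate-limiting             # the stock Kong rate-limiting plugin
config:
  minute: 5                       # allow at most five requests per minute
  policy: local                   # count requests locally on each Kong node
```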
C
Once you connect your Ingress to a KongPlugin like this, and you have traffic coming in for that Ingress, Kong will take care of invoking the plugin on such access. On the right, you see a similar example for a Service object: once again we have our plugin annotation pointing at the KongPlugin defined on the upper right, and this is configured to fire the basic-auth plugin on access to this service.
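The connection described here is made with the `konghq.com/plugins` annotation. A minimal sketch, with made-up resource names rather than the slides' exact content:

```yaml
# An Ingress opts in to one or more KongPlugins from its own namespace via
# the konghq.com/plugins annotation (a comma-separated list of names).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-api              # hypothetical
  namespace: payments
  annotations:
    konghq.com/plugins: rate-limit-5-per-minute
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments-api
                port:
                  number: 80
---
# The same annotation works on a Service, e.g. to fire basic-auth on access.
apiVersion: v1
kind: Service
metadata:
  name: payments-api
  namespace: payments
  annotations:
    konghq.com/plugins: basic-auth-example   # a KongPlugin with plugin: basic-auth
spec:
  selector:
    app: payments-api
  ports:
    - port: 80
      targetPort: 8080
```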
C
One thing to note here is that a KongPlugin resource in this case actually refers to a given configuration of a Kong plugin, such as the rate-limiting plugin. So, for example, you could have multiple instances of the rate-limiting plugin, with different rate limits, connected to different ingresses.
C
Another thing to note is that the KongPlugin resource is namespaced. There is also another custom resource definition called KongClusterPlugin, and instances of that are available globally in the cluster. That allows you to have certain plugin configurations which are available only in some namespace, while others are available globally in the cluster.
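A KongClusterPlugin is the cluster-scoped variant; a sketch, with an assumed plugin choice and name:

```yaml
# Cluster-scoped, so no namespace: ingresses and services in any namespace
# can reference it by name through the same konghq.com/plugins annotation.
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: platform-rate-limit       # hypothetical platform-owned configuration
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  minute: 60
  policy: local
```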
C
This can help you, for example, migrate from other Ingress controllers to Kong, if you so desire. The Ingress resource itself is a Kubernetes-native concept, and you specify which Ingress controller you want to handle it via the ingressClassName attribute. For a very simple Ingress like this, you could easily switch out the Ingress controller with NGINX or Traefik or what have you, and things would still run fine.
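Such a controller-agnostic Ingress might look like this; only the `ingressClassName` field ties it to Kong (host and service names are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: kong          # swap for "nginx", "traefik", etc. to hand
                                  # this same Ingress to another controller
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```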
C
One thing that might complicate switching between different controllers is, of course, if you use controller-specific concepts and custom resources such as KongPlugins; then you can't expect to see the same behavior if you switch to another controller. But it gives you the option to migrate more easily between different controllers, and it also allows you to run multiple Ingress controllers, where you have a subset of ingresses managed by Kong and maybe a subset managed by another controller, if that fits your use case.
C
All right, so now that we have a little background on the Kong Ingress Controller, maybe it has already become a bit clearer how we can leverage this as a platform team. Just to rehash a little bit, our goals are, firstly, to have the same set of services and Ingress routes running in multiple clouds.
C
We also want to allow each application team to run their own services and self-serve Ingress creation; we don't want the platform team to become a bottleneck. To counterbalance this, we still want to allow the platform team to exert some amount of centralized control, to find and deny bad behavior or bad usage patterns. We also want to allow the platform team to provide certain common resources, maybe plugin configurations, in a global manner to all the application teams. That takes us to this diagram showing our multi-cloud configuration.
C
We are running our services across multiple Kubernetes clusters across different cloud vendors. Depicted here, we have one Kubernetes cluster in AWS on the top right and another cluster in GCP below, and you can imagine another one for Azure; it's not depicted here, but you get the point. We want to allow our customers to connect to either cloud and be presented with the same API interface.
C
Another big part of the puzzle here is that we went with a GitOps model. In the middle here you can see Flux running in each Kubernetes cluster. Flux is constantly monitoring for changes to the GitHub repos shown on the left, and when there's an update, it will pull the changes and publish the updated manifests to the Kubernetes cluster. This includes, of course, Kubernetes-native resources such as Deployments and Services.
C
But it can also be custom resources such as KongPlugins and KongConsumers. Each application team's GitHub repo can be configured to publish these resources into their own namespace in the cluster, so that you can actually lock down access with Kubernetes RBAC. Once these resources are added, the same flow that we saw earlier happens, with the Kong Ingress Controller monitoring changes to them and updating the Kong proxy in response. So we have two quite nice reconciliation loops here: Flux reconciling the cluster against GitHub, and the Kong Ingress Controller reconciling the Kong proxy against the cluster.
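A per-team Flux setup along these lines can be sketched with a GitRepository source plus a Kustomization that confines the team's manifests to their own namespace. The repo URL, paths, and names below are assumptions for illustration:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: payments-team             # hypothetical team repo
  namespace: flux-system
spec:
  interval: 1m                    # how often Flux polls GitHub for changes
  url: https://github.com/example-org/payments-team-manifests
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: payments-team
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: payments-team
  path: ./manifests
  targetNamespace: payments       # keep the team's resources in their namespace
  prune: true                     # remove cluster objects deleted from Git
```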
C
If we contrast this with the previous solution, where we had to point Terraform at a running Kong instance, GitOps now allows us a more pull-based approach, where we don't need to configure access into the cluster for an external agent like Terraform. You just have to give Flux read access to the GitHub repos, and it can reach out and pull in manifests from GitHub, rather than the other way around.
C
Anyone who has a moderately complex Terraform setup might also know that we can avoid some issues with complex interdependencies between Terraform workspaces, which could lead to them needing to be run in a certain order, or, if they're not, ending up in broken states during things such as cluster bootstrap.
C
You might also notice here that both clouds are reading from the same source of truth, the same set of GitHub repos, so we're able to set up the same routes and API interface in each cloud. Also note that we do have tooling that allows you to publish certain resources to only one of the clouds, but this is generally not used for Ingress routes. Still, if you have some exotic use case, you are able to publish a unique route to GCP if necessary.
C
The model we went with is that the platform team takes care of running Kong itself: the Kong proxy and the Kong Ingress Controller. We take care of making the custom resource definitions available in the clusters, and we also publish certain plugin configurations that are useful to multiple teams.
C
At the same time, if you have a team with a bit more of an exotic use case that may not generalize to other teams, we're still able to publish KongPlugin configurations into their own namespaces and refer to them from their ingresses living inside the same namespace. That gives the teams a little bit of flexibility to still meet their own unique needs, and if it becomes more of a cross-cutting kind of thing, you can promote it to a platform-owned, globally available cluster plugin configuration.
C
As a few examples: you can have a policy that disallows Kong plugins from being sourced from Lua code stored in ConfigMaps; maybe your organization has infosec officers who are not happy about that, and you can create a policy to disallow it altogether. You might also want to disallow or enforce the use of certain host headers or plugins; you can add policies for that. You might want to limit the range of certain plugin parameters; we saw the rate limit plugin earlier.
C
Lastly, we see that the new GitOps approach is very flexible. It allows us to scale our organization more easily. Manifests are pulled from inside the cluster rather than pushed in from another component, improving our security posture. And although we've had to add some extra tooling in order to safely promote changes between environments, I think the conclusion is that it has made our deployments safer, more predictable and easier to roll back. That brings us to the end, so I will hand over to Adelina for some closing notes.
B
Hi! I hope you enjoyed our little walkthrough. We've left two resources here if you want to learn more about us and some of the things that we're building. Our podcast has a lot of illustrious people who come and talk about their technologies and how we use their technologies as well. And our engineers take a lot of time writing about their own projects and their work projects on our blog, so definitely check it out as well.
B
This is the end of our presentation, so feel free to ask any questions, and if you'd rather not ask here, you can always catch us online. Thank you very much.
B
Yeah, shall I? I can read them.
C
All right. We did a consideration of multiple cloud providers, and we needed providers that had good managed Kubernetes offerings. It was a longer process, but we ended up finding that the industry leaders, GCP, Azure and AWS, were the best fits for us.
C
I'm not sure if I understand the question completely, but I think part of the reason for wanting to decentralize some of the management is to make it so that each team can manage their own Ingress resources, and that helps a little bit, right? Every team should be able to manage their own Ingress resources, and, as the platform team, we're able to have policies that apply globally if we want to enforce certain ways of using them.
C
We're basically exporting Prometheus metrics, and we have Grafana Cloud. So we have all of these standard common metrics that we're using to create dashboards and set up alerting, and this is running in all the different clouds. You have a label on each metric indicating which cloud and which cluster the metric is from, so you're able to create dashboards showing, for a specific cloud, the different performance metrics there.
B
We've got a nice JetStream question: what is Form3's experience using JetStream?
C
That's an interesting question. This is with another team within Form3, so I can't give you a great answer. I think it's been a learning experience for us as well. We did an investigation of different options and found that JetStream worked for us; we knew we needed to run a messaging provider across three different clouds. There have been a few bumps along the way, but we've been able to work very well with Synadia, the provider, and I think we've both helped get it into good shape.
B
One of the reasons why we also chose JetStream, and I know we haven't focused so much on it, is those data centers. We needed to run a very lightweight messaging broker that would not take a huge memory footprint, and JetStream, or NATS in general, was really simple to run, and the NATS server didn't take a lot of memory. That was also one of the main reasons it was chosen.
C
Good question. For Kong itself, I'm not actually sure if it is completely open source or not. I know the Kong API gateway itself is open source, but there are also a few components you need.
C
Yeah, at the moment we're using the cloud-provider-specific ones. For example, we have mTLS on connections coming into us, and we need to terminate that before calling Kong, and so on.
C
We don't build any custom Ingress controllers; we rely on the Kong Ingress Controller, so I think the answer is no. Maybe I'm misreading your question a bit. What we do have is that our customers are able to set up webhooks, and we're able to notify them of payment events out from us, but the Kong Ingress Controller is not really involved in that part of the system.
A
Yes, the recording will be uploaded to our YouTube channel. Let me give you a link really quick, so you know where to look for it; it'll be available in a couple of hours today. Here, I pasted the link to the YouTube channel; you can find all of our recordings from all of our events there. All right, I think that's a wrap. Adelina, Frederik, thank you so much. This was very, very cool. I think everyone really enjoyed it, a very interactive session as well. Thank you, guys.
A
We'll gladly have you again, so join us sometime. Thanks to everyone who joined, and hope to see you at our next events. Have a great evening, or a great day ahead of you, and speak soon. Bye-bye, everyone, bye!