From YouTube: Day 2 Keynote and Welcome
Description
For more great content, visit https://solocon.io
SoloCon 2022:
Day 2 Keynote and Welcome
Speakers:
Lin Sun
Director of Open Source, Solo.io
Neeraj Poddar
Director of Engineering, Solo.io
Abstract:
Join Solo.io Director of Open Source Lin Sun and Director of Engineering Neeraj Poddar as they kick off the second day of SoloCon with updates and more.
A: Hello, and welcome to SoloCon 2022, day two. I am so excited you are here. Yesterday we learned about the evolution of Docker, containers, and microservices from Solomon. We also learned about the importance of service mesh, and how Istio became the dominant service mesh, from Louis. Today we're going to shift our focus to our customers.
B: Our journey really began, once upon a time, just using a very basic open-source Istio and Envoy configuration in Kubernetes clusters built on EC2 with kOps. And, to be quite honest, most of our knowledge of Istio at that time, and our usage, was limited to simply using Istio as an ingress with virtual services, and that's about where it ended. We weren't utilizing all the features and functionality of Istio, and we weren't doing anything in depth with Envoy either.
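The basic pattern described here, Istio as an ingress with virtual services, can be sketched with a minimal Gateway and VirtualService pair; the hostnames, service names, and ports below are illustrative assumptions, not taken from the talk:

```yaml
# Expose a host through the default Istio ingress gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
---
# Route matching requests from the gateway to an in-cluster service
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shop-routes
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - public-gateway
  http:
  - route:
    - destination:
        host: shop-frontend
        port:
          number: 8080
```

Everything beyond this (retries, traffic splitting, security policy) is where the deeper Istio and Envoy features the speaker mentions would come in.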
B: It occupied a space that was fairly opaque, which was frustrating to our application development teams, because they were trying to do things and interact with it and simply were not able to get up to speed on what they needed. It was also an organizational effort on our part; the limitation was exactly who we could devote to that task and to building it out. We can't devote half of our team to simply building this out of the open-source means available.
C: We are in the process of breaking apart our monolith, which is that classic activity we all participate in, and as we do that we want to be able to provide our entire developer organization some of the conveniences they were used to from operating on a monolith. But we want to expand that to say: look, here we have governance for you, we have aggregation for you. We have the ability to give something at the top that provides some of the conveniences you've been used to before.
D: We have so many microservices, which customers at that point, before the adoption, would talk to individually, and we wanted to give them the idea of a whole platform. That's where we had already looked at a competitor a few years earlier. What really triggered us is that we made a sales platform where customers could subscribe and, on demand, get their own environment, and we wanted them to have that fresh experience: a really unified idea of "this is a platform I'm subscribing to."
E: I'm responsible for DevSecOps practices as well as the infrastructure, so my role is to ensure that we are providing to our end customers, and to the developers contributing on the platform, the right level of information, the guidance, and also the best practices to ensure that the products and components they are developing are secure, are available, and are working as intended, with the right ability to debug and trace.
B: We've expanded and, as we've grown, there's been a particular focus on security implementations and also on automation and management of certain aspects of, for instance, the service mesh or service mesh components within our clusters. One of the things that became a very solid requirement for us was to have managed mTLS within our clusters: to be able to encrypt traffic, to authenticate services, and to have machine-to-machine authorization as a requirement for our platform.
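A managed-mTLS requirement like this is commonly expressed in Istio with a mesh-wide PeerAuthentication policy; this is a minimal sketch assuming a stock Istio install with istio-system as the root namespace:

```yaml
# Require mutual TLS for all workload-to-workload traffic in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # applying in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT  # plaintext traffic between sidecars is rejected
```

The machine-to-machine authorization piece would then layer AuthorizationPolicy objects on top of the authenticated service identities.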
B
Many
companies
still
rely
upon
password-based
credentials
for
basic
authorization,
and
we
wanted
to
move
away
from
that
and,
in
particular,
with
a
long-term
goal
of
moving
toward
a
zero-trust
environment,
to
be
able
to
ensure
everything
from
regulatory
requirements
to
also
just
better
secure
our
own
environments.
As
we
move
forward.
F: So Ory is an open-source company building out planet-scale network infrastructure. We're building a 15-to-20-region global network; we're cloud-neutral, Kubernetes-neutral, and Kubernetes-native, and of course we use the Ory open-source tools. As we develop this network, one of our challenges is figuring out pricing. Basically, we charge our customers not on the number of IDs or monthly average users but on the basis of traffic, so we need a sophisticated, professional-grade solution to really manage our traffic and meter it.
B: I think some of the challenges we faced on the organizational side come down to: who do you have on your end to support you moving toward that? In many ways these are very new and highly advanced technologies that we're working with, and you don't often find either the depth of knowledge or the specific domain knowledge available within your organization to go it alone and build out your own services and systems; or, if you do, it takes a lot of time and a lot of effort.
G: Thanks, Lin, and thanks to our amazing customers for walking us through the use cases and challenges they were facing during their digital transformation journeys. Hello everyone, I'm Neeraj Poddar, head of engineering here at Solo. Welcome to SoloCon. I'm really excited to be here and talk to you about the Gloo Mesh product and the enhancements and features we have added.
G: By listening to and learning from our customers, we launched Gloo Mesh beta in December of 2020, with a host of enterprise capabilities on top of open-source Istio, like a single pane of glass for visibility and simplified traffic management for services deployed across different clusters. We also added a way to extend your mesh functionality using WebAssembly modules.
G: Our aim was to understand which use cases our customers could solve using the product and, at the same time, understand the challenges and limitations they might be hitting as they onboarded a large number of application teams onto the platform at scale. As we interacted and worked with our customers, three key challenges, or requirements, emerged.
G: First was multi-tenancy. Our customers wanted strong isolation guarantees for configuration and traffic routing within and across tenants. Basically, a configuration applied for one tenant should not affect another tenant and, at the same time, by default only services within a particular tenant can talk to each other.
G: The second key requirement that emerged was around an application-centric approach to configuration APIs and architecture, instead of having clusters be one of the major principles of configuration. Basically, as our customers' environments scaled and they added more clusters, they did not want to go and change any of the Gloo Mesh resources; rather, they wanted a more dynamic approach and to think about it in terms of applications.
G: Lastly, and most importantly, they wanted fine-grained policies that mapped to various personas, like platform admins, application owners, and operators, and they wanted to use Kubernetes primitives to apply access control for them. In order to meet these requirements, we had to make some foundational changes to our product and take a fresh perspective on our APIs and architecture.
G
So
today,
I'm
really
excited
on
behalf
of
the
entire
solar
organization
to
announce
blue
mesh
2.0
product,
which
was
specifically
designed
to
meet
these
use
cases
and
solve
the
challenges
for
our
customers.
So
let
me
quickly
go
over
how
we
do
that.
First,
we
have
added
the
concept
of
workspaces,
which
are
the
building
block
for
multi-tenancy
in
our
new
product.
Workspaces
are
a
logical
boundary
that
you
define
for
a
team
or
an
application.
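As a rough illustration of the concept, a workspace definition might look like the following; the API group and field names are approximations from Gloo Mesh 2.0, and the team and namespace names are made up:

```yaml
# A workspace scoping one team's namespaces across all registered clusters
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: bookinfo-team
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'          # wildcard: clusters added later are picked up automatically
    namespaces:
    - name: bookinfo
```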
G: At the same time, we have added a lot of flexible, label-based selection for workspaces and some other API objects to make sure that, as you add more clusters or more application teams, you don't have to redefine these Gloo objects. If you have the right regex matching or the right label selection, the management plane will automatically pick them up and make sure you have the right policies defined for you. And lastly, we have broken up a monolithic API object into small, fine-grained objects.
G
So
I
can
talk
about
all
the
amazing
enhancements
and
features
we
have
added,
and
I
know
I
can
go
on
for
a
really
long
time.
But
at
this
time
I
would
like
to
ask
scott,
who
is
an
architect
in
our
team
and
one
of
the
brains
behind
this
project,
to
show
us
a
quick
demo
of
the
capabilities
in
the
blue
mesh
2.0
product.
So
scott
over
to
you.
H: So first, you can see in our dashboard a multi-cluster setup. I have two clusters here, each with some set of services and gateways running. Both those clusters are actually partitioned by workspace, which is our primitive for multi-tenancy as well as for grouping applications or application services, and we can see here a bookinfo and a gateway workspace, which both claim a namespace in each cluster.
H: These ingress gateways are taking traffic in and sending it to our product page, which you can see there. This is achieved through our gateway API, which I won't take too much time to show, since we don't have a lot of time, but we can see we are providing a gateway on 3210 and 3220, which are two different ingress gateways, one for each cluster.
H: Our details pod in cluster 2 is in a crash loop, so traffic cannot go there. We can repair the situation by using something called a virtual destination. Because we know we have a working service in one cluster, we can deploy a virtual destination that selects all the services that are alive, forwards traffic to them, and load-balances across them from a global hostname. So that's all you have to create here; let's go ahead and create it.
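A virtual destination along the lines Scott describes might be sketched as below; the hostname, labels, and port are illustrative assumptions rather than the exact demo resources:

```yaml
# A global hostname that load-balances across healthy details services in any cluster
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: details
  namespace: bookinfo
spec:
  hosts:
  - details.mesh.internal   # global hostname callers use
  services:
  - labels:
      app: details          # selects every matching service across clusters
  ports:
  - number: 9080
    protocol: HTTP
```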
H: We're going to run through this script, which has already created our route table and gateway, as you can see here, so it's allowing traffic on both clusters to come in with just a single set of resources. We're going to apply that: yes, we're going to create our details destination, and now we're also going to create a route table to shift traffic from the internal kube DNS name to the virtual destination hostname.
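The route table that shifts traffic from the in-cluster DNS name to the virtual destination could look roughly like this; the field names are approximate and the hostnames are assumptions:

```yaml
# Redirect calls to the kube DNS name onto the multi-cluster virtual destination
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: details-shift
  namespace: bookinfo
spec:
  hosts:
  - details.bookinfo.svc.cluster.local  # existing in-cluster name
  http:
  - forwardTo:
      destinations:
      - ref:
          name: details
          namespace: bookinfo
        kind: VIRTUAL_DESTINATION       # send to the virtual destination instead
```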
H: Now we're getting the HTTP response here. Both of these services are in the mesh; however, this one has not been given any kind of explicit permission to talk to the other. So what we're going to do is update the settings here on our workspace to enable something called service isolation, which is basically going to prevent services outside of our workspace from talking to services inside the workspace without explicit permission.
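Enabling service isolation is typically a small settings change on the workspace; this sketch assumes a WorkspaceSettings object with approximate field names:

```yaml
# Deny traffic from outside the workspace unless it is explicitly exported
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: bookinfo-team
  namespace: bookinfo
spec:
  options:
    serviceIsolation:
      enabled: true
```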
H
Yes,
here
it
is
we're
getting
our
back
denied
from
the
server
and
that's
just
happening
automatically,
because
this
service
is
outside
of
the
workspace.
So.
F: As a software company and an open-source organization, we considered doing it ourselves and using the open-source route. The team has, you know, huge interest in the whole networking and security stack, and we also wrote a number of MVPs with projects such as Ambassador, Traefik, Citrix, and Contour before really choosing Gloo Edge as our solution. So our challenges are large, and the scope of the project is also large, and we looked for a partner that could live up to those challenges.
B: Something like Linkerd would be fine, Linkerd 2: something lighter-weight that you can just work with in a very simple way, without too much complexity, without having to rely on any of the Istio or Envoy primitives or really having access to them. You can do something like that just fine.
D: We looked at some competitors, some of whom maybe had their own kind of proxy underneath, went through the motions of fulfilling our requirements, and came up with Solo as actually the only one truly fulfilling all our requirements, which were not small, to give an idea.
D: We have this whole array of microservices, which are multi-tenant, and we have single-tenant information passing through them. There weren't any competitors (maybe the landscape is different today, but at that point there weren't any) that were able to give us information about one tenant talking to a multi-tenant component. And with a bit of complex routing we could actually say: okay, this tenant is performing this action; we could do rate limiting, and we could get information from that. So that was a big part of choosing Solo.
B
If
you're
tightly
integrated
with
aws
and
the
aws
environment,
what
was
an
alternative
that
we
had
attempted
to
implement,
at
least
in
our
non-production
environments?
Initially
was
amazon's
app
mesh
and
aws
app
mesh
seemed
to
us
like
a
great
choice,
because
it
was
simple,
relatively
easy
to
use.
Well
integrated.
The
blockers
that
we
faced
were
really
when
it
came
down
to
some
of
these
interesting
edge
cases
related
to
external
authorization
and
again
those
requirements
for
very
specific
requirements
for
both
on
the
management
plane,
which
is
for
app
meshes.
B: For App Mesh, the management plane is just managed within AWS, and some of our specific requirements, for instance our federated trust model (we were trying to implement SPIFFE and SPIRE with App Mesh), hit some roadblocks there. We also attempted to utilize external authorization through Envoy as an Envoy filter, and we found that we weren't getting the exposure to Envoy that we needed in order to implement that. Specifically, there were some issues we faced with things like forwarding the actual cert details from one listener to another.
D: We did an early adoption of an Envoy-based proxy, but noticed that architecturally there were some choices made. I read articles from, I believe, your CTO, and those were great articles, very concise, to the point, saying: okay, we made these architectural decisions. For me it felt like: I can understand this, although I'm not an expert on Envoy; otherwise we would have built it ourselves. We understood the expert level that was put forward in these articles, and that was the first step that put Solo on the list, and Gloo in particular.
D: The documentation was very concise, and that's also something we felt as we went through our list of requirements: how quickly do we get to the point of fulfilling that list?
B: As I'm sure everyone in the industry is well aware, there's only so far you can go working with a very large company that's on its own roadmap and its own pace, and with which you don't have a very strong, almost personal, relationship. As a smaller company, or even a mid-sized company, it's difficult to really have that kind of strong relationship with the vendor.
B: Tetrate Service Bridge was another one we had looked at, and what we found with Solo itself and with Gloo Mesh was the responsiveness of the support and the focus on building partnerships with the actual customers and clients: seeing it as an interactive relationship, rather than simply "you pay us money and we give you this thing, and maybe we implement the features you need or want, or maybe we don't, and that will change your plans as well." We picked Solo.
C: Because we were looking for an Envoy-based solution: we believe that's the future, and I think Solo has something to do with that future, so we think that is really where we should be looking for a lot of this tooling. We did look at other options: we looked at all of the Envoy tools and some of the other leading tools, and we ended up deciding that Gloo was the one that best fit our needs.
F: Solo, I think, was generally hands down the best support for our requirements for rate limiting across multiple clusters, and in general we like the architecture of Gloo. We like the product maturity level, we like the ease of configuration and the whole management plane, and we like the support for dynamic rate-limiting configuration; that's very important for us.
F
Frankly,
we
like
the
sales
team.
I
know
that's
unusual
to
say
we
like
the
attitude.
The
customer
experience
we
got
in
the
very
beginning
was
a
factor.
You
know
we
were
given
access
to
very
good
quality
field
engineers.
I
think
that
compared
to
some
of
the
other,
companies
was
a
major
factor
that
we
had.
You
know
this
direct
contact
that
allowed
us
to
make
a
complete
mvp
and
do
that
on
our
end
and
with
solar
very
quickly.
F: With that kind of access to Solo engineers, we were able to develop the prototype very quickly and discuss any topic related to Gloo and Envoy, and of course that was extremely important for us; we didn't have all the experience we needed in the beginning. And then, no surprise, pre-sales turned into customer support, and even after signing the contract we got an improved quality of support.
E: So we chose Solo, and especially the Gloo Edge product, for multiple reasons: because it's using next-generation technologies, to be honest; for the openness of the platform and the interoperability with existing solutions, like for authentication, for logs and tracing, and the service mesh integration as well. And the last point is the pricing model of the solution, which fit our business model: starting with a small infrastructure, with the ability to scale.
D
The
biggest
reason
for
us
that
we
adopted
solo
is
because
we
set
some
requirements
for
ourselves
and
with
these
requirements
in
mind,
we
looked
at
envoy-based,
proxies
or
gateways.
I
mentioned
that
also
to
the
people,
I'm
in
contact
at
solo,
that
what
trigger
does
is
and
why
solo
was,
from
the
start,
actually
already
a
step
ahead.
B: It's based on open-source components underneath; Gloo is, and Envoy is, open source. So what value, then, do you get from going to enterprise support? The way I approached that problem, from a cost perspective, was simply put: if we took the same amount of resources and money that we would allocate to an enterprise agreement and relationship with Solo and hired engineers instead, would we be able to achieve the same product, with the same complexity, meeting all of our hard requirements?
B
It's
more
of
a
question
of
the
access
that
would
give
you
to
really
domain
specific
experts
in
the
field
to
be
able
to
then
advance
the
product
and
grow
and
the
product.
Knowing
that
the
product
will
grow
with
your
organization
as
well,
rather
than
just
simply
be
a
product
that
you
grab
off
the
shelf
and
try
to
implement.
F: The conversations I had with the Solo team, with Idit, with Chris, of course: these are extremely impressive individuals, but also just great cheerleaders for the product, and they can go very deep into the technology. Very impressive, and sort of similar to how Ory believes software companies should tick, and so we look forward to building on those relationships.
B: We're still fairly early in the process of onboarding our applications to the service mesh. So far, I've found Solo to be very responsive. I really appreciate the ability to have both direct, synchronous communications as well as asynchronous communications, where we have our own Slack channel on Solo's Slack.
B
We
also
have
access
to
then
the
other
public
channels
within
that
being
able
as
well
to
reach
out
and
have
engineers
who
are
familiar
with
our
company,
who
are
familiar
with
our
organization
who
we
interact
with
on
a
frequent
basis,
rather
than
sort
of
a
revolving
door
of
engineers,
that's
important,
being
able
to
also
work
with
engineers,
with
specific
use
cases
and
scenarios,
or
even
bugs
that
we
will
find
or
additions
and
improvements
that
we
might
notice.
It's
going.
E
From
internal
platforming
that
we've
got
in
schneider
electric,
where
we
are
deploying
solar
products
to
make
security,
routings
and
api
availability
to
customer
digital
solutions,
kind
of
project,
I'm
working
on
actually
and
where
we
are
using
sort
of
products,
adding
more
value
and
without
disturbing
the
developers
being
ensuring
the
security
and
the
ability
to
share
our
apis
and
product.
With
our
end
customers.
Once.
F: I think we're really happy that we were able to implement our requirements pretty quickly. Ory is up and running as a global network, so we did this more or less on the fly, and it was important for our team that we had no downtime. The support offered by Solo was extremely valuable, and it really allowed us to progress quickly, not only in the development but also in the whole deployment.
B
In
particular,
because
of
the
way
that
our
environment
is
set
up,
we've
hit
quite
a
few
interesting
edge
cases
that
it's
been.
You
know
unique
experience
being
able
to
interact
with
solo
engineers
to
be
able
to
resolve
that
being
able
to
work
with
us.
On
that
also,
the
absolute
you
know,
depth
of
knowledge
and
domain
specific
knowledge,
that's
available
with
the
the
engineers
at
solo
is
just
incredible.
C: The relationship has been really strong. We are thrilled to be part of an open-source community, both on the Envoy side and on the Gloo side, and we have felt that the support we've gotten from the Solo team has been exceptional: very quick to reply, very expert in what they're accomplishing, and very accommodating and understanding of the challenges that we're facing.
B: We're using Gloo Mesh Enterprise. One of the things we also appreciate is the Gloo Mesh gateways that are part of it: we didn't need all of the functionality of Gloo Edge, but we can definitely use some of it, and that functionality from Gloo Edge is inherited within the Gloo Mesh gateway as part of enterprise Gloo Mesh. Our main priorities, and our main focus for the service mesh itself, are to be able to secure our internal network and make sure we have appropriate service-to-service authentication and machine-to-machine authorization.
F: We've had great support for the global rate-limiting requirements that we have. I believe that's a real differentiator in how Gloo and Solo put that together. As an example, we have support for rate limits per host; we can go to different tenants or endpoints or paths, and those limits are also shared between clusters, so that gives us a lot of flexibility.
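A per-host limit of the kind described might be sketched with a raw rate-limit configuration in the Gloo style; the API group, descriptor key, and numbers here are illustrative assumptions:

```yaml
# Count requests per host and cap each host at 100 requests per minute
apiVersion: ratelimit.solo.io/v1alpha1
kind: RateLimitConfig
metadata:
  name: per-host-limit
  namespace: gloo-system
spec:
  raw:
    descriptors:
    - key: host
      rateLimit:
        requestsPerUnit: 100
        unit: MINUTE
    rateLimits:
    - actions:
      - requestHeaders:
          headerName: ":authority"   # the request's host header
          descriptorKey: host
```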
F: It simplifies the way we set up our proxy infrastructure because it doesn't require us to do that in multiple places, so that's also very good. The rate-limiting support for persistence is very important for us as we scale clusters and deploy Kubernetes in a larger way. These are the points that make it extremely beneficial to work with Solo, and especially with Gloo, in this particular type of network deployment. So I think that's a good summary of our experience so far.
E
So
far
we
really
enjoy
working
with
the
solo
team
and
their
product
from
the
sales
team
to
the
technical
engineers.
We
are
supported
by
skilled
person,
with
the
ability
to
be
guided
and
to
take
some
decision.
A
fun
fact
is
that
we
started
a
an
evaluation
with
a
poc
with
solo
products,
and
now
we
are
on
a
trend
where
most
of
our
big
digital
projects
are
getting
solo
product
in
so
it's.
It
means
that
we
so
far
we
are.
We
are
much
more
than
enjoying
the
product
overall.
B
It's
been
a
very
positive
experience
so
far,
a
very
responsive
team.
At
solo,
I'm
looking
forward
to
continuing
to
work
with
solo
thinking
of
it
again
as
building
our
own
organization
in
kind
of
a
symbiotic
way
with
the
service
mesh
as
a
platform,
and
you
know
seeing
where
it
can
go
from
that.
So
I'm
excited
and
it's
been
a
very
a
very
helpful
healthy
and
you
know
functional
relationship.
D: Keep doing what you are doing. I'm also looking forward to the future. We've already talked about wanting to adopt, or look into, service mesh and the advantages of service mesh. We're currently focused on the edge API gateway, but we're also looking into eBPF and Istio, and in that story I already feel that Solo will again bring us answers to questions we have, so that's always nice.
B: I deeply value, I deeply appreciate, the fact that Solo is willing to see the value in smaller, more dynamic, leaner companies that are out there doing really amazing things and really trying to push the envelope, rather than just saying: well, we're going to focus our entire model on courting these larger agencies or organizations.
G
Thank
you
all
for
joining
us
today.
On
this
keynote
we
have
an
action-packed
agenda,
lined
up
for
you
with
exciting
sessions
and
workshops,
so
see
you
all
there.
I
also
look
forward
to
continued
engagement
with
you
all
on
a
community
slack
github
and
the
various
events
and
workshops
we
have
planned
throughout
the
year
and,
more
importantly,
see
you
all
next
year
at
solocon
2023.