Description
With the ever-growing popularity of Kubernetes, the K8ssandra open source project is intended to bring Cassandra's advantages to the cloud and help simplify operations.
Website: https://www.datastax.com/
Organized by @Microsoft @kubermatic7173 @SysEleven
Thanks to our sponsors @CapgeminiGlobal, @gardenio, @sysdig, @SUSE, @anynines, @redhat, nginx, serve-u
So essentially, what I'm going to do here is show you a demo, and there is a story behind the demo. The story starts at KubeCon North America last year, in October 2021 or thereabouts. I wanted to show a K8ssandra demo. K8ssandra is our Kubernetes version of Cassandra: Cassandra running in Kubernetes.
I wanted to show a demo, but not a plain old Cassandra demo. For KubeCon North America I wanted to show a multi-region K8ssandra demo. I was able to get it running on GKE, because GKE networking is probably the easiest to set up, especially from a Kubernetes perspective, and I am not a networking geek by any means. Layer 4, layer 7: I have no idea.
I'm an application developer. I just want to make things as simple as possible and make the application work, and many of you might be in the same camp. Some of you might be networking experts, and if you are, I'd like to talk to you. That's how the whole thing started. After that, fast forward to AWS re:Invent, which happened in November 2021.
Basically, I wanted to use the same demo, but obviously I could not do it on GKE, which wouldn't be received well at an AWS conference, so I repurposed it for EKS. But on EKS the networking was a lot more difficult to do. Luckily for me, I found a project called EKS KubeFed. If you've used KubeFed, KubeFed has kind of waned in popularity, but EKS KubeFed is fantastic.
It does everything for me: VPCs, networking, routing, transits, whatever. All of that is automatically taken care of for me, so I used that and did the demo. But in the meantime, as a company, we were also realizing the need to do something multi-cloud, and realizing that our operator, cass-operator (also called the Cassandra operator), had some limitations in terms of doing multi-cloud.
Similar to what Kubernetes providers do, you have a concept of a control plane, and then you can install your clusters in different data planes and so on. The nice thing about Cassandra is that once you enable the networking, it's able to talk to itself, to the other clusters, and build a bigger cluster. So what I did was go and get some help on networking.
Essentially I set up the networking, and I'm able to do it repeatably thanks to a company called Aviatrix. I don't know if any of you use them; pretty cool. Basically they wrote me some Terraform scripts, and I'm good at running Terraform scripts; I'm not good at doing networking. I wanted something repeatable, and once I have that, it's very simple to set up the multi-cluster.
The point I'm trying to make here is that you may have a multi-cloud Kubernetes cluster, but unless your application can take advantage of it, like being on the cloud, it's really of no use. Cassandra, on the other hand, is able to do that automatically for you, and I will talk about that journey, a roughly six-month journey, in the next 15 minutes. Okay, so: I am a developer advocate. I go by Rags, because a lot of people around the world have a tough time pronouncing my name, and I have no idea why, so I just go by Rags.
I am a mechanical engineer by education, but that was a lot of years ago. The one thing I can associate with software is that I like to see code in motion, so I like to do demos and play with hands-on labs, and I work for a great group; all that we do is try to level up each developer. We run free workshops every week, and so on, and I love to teach and communicate. I am a big believer in the inner loop of development. Have you heard of the inner loop of development?
Basically, it's where you keep doing the same thing over and over again: edit, compile, debug; edit, compile, debug. Then finally, when you're reasonably happy with it, you send it off to the CI/CD loop. I came from the Cloud Foundry background, and Kubernetes is not there yet, but there are many tools that make it easy.
In fact, I do another talk on running Quarkus, containerizing it, and running it on Kubernetes, which is pretty cool and is the other thing I like to do as well. We are actually a pretty big group, believe it or not, because we run quite a few workshops.
So if you're interested in basics, like what NoSQL is, I'll point you at those workshops; there are quite a few of them. The agenda, which I hope to cover in the next 20 minutes or so, starts with an intro, because once I tell a story, I think I can capture your attention. That's my goal, but I'll fast forward through this.
The term NoSQL is referred to as "not only SQL," as opposed to the traditional Oracle model, where it's all about scaling up and up and up, vertical scaling. NoSQL is really about horizontal scaling; that's where it shines. And it's kind of cloud native; in fact, it was cloud native even before there was a thing called the cloud, because if you think about it, it really started in 2009, when the cloud was very much in its infancy.
So it's really about horizontal scaling, but as we know, anything comes with a price. With horizontal scaling on commodity hardware, you have to sacrifice one of three properties that are key to any distributed system: consistency, availability, and partition tolerance. All of them hold while the system is working fine, meaning there is no failure, but the moment there is a failure, you have to give up either availability, partition tolerance, or consistency. You can have only two of the three at any point in time. That's what the CAP theorem really means.
This is also referred to as Brewer's conjecture; it has since been formally proven, and it is certainly practically proven. Basically, availability means the system always responds. Consistency means every read receives the most recent write. Partition tolerance is the ability to operate despite network failures, which are inherent in any distributed system. What Cassandra does is pick availability and partition tolerance; other NoSQL databases make different trade-offs.
From a NoSQL database perspective, giving up partition tolerance is even more evil than giving up consistency, because if you give up partition tolerance, the consistency models are going to be so broken that even my grandmother could recognize that the data is not consistent. In an eventual-consistency model, on the other hand, it's okay if I'm not on the leaderboard for the first 20 seconds, even though 20 seconds ago I was on top of the leaderboard.
Cassandra also has a masterless, peer-to-peer architecture, which makes it very suitable for multi-cloud, because there is no such thing as a master. There is election of a leader, obviously, because that's how consistency is achieved, but that is very dynamic, and you can read up on all of this in some resources that I'm going to point to. Essentially, what Cassandra does is distribute data across multiple nodes, and it's good to go. You can set the consistency level; we also call it "configurably consistent."
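A minimal sketch of what "configurably consistent" looks like from the client side, assuming cqlsh and a hypothetical demo.users table (both the keyspace and table are made-up names for illustration). The consistency level is chosen per session or per statement, not fixed cluster-wide:

```shell
# Write a small cqlsh script to a file for review; run it later against a
# live node with: kubectl exec -it <cassandra-pod> -- cqlsh -f consistency-demo.cql
cat > consistency-demo.cql <<'EOF'
CONSISTENCY LOCAL_QUORUM;          -- a quorum of replicas in the local DC must answer
SELECT * FROM demo.users LIMIT 1;
CONSISTENCY ONE;                   -- fastest and weakest: any single replica suffices
SELECT * FROM demo.users LIMIT 1;
EOF
```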
I think I already talked about this. The goal behind this slide is not to say that these big logos are using Cassandra; the point is that it's really about scale. I've talked to some of these customers, who say they have not rebooted their Apache Cassandra clusters in about eight years.
I came from the Microsoft world, where I had to reboot quite often, and these days even the Macs are not reliable anymore. But in any case, the point I'm trying to make here is that it's really about scale. Cassandra originated at Facebook, and Apple is still one of its biggest contributors. At DataStax we have the main maintainers of Apache Cassandra. One of them, Christopher Bradford, wrote a paper in the very early days of running databases on Kubernetes that basically pooh-poohed the idea. Long story short, he now works for DataStax as a product manager, and he actually talks about how good it is to run databases on Kubernetes.
We took the position that once it's on Kubernetes, it can really run anywhere, and Cassandra as an application has minimal requirements. Like I said, as long as the nodes are able to talk to each other, they can form a cluster, no problem. And Cassandra has a similar concept of what's called a rack and a data center.
That's kind of similar to what you would see in a cloud provider, and you can really install it pretty much anywhere you want. At this very moment, I think I have access to clusters running on pretty much all of these: k3s (though I don't think I have that one), Minikube of course, and Civo. If you haven't used Civo, it's a Kubernetes provider from London, I believe, and they have a pretty good offering if you want to give it a try. And then of course GKE, EKS, and AKS.
What we do underneath is install all the components for you via an operator called cass-operator, and you can see here that cass-operator is the one that essentially does all of this. Stargate is the unifying API. Then we have Prometheus and Grafana, Reaper for repair, and Medusa for backup and restore, because these are the things that most operations teams have to deal with, whether you're installing on the cloud or on premises.
All of this has to be done, and it's automatically done for you; Minio is there for backing up your data, and so on. And it's very simple: helm repo add, then helm repo update, then helm install. As simple as that.
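Spelled out, that single-cluster K8ssandra v1 install looks roughly like this. The repo URL and chart name follow the public k8ssandra Helm repo, and the release name is arbitrary; the commands are written to a script here so they can be reviewed before running against a real cluster:

```shell
cat > install-k8ssandra.sh <<'EOF'
#!/bin/sh
set -e
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
# One release brings in cass-operator, Stargate, Prometheus/Grafana,
# Reaper, and Medusa, as described above.
helm install k8ssandra k8ssandra/k8ssandra
EOF
chmod +x install-k8ssandra.sh
```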
But the problem with that approach is that it works well for a single cloud provider and region. Multi-region, multi-cloud, multi-whatever is not easy to do that way; we realized we had pushed Helm to the limit.
So what we built was an operator on top, called the K8ssandra operator, primarily for these reasons. Cassandra is designed for multi-region: each node in the cluster maintains the full topology.
The nodes talk to each other; they gossip. Not only does a node say that it's alive and well, it also shares information about the topology, so that if that particular node goes down, the information is still available to the rest of the cluster, and the cluster is able to self-heal.
A
So
data
is
automatically
asynchronously
replicated.
The
cluster
seems
like
it's
a
homogeneous
cluster,
so
even
though
it
may
not
be
right
and
clients
can
be
configured
to
automatically
root
traffic
to
the
local
data
center,
because
you
know
all
the
information
is
available
right,
you
know
it
constantly,
updates
it
and
so
on.
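That cross-data-center replication is expressed in CQL with NetworkTopologyStrategy, naming each data center and its replica count. The data center names have to match what the cluster reports; demo, dc1, and dc2 here are illustrative:

```shell
# Written to a file for review; apply later with:
#   kubectl exec -it <cassandra-pod> -- cqlsh -f create-keyspace.cql
cat > create-keyspace.cql <<'EOF'
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,   -- three replicas in data center one
    'dc2': 3    -- three replicas in data center two
  };
EOF
```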
Sometimes people call Cassandra chatty, but it's optimized chatty. Kubernetes, on the other hand, was not designed for multi-region, and there are a number of reasons for that. I recently gave a networking talk at KubeCon Valencia with Tim Hockin, who was one of the original designers of networking for Kubernetes at Google.
He and I used to work at Sun Microsystems. He basically mentioned that one of the biggest constraints the networking group accepted initially was to provide individually addressable pods: each pod should be individually addressable, and as long as that holds, everything is fine. But if it doesn't, how do you deal with that for multi-region?
How do you deal with it for multi-cloud and all that? That's entirely up to you, and I'll show you how you can do it, but then I'd have to do a lot of it myself. If instead there were an operator available to make it happen, I would use that, and that's the case with our operator: it has support for multi-region, multi-data-center. The control plane creates and manages the objects; the control plane can live in one cluster, and then you install the data planes elsewhere.
And what you do here is inject the context: you inject the contexts of east and west into the K8ssandraCluster, and that's pretty much how the entire cluster is formed. The bigger cluster is formed as long as networking is enabled between all of these, and you're good to go. The operator creates a CassandraDatacenter for each object, and essentially the context tells the operator in which cluster to create the objects.
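A sketch of what such a K8ssandraCluster looks like, with one datacenter per data-plane cluster selected via k8sContext. The context names, sizes, and server version are illustrative, and the exact field layout should be checked against the k8ssandra-operator CRD reference:

```shell
cat > k8ssandra-cluster.yaml <<'EOF'
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.4"
    datacenters:
      - metadata:
          name: dc-east
        k8sContext: east   # data plane registered with the control plane
        size: 3
      - metadata:
          name: dc-west
        k8sContext: west
        size: 3
EOF
# kubectl apply -f k8ssandra-cluster.yaml   # run against the control-plane cluster
```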
So this is the K8ssandraCluster and the client config. Essentially, we inject the config into the K8ssandraCluster, and then it's good to go; we'll see the demo in a second.
During my journey on EKS I used EKS KubeFed, and it set up everything for me. It set up a bastion host in us-east-2, at 172.20.0.0, and then it set up cluster one in eu-west-1, which I believe is Dublin.
That one is at 172.21.0.0. I don't know how to control all of that, but I'm not a networking guy and I was not doing this for production, so what the heck: go ahead, install it anywhere.
Cluster two was in eu-central-1, which I believe is Berlin or Frankfurt, at 172.22.0.0/16. And it set up everything: the VPC peering, all the stuff I didn't have to worry about at all. It just works.
I'll show you how this works in a second. This is the same K8ssandraCluster; I don't know if you can see it. Basically, what I'm doing is injecting the contexts of the two clusters into this K8ssandraCluster CRD, which essentially does all of this for me.
All right.
So there are two clusters here, part of the federated cluster: one is fed-2-1 and the other is fed-2-2, and currently we are in fed-2-1. What I'm going to do is get the nodes and see how they look. You can see the eu-west-1 nodes here; it's all running in eu-west-1. And somewhere here, yes, this is the one I was more interested in.
172.21: you remember that from the original diagram. So what I'm going to do is set the context to cluster two and show you the same thing again.
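That inspection step, sketched as a script. The context names follow the demo's fed-2-1/fed-2-2 naming (an assumption about the exact spelling), and topology.kubernetes.io/region is a standard well-known node label:

```shell
cat > inspect-nodes.sh <<'EOF'
#!/bin/sh
set -e
kubectl config use-context fed-2-1
kubectl get nodes -L topology.kubernetes.io/region -o wide   # internal IPs in 172.21.0.0/16
kubectl config use-context fed-2-2
kubectl get nodes -L topology.kubernetes.io/region -o wide   # internal IPs in 172.22.0.0/16
EOF
chmod +x inspect-nodes.sh
```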
Now, if I do this, I should see 172.22 here. This is eu-central-1, Frankfurt, and all of this was automatically done for me via EKS KubeFed. Very, very cool. Okay, now from a Cassandra perspective: those were the nodes, so what I can do is just show you a kubectl get pods, because we have to do that, right?
You'll see that all of these are running; these are all the different DCs, on rack1, rack2, rack3, and so on. And all that I need to do to look at this Cassandra cluster is run this thing called nodetool, which most Cassandra admins will know, inside a kubectl exec, specifying the DC. In this case, since I am in cluster two, I could either stay in two or go back to one.
Where am I? I'm lost here. dc2? Okay, let's try that. Basically, what it's showing is data center one and data center two, with a status for each node: whether it's up or down, and what state it is in. As you can see here, 172.21 is all eu-west and 172.22 is all eu-central, and that's pretty much it.
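The nodetool invocation described above, as a sketch. The pod and container names are illustrative; in a K8ssandra deployment the Cassandra container is typically named cassandra:

```shell
cat > nodetool-status.sh <<'EOF'
#!/bin/sh
kubectl exec -it demo-dc2-default-sts-0 -c cassandra -- nodetool status
# Output has one section per datacenter and one line per node, e.g.:
#   Datacenter: dc1
#   --  Address      Load  Tokens  Owns  Host ID  Rack
#   UN  172.21.0.12  ...
# "UN" = Up/Normal; "DN" would mean Down/Normal.
EOF
chmod +x nodetool-status.sh
```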
So you can see here... actually, let's do this; it makes it a lot easier, because as you can see, I have too many things going on. There are three clusters: one is in AKS, another is in EKS, and the third one is in GKE. The cool thing is that these three are going to be part of one big cluster. Actually, only two are going to be part of the big cluster: Azure is where the control plane lives, and from there I'm going to install into GKE and EKS.
All that I needed to do was essentially set up the K8ssandraCluster. To be able to visualize this, we can just drop into Lens, because all of this is locally available to me. If you look at the Azure cluster, you can see the Cassandra control-plane flag is true; if I do the same thing with GKE, for instance, you can see the Cassandra control-plane flag is false.
It's false there as well. So everything except Azure is a data plane, and Azure is the control plane; the others are all data planes. I could really make this as big as I want. The one thing we probably want to take a look at is the K8ssandraCluster itself.
There are the different contexts and the config: the ClientConfigs for the EKS cluster and the GKE cluster live in my AKS cluster, because that's where I'm running the control plane and installing into these different data planes. What I'll do is one final thing, and it's pretty cool, because it all happens from that one AKS cluster. Hopefully the control-plane value is going to come back as false on the others, because that's exactly what we just saw. I have already set this up, so I'm going to run my nodetool again.
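For reference, a ClientConfig of the kind mentioned above looks roughly like this. The k8ssandra-operator ships a helper that generates these from a kubeconfig, so treat the field names here (contextName, kubeConfigSecret) as an approximation to verify against the operator's docs:

```shell
cat > clientconfig-gke.yaml <<'EOF'
apiVersion: config.k8ssandra.io/v1beta1
kind: ClientConfig
metadata:
  name: gke   # created in the control-plane (AKS) cluster
spec:
  contextName: gke
  kubeConfigSecret:
    name: gke-kubeconfig   # secret holding the data plane's kubeconfig
EOF
```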
And you can see here that my EKS DC and my GKE DC are right there. Thanks to Aviatrix, the network is segmented into 10.1, 10.2, and 10.3, and that's roughly how I keep the networking flat and able to talk across clusters, all the things Kubernetes expects. One final thing: let's take a look at all the pods, and this will give you an idea of how this is working.
It's the end of the day, so let's end on a successful note. Let me run this command again, and you can see here that the GKE DC and the EKS DC are part of the bigger cluster. All that I needed to do was inject the contexts; I did not have to do anything else manually. In the KubeFed option, if that's still up there, I had to do some things manually.
What I had to do there was install the seeds myself; this is how the gossip protocol works, remember. I had to set up 172.22 on one side (I could have done it the other way around; it didn't make any difference), and I had to do that manually. With the K8ssandra operator I don't have to do any of that; it does it for me automatically, and that's the direction we are moving in.
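For comparison, that manual KubeFed-era step looked roughly like this: pointing one cluster's Cassandra at seed nodes in the other cluster so gossip could join them. In the K8ssandra v1 Helm chart this was a values entry; additionalSeeds is my best recollection of the key, and the IPs are illustrative:

```shell
cat > dc2-values.yaml <<'EOF'
cassandra:
  additionalSeeds:
    - 172.21.0.10   # seed node in the eu-west-1 cluster
    - 172.21.0.11
EOF
# helm install k8ssandra k8ssandra/k8ssandra -f dc2-values.yaml
```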
I'm going to skip all this. We have a bunch of videos on K8ssandra, especially if you've never heard of it; if you want to know a little bit about Cassandra, you can get acquainted with it there. We are also doing a lot on application development, like Spring and whatever you're interested in, mainly cloud native. We also hand out badges, because we grade your homework. It's kind of cool; a lot of people like the badges, and they put them on LinkedIn.
So if you're like me and want to do that, attend one of our workshops and do the homework. And with that, I'm done. Thank you.