
From YouTube: Knative community meetup #3

Description

In this meeting, we hear working group updates and take part in two short demos, "Image processing pipeline" and "BigQuery COVID-19 pipeline", led by Mete Atamel, developer relations at Google.
A: Okay, so I'm starting the recording of this meeting. Hello, everyone, my name is Maria Cruz. I am a program manager in the Google open source program office, and this is the Knative community meetup. I think we are going to go ahead and start with working group updates. The first one is an update from the autoscaling working group, and I'm going to be sharing the agenda here in the chat box, so if anybody needs it, please make sure that you can access it.
B: So we sent out a few emails to the mailing list asking if people are actively using the HPA-plus-concurrency or requests-per-second metrics. Please speak up in those email threads, because if we don't get any signal of usage, it will be deleted in an upcoming release. It's all for quality and stuff like that. So there we go.
A: Okay, the next update is from the documentation working group, and it's an update that I'm sharing with you all. We just finished the blogging guidelines, which are in the process of getting published to the GitHub repository. These blogging guidelines are supposed to serve as, basically, a guide on what kind of content we would like to see on the Knative blog. So if you have any blogs that you would like to share there, you are most welcome to submit them through that process.
G: Here we go, I'm here. Can people hear me? Yes? Yeah, awesome. Essentially, where we're at is that we're playing around with two options right now. One of them is breaking down steering composition so it's based on contributions, and the second one is based off of elections. So we're just figuring out what the trade-offs are between the two options. I think one thing I would actually pose to the folks here is: is there an option y'all would like to see, one way or the other?
E: I see a question in the chat that is simple, so I'll just answer it quickly. The question is: are the steering meetings open? The ones that we have on Monday are not open, but there's no reason that we can't have an open one at another time, if people have an interest in that. Is that something that folks would be interested in? One of the main takeaways I've had personally recently is that more frequent and closer communication with the community is something that would probably be good for everybody.
E
If,
if
folks
would
like
to
do
that,
like
please,
please
let
us
know
in
terms
of
sometime
like
that.
I
have
just
sent
an
invite
to
K
native
dev
for
Monday
afternoon
to
talk
about
the
scope
proposal
for
K
native
project
in
the
functions
working
group
retrospective.
That
was
identified
as
something
that
perhaps
could
be
improved
in
different
ways.
So
I
will
definitely
be
there.
I
think
other
folks
from
steering
will
be
there
and
there's
time
set
up
to
talk
about
that
on
Monday.
A: So it sounds like that's a plus one: "I am new to being active rather than just a user, so trying to understand the inner workings." Okay, so it sounds like what you're proposing, Paul, is open steering committee meetings, open to the community, or more communication channels. Or is it feedback on a specific segment?
E: I am saying we have time set up on Monday to talk about the proposal for the Knative project scope, and I'm also receiving the message from this meetup that folks would be interested in having at least one public steering meeting. So in the TOC steering questions channel I'm about to start a thread asking: when do people want to have that, and what do you all want to talk about? We can take the rest of that setup offline into the TOC steering questions channel in Slack, yeah.
A: Everybody, we have a survey for this meeting, and it helps us make it better, so make sure that you take it. I'll share it here in the chat; this is where you can tell us what you think about this meeting, and any new ideas as well. So next up is Mete Atamel, who is going to show us two demos.
I: All right, let's first make sure that everyone can see and hear me. Is that okay? Yes? Okay, cool. Hello, everyone. I see some names in the chat that I recognize, but for those of you who don't know me, my name is Mete Atamel. I'm a developer advocate at Google, in Google developer relations, and I'm based in London.
So thanks to everyone who worked on Knative; it definitely enabled me to write more cool demos, and every time I showed them, people seemed to like what they saw. Unfortunately, since March I've been pretty much at home, but on the flip side, this gave me time to build some cool demos. So I just want to go through a couple of those demos today.
Cool. Now, first, a shameless plug: I have this Knative tutorial on GitHub, where I basically try to keep things up to date. I think the latest version of Knative that I updated this tutorial for was 0.14. I'm well aware that there's a new version, 0.15, and I'm going to update it, probably tomorrow or next week. Basically, in this tutorial I show some basic use cases for Knative Serving, Knative Eventing, and Knative Eventing with Google Cloud, plus Tekton Pipelines for build and deployment, so feel free to check it out. But today I want to talk about a couple of these: the first one is the image processing pipeline and the second one is a BigQuery processing pipeline.
So let's just look at the image processing pipeline first. With both of these pipelines, what I wanted to do is build some kind of processing pipeline using eventing on Google Cloud, and I wanted these services to be chained, but chained in a way that they're independent of each other, so I can add them and remove them as I want.
The first of these pipelines is the image processing pipeline, and the idea here is that end users will save some images into a Cloud Storage bucket. Cloud Storage, for those who don't know, is Google Cloud's object storage service. So you save some image into an input bucket, and then you get those images processed and saved to an output bucket. It's something simple, and then I had some requirements.
I
First
of
all,
when
the
user
saves
the
image
I
wanted
it
to
be
filtered
I
didn't
want
any
kind
of
images
floating
in
the
pipeline,
so
this
filter
service
it
uses
vision,
API
to
determine
whether
the
image
is
safe
or
not,
and
I
will
I
will
talk
about
what
safe
means
as
we
go
through
the
code.
So
once
the
image
goes
to
the
filter,
then
the
filter
sends
a
message
out,
but
I
guess
first
I
should
describe
in
detail
like
what
happens.
So
the
user
saves
image
in
the
bucket.
I
Then
I
set
up
clad,
Google
Cloud
storage
source,
so
this
is
a
an
event
source.
That's
part
of
the
key
native
GCP
project
that
enables
you
to
listen:
Google,
Cloud,
storage
events.
So
when
the
user
says
'hey
the
the
file
that
generates
an
event
and
that
event
gets
pulled
into
the
decay
net
cluster
with
big
cloud
storage
source
and
then
I
just
make
that
to
pass
the
message
to
broker
and
I
have
a
default
broker
in
a
name
space.
So the message ends up in the broker in the namespace, and then the filter service has a trigger that will receive that message. It will basically know which image has been saved, and then it will make a call to the Vision API. In the Vision API, you can basically say: given this image,
I
Can
you
tell
me,
what's
the
likelihood
of
this
image
being
a
violent
image,
for
example,
and
or
what's
the
likelihood
of
this
image
being
an
adult
image,
so
you
it
gives
you
the
likelihood
or
like
four
or
five
different
metrics,
so
I,
just
look
at
that
and
I
just
say:
okay
as
long
as
this
message
message,
this
image
is
not
any
of
these
likelihood,
so
it's
not
likely
that
it's
it's
a
bio
image.
It's
not
like
whether
it's
another
image
and
so
on
and
so
forth
and
I
will
let
it
through.
That's one of the things that I like about Knative: this whole model of brokers and triggers. The broker is kind of like the backbone of the whole eventing pipeline, and then you can receive messages from it. You can receive certain messages by applying filters on your trigger, but you can also reply to messages, which makes it really easy, because when the filter receives the CloudEvent from the Cloud Storage source,
it can reply with a custom message to the broker, and then the broker figures out where to route it, which is nice. It makes it really easy to write the code.
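For reference, the receive-and-reply pattern he is describing looks roughly like this in Python. This is a minimal sketch using the CloudEvents SDK and Flask, not the demo's code (which is C#), and the event type, source, and data fields here are made up:

```python
# Minimal sketch of the receive-and-reply pattern described above, in Python
# with the CloudEvents SDK and Flask. The demo's actual services are in C#;
# the event type, source, and data fields below are made-up placeholders.
from cloudevents.http import CloudEvent, from_http, to_structured
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle():
    # Parse the CloudEvent that the trigger delivered over HTTP.
    event = from_http(request.headers, request.get_data())
    print(f"Received event: type={event['type']}, source={event['source']}")

    # Reply with a custom CloudEvent in the HTTP response; the broker then
    # routes it to any trigger whose filter matches this type.
    reply = CloudEvent(
        {"type": "dev.example.fileuploaded", "source": "filter"},
        {"bucket": "knative-images-output", "name": "beach.jpg"},
    )
    headers, body = to_structured(reply)
    return body, 200, headers

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```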
So then this "file uploaded" event is listened to by two other services, the resizer and the labeler. The resizer will receive the image, which is usually big, and it will resize it using ImageSharp. My services are in C#, so I use ImageSharp, which is kind of like ImageMagick but for C#.
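As a rough Python analogue of that resize step (the demo uses C# with ImageSharp; Pillow and the file names here are substitutions):

```python
# Rough Python analogue of the resize step, using Pillow instead of the
# C# ImageSharp library from the talk; file names are placeholders.
from PIL import Image

def resize_image(src_path: str, dest_path: str, max_size=(400, 400)) -> None:
    with Image.open(src_path) as img:
        img.thumbnail(max_size)  # shrinks in place, preserving aspect ratio
        img.save(dest_path)

resize_image("beach.jpg", "beach-resized.jpg")
```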
Similarly, the labeler will listen for the "file uploaded" message, and then it will use the Vision API to extract the labels out of the image, so what this image is about, and then it will save those labels as a text file to the output bucket. So given a single image, we will basically end up with three different files: the resized image, the resized image with the watermark, and then the labels of the image in a text file.
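A minimal Python sketch of the labeler's Vision API call (again, the demo service itself is C#, and the URIs are placeholders):

```python
# Sketch of the labeler's Vision API call in Python (the demo service is
# C#): fetch labels for an image and join them into the text that gets
# saved to the output bucket. The gs:// URI is a placeholder.
from google.cloud import vision

def label_image(image_uri: str) -> str:
    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = image_uri  # e.g. "gs://my-input-bucket/beach.jpg"
    response = client.label_detection(image=image)
    return "\n".join(label.description for label in response.label_annotations)

print(label_image("gs://my-input-bucket/beach.jpg"))
```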
Let's see what else we need to do. We inject the broker into the namespace by labeling the default namespace. Then I go through each of the services: the filter service, so the service and the trigger, then the resizer, then the watermarker, and then the labeler. I guess I can show you one of the code samples. So let's look at, for example, the filter.
If we look at the filter code, and this is C#, the program just listens on port 8080, but everything basically happens here: when we receive a POST request, I have this event reader, which basically reads the CloudEvent. So I read the CloudEvent, and then I have this bucket event data reader here; we have classes for everything in C#.
This bucket event data reader just looks at the data of the CloudEvent and then extracts the bucket name and the object name from there. Basically, we can ignore this next bit of code: I'm running this on Cloud Run as well, and in Cloud Run there's a strange bug where I had to check whether the bucket name is what I expect; it's not relevant here. Then, from the bucket name and the object name, I create a source URL, and I pass this to a method called "picture safe".
"Picture safe" basically uses the Vision client. This is the client to talk to the Vision API, and it will call DetectSafeSearchAsync, passing in the image URL, and this will return likelihoods. So as long as my image is not possibly adult, medical, spoof, racy, or violent, then I say: okay, this picture is safe. If it's not safe, I just don't return anything, but if it is safe, then I create an object, and this object is the bucket and the name.
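In Python, that safety check could look roughly like the sketch below; the demo's filter is C#, and the exact categories and threshold are assumptions based on his description:

```python
# Sketch of the filter's safety check in Python (the demo's filter is C# and
# calls DetectSafeSearchAsync). Rejects an image if the Vision API rates it
# POSSIBLE or worse in any safe-search category; the exact category list the
# demo checks is an assumption.
from google.cloud import vision

def is_picture_safe(image_uri: str) -> bool:
    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = image_uri
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    # Likelihood values are ordered VERY_UNLIKELY .. VERY_LIKELY.
    threshold = vision.Likelihood.POSSIBLE
    return all(
        likelihood < threshold
        for likelihood in (annotation.adult, annotation.medical,
                           annotation.spoof, annotation.violence,
                           annotation.racy)
    )
```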
Basically, this is going to be the body of the CloudEvent, and in the body I'm just saying: this is the bucket and this is the name of the object that you should care about. Then I write this out as a custom event; that's what the code does here. The event writer is the thing that takes care of the body, converts it into a CloudEvent, and then writes it out, but we don't have to look at the details of that.
So this is the filter, and the other ones are pretty much the same kind of setup. For example, if you look at the resizer: it receives the request, reads the CloudEvent, gets the bucket name and the object name, and then it downloads the image. Then it does some ImageSharp (or whatever library I'm using) magic to resize it, then it uploads it to the bucket, and then it sends another CloudEvent for the watermarker to act on.
But then if you look at the labeler, for example, it's pretty similar: the labeler will filter on the "file uploaded" event, because this is the event that gets generated by the filter, and it will only look for those. That's how you can make different services get different kinds of events. And this one is also getting the "file uploaded" event, and so on and so forth.
So let me just show you how this works. I'm in my terminal; I hope this is big enough.
Yes. I was worried for a second that something wasn't working, but yeah, the source is created. Then let me make sure that there's a broker: if I get the brokers, yeah, there's a default broker. And if I get the Cloud Storage source, hopefully that should be up and running. Yes. So now we are getting the events from Google Cloud Storage into the broker. Now we need to actually create our services. So let me create the service and create the trigger; this is for the filter. And then let's do the same for the labeler.
Let's do the same for the resizer, and the same for the watermarker, yeah. And then if you look at the services, well, there are a few other services, but the ones that we care about are the filter, the labeler, the resizer, and the watermarker. They seem to be running, and then if you get the triggers, you can see that all the triggers are ready. So I think it's going to work.
If we look at the pods, yeah, we have some pods running as well. So now let me go to the Google Cloud console, to the Cloud Storage area, and I have my bucket here, the Knative images input bucket, and I already have an image here. So let's just upload that same image. This is a picture that I look at a lot nowadays, because I cannot be at places like this, I'm stuck in my apartment, but basically you can see, hopefully, it's mountains and sunshine and the beach.
Yeah, you can see the log of the filter: it received a CloudEvent, the source is the storage bucket, this is the type, and this is the actual data that we care about. Then, from the source URL, it determined that this picture is safe. Then it replied back with another CloudEvent, and the data is what I described: the bucket and the object name. So we pass this on, and then if you look at the other pods, let's get the labeler, for example.
Yeah, so the labeler received the custom CloudEvent, and then it made a call to the Vision API, and it says: okay, this picture has labels like sky, sea, coast, and so on and so forth, and then it uploaded this to the output bucket, right? And yeah, you don't have to look at all the pods; the other ones already did their work, the resizing and the others, and they are terminating. But if everything worked, let me go back and look at our output folder, yeah, you can see.
This one is the resized picture, with "Google Cloud Platform" as the watermark, and then the beach labels file has the labels that I showed you, the labels that the Vision API extracted. So it worked, which is always a good thing when you demo something. All right, let me pause here for a second. I don't know how much time I have; do I have more time?
Yeah, okay. The second pipeline I'm going to show, and I'll go quickly with this one, is a BigQuery processing pipeline. I don't know about you, but when I started working from home there were a lot of COVID-19 cases in London, and I was obsessive about checking the news every day. At some point, after a couple of weeks, I decided: okay, I'm not going to check the news anymore, because it's not productive. So what I would do is, every day around five o'clock,
I would just go to this website to look at some stats about the UK. My parents live in Cyprus, so I would check the stats for Cyprus as well. But then, once I checked it, I would give in and start reading the news, so I was still not being productive. So what I did with this pipeline is find a way to get the news without having to check it myself.
So I built a pipeline that queries the COVID-19 data for the countries that I care about, and it sends me an email notification every day around 5:00 p.m. with the data.
The way this works is that we have Cloud Scheduler, which creates a job to basically call a service. This service is called the query runner, and it's a Knative service. It will basically go to BigQuery, and BigQuery has many public datasets, one of which is now a COVID-19 dataset.
So this service will go to this public dataset and basically run a query and extract the COVID-19 cases in the last 30 days or so for the country that I specified, in this case the UK. It will get the data and save it to a temporary BigQuery table, and then, once this is saved, the query runner will send a custom CloudEvent.
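A minimal Python sketch of what that query-runner step could look like; the public dataset name, its columns, and the destination table are assumptions rather than details from the talk:

```python
# Sketch of the query-runner step: pull the last 30 days of case counts for
# one country from a BigQuery public dataset into a temporary table. The
# dataset, column names, and destination table are assumptions.
from google.cloud import bigquery

def run_query(country: str, dest_table: str) -> None:
    client = bigquery.Client()
    sql = """
        SELECT date, SUM(confirmed) AS cases
        FROM `bigquery-public-data.covid19_jhu_csse.summary`
        WHERE country_region = @country
          AND date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        GROUP BY date
        ORDER BY date
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("country", "STRING", country)
        ],
        destination=dest_table,              # e.g. "my-project.covid.uk_cases"
        write_disposition="WRITE_TRUNCATE",  # overwrite yesterday's results
    )
    client.query(sql, job_config=job_config).result()  # wait for completion

run_query("United Kingdom", "my-project.covid.uk_cases")
```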
That event will be received by the chart creator. The chart creator is a Python app that will simply read this table and then use matplotlib to draw a simple chart of the cases in the country, and once the chart is generated, it will save it to a charts bucket.
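The chart-creator step could be sketched in Python like this (table, bucket, and file names are placeholders):

```python
# Sketch of the chart-creator step: read the temporary table, plot the case
# counts with matplotlib, and upload the PNG to the charts bucket. Table,
# bucket, and file names are placeholders.
import matplotlib
matplotlib.use("Agg")  # render without a display; we run in a container
import matplotlib.pyplot as plt
from google.cloud import bigquery, storage

def create_chart(country: str, table: str, bucket_name: str) -> None:
    rows = bigquery.Client().query(
        f"SELECT date, cases FROM `{table}` ORDER BY date").result()
    dates, cases = zip(*[(row.date, row.cases) for row in rows])

    plt.plot(dates, cases)
    plt.title(f"COVID-19 cases: {country}")
    plt.xlabel("date")
    plt.ylabel("cases")

    file_name = f"chart-{country.lower().replace(' ', '')}.png"
    plt.savefig(file_name)
    storage.Client().bucket(bucket_name).blob(file_name) \
        .upload_from_filename(file_name)

create_chart("United Kingdom", "my-project.covid.uk_cases", "my-charts-bucket")
```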
It's a storage bucket on Google Cloud, and then the notifier service will listen for notifications from this bucket. When the chart is saved there, the notifier will get a notification, and then it will use SendGrid to send an email to the end user, in this case me, right?
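And the notifier's SendGrid call is only a few lines; a Python sketch with placeholder addresses:

```python
# Sketch of the notifier step: when a chart lands in the bucket, email it
# with SendGrid. Addresses and the chart URL are placeholders; the API key
# is read from the environment.
import os
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def send_chart_email(chart_url: str) -> None:
    message = Mail(
        from_email="pipeline@example.com",
        to_emails="me@example.com",
        subject="A new chart from BigQuery pipeline",
        html_content=f'<img src="{chart_url}">',
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
```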
So this is what I set up. I guess we don't have to go into much detail, but a couple of things to mention: I used the Cloud Scheduler source, which again is another event source in the Knative GCP project, to do the scheduling job setup and all that kind of stuff, and then I used custom events to send a message from the query runner to the chart creator.
The chart creator uses matplotlib; even though I don't know Python that much, it wasn't that difficult. Then I used the Cloud Storage source to get the notifications, and then SendGrid, which was really easy to use, actually; I was pleasantly surprised. So I used SendGrid to send the email.
All the details of how to set it up are here, but I just want to show you how it looks in the end, when this works.
So I have this charts bucket, and you can see that once the charts are created, there's a chart for Cyprus and a chart for the United Kingdom. These are the charts that I created; you can see one says COVID-19 cases, United Kingdom, and it just gives you some numbers. And then, if everything is set up with SendGrid, you basically get
an email like this. Let me see, I'll show you one of mine. It says "a new chart from BigQuery pipeline", and you get one per country: I get one at 4 p.m. for Cyprus and one at 5 p.m. for the United Kingdom, and that's it.
That's now my COVID-19 news source nowadays, which helps in terms of staying sane and all that kind of stuff. Yeah, that's what I wanted to share today. Hopefully this was useful, and yeah.
D: It looks like Mete built a general-purpose "when an object is uploaded, email it to me" function, which seems like a nice thing that you could reuse if you had other stuff that generated a chart, or, you know, other information daily. So I just wanted to call out that there might be a piece there that you can reuse, even if you don't want the whole pipeline.
I: Yes. So basically there's a version for Cloud Run running on Google Kubernetes Engine, which is pretty much the same as Knative, but the way you set things up is a little bit different, because in Cloud Run on GKE they recently augmented the gcloud command-line tools; they basically converted the YAML files into gcloud commands.
So that's why the setup is a little bit different, and I have that as well. And then there's a version for Cloud Run managed, the original Cloud Run, called managed Cloud Run, which is kind of like Knative but runs on Google's infrastructure, so I created one for that as well. I will share a link here for my Knative tutorial, and in there you will see the different versions of this app, and then you can play with them if you like.
A: Vincent, you didn't see anyone in your breakout room? Yeah, in some breakout rooms some people didn't respond to the invitation to join their room. Sorry that you were alone in your room; I tried to put at least one or two people in each room, but it didn't always work. So, just another reminder to please fill out the survey. And I think everybody, awesome, I think everybody's back from the rooms, and I hope you all had a chance to connect and to get to know each other a little bit better.
We are going to continue to experiment with other ways of connecting. I just linked to the survey again, so if you have any feedback or any ideas about this event and how to make it better and more useful to all of you, please do share it over there. Thank you all for joining, and we will see you next month.