From YouTube: Adaptive Cards community call November 2020
Description
This month, hosted by Matt Hidinger, featured a deep dive on the new Adaptive Cards "pic2card" feature + demo by Imaginea.
For more information, please visit https://adaptivecards.io
Stay connected
Twitter https://twitter.com/microsoft365dev
YouTube https://aka.ms/M365DevYouTube
Blogs https://aka.ms/M365DevBlog
A
Hi everyone, thanks for joining the November 2020 edition of the Adaptive Cards community call. We're almost at two full years of this call running, which still just blows my mind. I can still remember when Dana sent us the request, like, "Hey, do you want to do this thing?" Today I was joking that I'm like the teacher who comes in (for those of you who are '90s babies) and wheels in the TV cart and says, "We're going to watch a documentary today."
A
So I don't have much to do today. I'm going to hand this off to our friends from Imaginea, Vaasanth, Haridas, and a handful of people who have been working on a feature that we were really hoping to roll out to production today, but we hit some accessibility issues that we didn't catch with high contrast mode. So they're going to demo it, and I just want to reiterate very clearly that this will be available on adaptivecards.io very soon, probably next week.
A
The final accessibility test pass is being done right now. The feature works; it works great. We just want to make sure it works great for everyone, including low-vision users, so that's the last piece. They're going to show you what that feature even is.
A
Quick summary: "pic2card" is kind of the catchy name, but it's a really cool addition for people who already have design skills, or who've designed a card in a graphical program and have a bitmap of their card. You can feed it through this tool they've built, and it will output an Adaptive Card based on that image. So hopefully that whets your appetite a little bit. It's a really neat feature that we're going to bring out to everybody very soon.
A
So thank you all for coming in and talking about it. With that, feel free to take over sharing the screen and kick it off.
B
Thanks, Matt. A quick introduction: I'm Vaasanth, from Imaginea Technologies, where I'm the engineering director, and I have my colleagues Haridas and Keerthana joining me for this presentation. Let me quickly share the screen, and then we'll start from there.
B
Okay, so the agenda for the call is basically: explain pic2card, how it helps the Adaptive Cards community, and what need it addresses; show you a quick demo; then dive a little deeper into how we did what we did; and we're happy to take questions at the end.
B
I want to start the day with a thought. People used to say: when you have time, do it yourself. That's a great way of doing it, but if you want to do it faster, you probably need to get it done by others. If you insist on doing everything yourself, you can't do it faster, because you'll take your own time.
B
So if you want to do it faster, get it done by others; but if you want to do it fastest, by all means leave it to the computer, leave it to the machine. That's where the whole world is heading. If you look at it today, the whole world is taking an AI-first approach, so we're also taking an AI-first approach to creating Adaptive Cards.
We know what the Adaptive Cards platform is: a simple, declarative snippet for UI. In a simple word, UI as code, which is exchangeable, and each host renders it natively. It works across all platforms: it can work on mobile, in the browser, on Windows, on Mac; you name it, there's a renderer for it.
B
It
also
provides
a
beautiful
designer
interface,
it
looks
fabulous
and
it
is
easier
and
then
you
can
easily
design
an
adapter
card
right,
but
still,
if
you
see
it
the
native
designer
right,
if
they
come
and
then
use
the
adaptic
card
designer,
there
is
a
bit
of
a
learning
curve.
They
have
to
learn
it.
They
have
to
learn
the
tool,
they
have
to
learn
the
code,
how
it
works
and
then
how
to
set
the
property
and
then
different
bindings
everything
they
have
to
learn.
B
It
is
not
like
people
will
come
on
the
day
and
then
learn
right.
There
is
a
little
bit
of
learning
curve
associated
right
and
then,
if
you
want,
if
you
are
designing
for
the
first
time,
you
will
find
a
bit
bit
hard
to
get
it
right.
Okay,
even
though
we
have
put
up
the
best,
the
designer
tool
is
best
for
both
designer
developer,
but
it
will
take
a
little
bit
of
learning
how
to
to
learn
it.
B
So we thought: what if we could generate the code from an image? Native designers usually design the card in their own tool; they go and design it in Photoshop or Sketch. Or you might have seen a beautiful card somewhere on the web.
B
You browse around, find a beautiful card, and want to quickly generate an Adaptive Card out of it for your own use. You can definitely do that with pic2card. So what is pic2card? pic2card basically takes an image as input and generates an Adaptive Card from it, without expecting much from you. Let me quickly show the demo, and then we'll go from there.
A
And while you're doing that, I'll paste the URL into the chat so people can play with it. This URL will go away soon; it'll be on adaptivecards.io.
B
So this is loading from our PR link; basically, this is the latest PR, which will hit production in one or two days. As you can see here, we've provided sample cards for you to try out. You can try any one of these. I'm going to try one of them, and then I'll also show you a live demo: we'll randomly pick an image and see how it works.
B
Okay, so it has converted the whole image into an Adaptive Card. This is a live Adaptive Card; it's actionable. It understood everything: it understood the picture, it understood the textual properties, it understood that this is an action set, and it created a beautiful, functional Adaptive Card.
B
If you want to enhance this Adaptive Card, you can go and drag in new elements and start adding things to it, or, if you're happy with it, you can use it straight away as part of your workflow. You have the code snippet here; you can copy and paste it and use it as an Adaptive Card as usual.
B
Let's pick another one; one more example. Here we've provided an option to create a template. Adaptive Cards supports template binding, with which you can bind your API data into the card payload. I'm going to enable this; let's see what it does. As usual, it has generated the Adaptive Card, and it has given you the code in JSON format, this time as a template.
B
It has converted it into a card, and we can definitely edit the properties. If you don't like the size, for example: say I want to make it small. I can go and make it small, or medium, and likewise alter whatever property we want from here. Okay, let me go back to the slides; or we can see one more demo. Let's see how it does with a complex card.
A
And Vaasanth, will you go into details on the image extraction at all, the way it pulls the images out? I don't want to get ahead, but it's such a cool part, I want to make sure people see it.
B
It's converted into an Adaptive Card, and then we can edit it; we can enhance the card from here. It gives you a base to play with: instead of starting from an empty card, we can start from a beautiful Adaptive Card already converted from the image, and start improving it from there.
B
Okay, let me go back to my slides. How did we do it? It looks a little magical, right? We uploaded an image, some random image we downloaded from the web, and it was magically converted into an Adaptive Card that we could then use. What are we doing here? We're applying computer vision to extract all the elements, to understand what each object in that image is. Is it an image view?
Is it a text view, or is it a radio button? We extract all the elements using computer vision, and then we also extract the textual properties. We extract the text using OCR, and we also have custom heuristics to estimate properties like: what is the font? What type of font is it, and what is its weight: bold, light, medium?
B
I'll call on my colleague Haridas to give a much more detailed explanation of how we do it and the different algorithms we've used. For example, we used Faster R-CNN; we did a comparative study, tried at least four or five algorithms, and picked the best of them. So he'll give you some idea of the tests we've done, what the output is, and how it performs. Over to Haridas.
D
Hi, hello. Just a quick intro: I'm a data science architect at Imaginea. So we've quickly got an idea of how things look; now, how did we do it under the hood? This slide shows, at a high level, the stages involved. First, we have object detection.
D
There's an ML model, or we can say a deep learning model: we feed it the image, it finds the important parts in the image, and it outputs those. Then we have a set of post-processing pipelines that crunch that output: extract the text, extract the font, extract the alignments (whether something is left, right, or center aligned), and place everything in the right position. It does all that and then exports to a target format.
D
Here it's an Adaptive Card, but we can even export to HTML as well. That's the high-level idea. Now, what does the pipeline look like? This is, you could say, the architecture of the pic2card pipeline. It takes the image, and on the model side the main part is a CNN model, a convolutional neural network. That's generally the backbone model: it takes the image and creates an intermediate representation.
D
Then, from that, there is a bbox prediction; bbox means bounding box. For example, in an Adaptive Card we have things like a text box, a radio button, an image. In our case it's images inside an image, right? We're inputting an image, but there are other images inside it, so we have to find those too, plus action buttons. These are the different types of items, or objects, in the image.
D
What the bbox prediction finds is the exact coordinates: it finds the bounding boxes in the image, and the model gives those as its output. The next stage, based on those coordinates, I mean the bounding box positions: the layout generator creates an internal data structure that maintains the structure, so that all of that information can be used for generating the Adaptive Card. That's the layout generator, one of the core parts. The next part is property extraction for each element.
D
If you're getting a text object, we have to extract the text with OCR; we have to find the font weight and font height; we have to find the font color. Those are the general properties. Another thing is the relative positioning, and even the font weight and font size are a little tricky.
D
We've done a lot of iterations to get to a decent level, because it involves the overall aspect ratio of the card, or the aspect ratio of the image: depending on that, a font may look bold, but after extraction it may look lighter, so we have to take care of that. Those are the key property extractions. Then, with this layout information and these properties, we merge everything into a common data structure and export to an Adaptive Card. That's what we saw at the end.
D
So let's look at the model side specifically, a bit more. As Vaasanth mentioned, we tried a bunch of models; these are the kind of state-of-the-art models available in object detection. Object detection is a problem like ImageNet: you might have heard about the ImageNet competition; similar to that, object detection has its own benchmarks.
D
There
is
a
models,
are
competing
for
a
general
data
set,
it's
called
a
coco,
and
so
we
it's
there,
maybe
recent
three
years,
three
or
four
years
it
become
more
better,
more
become
better
at
predicting
objects
or
finding
the
bounding
box
of
any
objects.
If
you
train
it,
so
we
pick
the
lattice
four
models
I
mean
based
on
the
benchmark.
The
faster
rcn
is
the
one
of
the
I
mean
pioneer,
you
can
say
it's
set.
D
Each
one
sets
a
sorter
at
particular
point
state
of
art
at
that
stage
and
is
a
recent
model
from
facebook.
It
it
uses
the
transformers,
end-to-end
transformer
based
encoder
recorder
model,
so
I'm
not
going
internal
into
the
model
architecture,
but
I'm
showing
these
are
the
models
used
and
even
a
recent
is
the
efficient
debt.
So
that
is
it's
a
family
of
model.
D
The EfficientDet family ranges from D0 up through D7. It works well for particular datasets, but in our case we had to check: I'll explain how these models performed on our dataset. Every dataset is different, and some models behave differently for a particular dataset, so we had to pick the best one for our scenario.
D
Okay, so let's see the performance of these models: how they performed on our dataset. For the metric here, we're using the common standard metrics used by the publicly available, open-source benchmarks. If you're not aware of these AP and AP50 numbers, that's okay; they're standard conventions in object detection.
D
Using those, these are the numbers we got from our dataset. Say we have roughly 200 images: from those, we train on maybe 150 images, and the other 50 images we keep aside. After training, we feed in those 50 images and check the metrics.
D
We evaluate the model's performance on those 50 images; they're generally called the test images. So these metrics are from the test images, and from them you can see that higher is better here, and Faster R-CNN is still performing better. Faster R-CNN came out maybe three years back, but it still performs better for certain use cases; its one downside is that it's not good for real time.
D
If you want low-latency prediction, it's not a good one, but in our use case latency is not very critical; accuracy is the main important thing, so we can compromise on latency. The more recent models are tackling exactly that problem: reducing latency without compromising accuracy. That's the recent innovation coming in this space. Okay, so the right side is the class-specific accuracy, the mAP, and for example the AP and AP50.
D
Here
ap
means
the
map,
actually
it's
a
short
representation.
So
here
we
took
that
just
to
show
that
each
class,
how
it
performed
so
the
text
box
how
action
set
action
said
how
what
is
the
numbers?
We
got.
Okay,
so
yeah.
So
this
is
the
matrix.
So
we
main
idea
I
want
to
highlight
is
we
are
sticking
to
the
public
available
metrics
that
conventions
to
ensure
we
are
keeping
up
updated
with
what
is
happening
in
this
space
and
following
the
standards?
B
Let me take that question. When Haridas said a model doesn't work well for real time, he's referring to something like a live camera feed where you want to detect something: an embedded system in which you have to detect a thief from the live feed, for example. That's the performance he's talking about. For our case, Faster R-CNN still performs well on latency: we upload the image and get the response from the back end within maybe three to four minutes end to end, depending on the model, the complexity of the card, and the network, but the ML model itself takes maybe three or four seconds. So for our use case Faster R-CNN is best; but if you wanted to do a live thing, as I said, from a camera feed, then we'd have to try out a different model.
D
So let's look at the dataset we collected for our training. We collected around 200 images. That's not a huge amount; you could say it's a starting point, but a reasonable size, because we're not training from scratch.
Okay, so it's a kind of transfer learning, as it's called: we take an existing object detection model that has been trained on general objects, around 90 classes.
D
That includes animals, cars, jeeps: around 90 general object classes it's trained on. But here we want to detect a text box, an image inside the image, or a radio button; that's our requirement. So we need the knowledge of the existing model, but we feed it some more data and train the model to focus on our problem. That's a rough way of describing transfer learning.
D
So here we're using the 200 images we collected from Adaptive Cards and the internet, and then we labeled them. From these numbers you can see there's a skewness, in the sense of an imbalance: not all element types have equal counts. That's expected, because if you take ten cards, the majority of their elements will be text boxes, so you can expect that distribution. That's real life; the imbalance will be there.
D
So the model has to learn to predict despite that. On our roadmap, we want to increase the dataset size to make the model more resilient in real-life scenarios. But even with this dataset, the predictions we're getting are decent; very good, actually.
D
We can say it's quite good accuracy, but we can still improve on, say, low-quality images, or tilted, very poor images, beyond the current stage. Okay, so that's what I wanted to explain about the model side and what happens internally in the model. The next key part is the layout: what happens after the model produces the bounding boxes and the types.
D
The model says: these are the coordinates of a text box, and this is its type, a text box or an image. That's the model's job. After that we have to do the remaining work: positioning things, extracting the right properties. That's what the remaining pipeline does. With that, I'm inviting Keerthana to take it from this point.
C
I've been a software engineer at Imaginea for about a year and a half. The next step in our pipeline's prediction flow is the card layout generation. This is a key stage, like Hari said, because it bridges the gap between the model's predicted output and the Adaptive Card JSON. Basically, it's subdivided into two tasks, which happen in parallel. One is property extraction. To talk about property extraction: as Hari said, the model gives only two things, the type of the element and its spatial details.
C
With those two, we have to extract the relevant properties of every element. For example, we can see the "Grand Buffet Spread" text box on top of the image. What are the properties related to that text box? The first will be extracting the text out of it. We do that using Tesseract OCR, which also gives us bounding-box values, down to the character level. The next property to extract is the color.
C
The next properties are the font size and weight. As Hari said, these are the trickiest properties, because their values depend on the aspect ratio of the entire image and the aspect ratio of the particular design area.
C
For our initial approach we used something called edge detection and contour building. What it basically does is find the pixel-level intensities of each and every letter and build a rectangle around it. Then we take the average of the widths and heights of those bounding rectangles around the letters, and fit that average to a font size and weight.
C
But the main issue we saw in that approach was: what if two or more letters get grouped inside one edge? That would skew our font size and weight. We tried a couple of methods considering the height and width ratios with respect to the design area, but those didn't work out. Right now we're using a morphological approach.
C
This approach does something called skeletonization of the image. When I say skeletonization, it's nothing but drawing an outline of each and every letter; think of an outline drawn on top of every letter. Once the outline is detected, all we have to do is take the ratio of the area inside the letter with respect to the design area.
C
The alignment property has a major challenge in terms of understanding how an alignment affects the entire element's position versus how it affects the arrangement of the text inside the element. For example, we can see the image here, under the "Grand Buffet Spread": the whole image is center-aligned.
C
But what if, in that place, there were a text box having three or four lines of text? When I say "center align" there, it affects the property of the text content inside the element as well, so we have to differentiate between those two cases. That was our major challenge. Like this, we have a couple of properties for each and every element, so the property extraction stage yields a list of design elements with their related properties extracted. Then comes the layout stage.
C
Basically, what we do here is group the elements into their respective containers. For example, we can see here we have three radio buttons: the model gives us three radio buttons, and their properties have been extracted. Now we have to fit them into a choice set having three choices. How do we group all these things together? How do we group certain elements into a column set? All those decisions are made here, as is the decision of where this choice set fits in the design hierarchy.
C
We do this with the help of the spatial details from the model: we built certain heuristics on those details, and we arrange the layout structure. This is an example of our layout skeleton. Here you can see that each and every element is represented as a set of coordinates and properties.
C
Once
we
build
this
layout
structure,
the
final
step
will
be.
We
have
to
map
each
and
every
element
in
the
layout
structure
to
its
corresponding
adaptive
card
format,
so
which
will
give
us
the
final
adapter
card
json.
So
this
is
the
end-to-end
flow
of
how
a
pictocard
works.
C
As for the repository structure: all our service-related stuff, service scripts, and sample images are under the app folder, and we also provide certain command-line utilities for prediction and for training purposes. All the command-line utilities are inside our commands folder, along with the data structures.
B
So
the
picture
card
has
become
our
playground
and
then
we
we
had
a
lot
of
fun
with
it
playing
around
with
a
lot
of
different
models,
trying
out
different
things,
because
it's
really
encouraging
and
then
the
output.
What
we
have
seen
right.
It
is
really
you
you
guys
have
seen
that
the
the
power
of
pick
to
card,
so
it
is
really
encouraging
for
us
to
try
out
something
like
this
and
then
we
really
loved
it,
and
then
it
has
been
like
almost
like
a
six
to
eight
months
of
project.
B
We've been trying out different data sources, different ML models, and different datasets, and we keep comparing results to see how we can constantly improve. We'd love for you to look at the code, and we encourage everybody who wants to contribute. There are multiple ways of contributing: take a look at the source code; you can file a bug; or, if you'd like to extend this to support other cards or other elements, you can do that. You can fork it and then raise a PR.
We're happy to review it and merge it into our source code. Or if you find a good bug or an issue, please share it with us and we'll definitely fix it; you can raise it as a GitHub issue. So yeah, those are great ways to contribute. What's on our roadmap? Immediately, we're going to hit production: you can probably expect this in production in a week's time, as part of the Adaptive Card designer.
B
Since the version you saw, we've already made a lot of improvements over what's there currently. So we're quickly going to release a new layout generator, which will improve even on what we've shown today: it's going to improve all the alignments and understand fonts better, because when you upload an image, we want to generate a card that's as close to pixel-perfect against that image as possible. That's what we're trying to do here.
B
We
are
trying
to
do
here
and
then
we
are
also
going
to
provide
if
the
user
can
give
the
feedback
like
if
there
is
a
bug
or
if
they
want
to
comment
out.
If
they
want
to
recommend
something
for
us,
we
can
definitely
do
that.
Also,
you
guys
can
upload
some
of
your
beautiful
design
if
it
is
a
free
design
and
then
you
you
guys
like
to
design
so
that
you
can
improve
the
ml
pipeline.
Definitely
we
welcome
you
to
do
that.
B
We
will
quickly
give
the
interface
so
that
you
guys
can
share
the
the
design,
okay
and
then
the
model.
Whatever
we
have
built,
it's
a
reinforced
learning,
so
it
is
going
to
learn
from
whatever
is
going
to
getting
upload.
So
it
is
going
to
keep
on
learning
and
then
it
is
going
to
keep
on
better
itself
and
then
we
are
working
very
aggressively
and
then
we
will
fix
any
loose
ends
or
any
boxes
which
is
already
there,
and
we
are
also
committed
to
support
any
new
element
which
is
going
to
come.
B
B
Sure. So this is kind of the last slide before the Q&A. I want to take a moment to give credit to the major contributors to this; you can see them on the right-hand side. We have Keerthana, who spoke with us, and other people, myself included, who have contributed to this project. We're also one of the active maintainers in this space: we wrote the React Native renderer for Adaptive Cards almost entirely, and these are the people who keep that contribution going.
B
Probably on one of the other community calls we'll present that as well. It's also available as open source; please take a look and give us your feedback over there. With that, I'm happy to take any questions from the community.
A
Yeah, super cool. So definitely, we can take any questions. I have a couple just to kick it off, but feel free to click the hand raise. I tried to help people along in chat as it went, and some other folks from Imaginea were chiming in as well. But one question I had: so there was an image I had.
A
So what tactical steps are we going to take to minimize that feedback loop between someone trying an image and seeing something off? What's our approach going to be for actually getting feedback when people see something they didn't expect, and how do they file that?
B
So, as of today, we have, I won't say minimal, but a decent amount of data: we have roughly around 200 different designs of Adaptive Cards. As the dataset grows, the model is going to improve itself, because we've built it to keep learning: we can definitely feed the uploaded images back in, and it's going to keep learning from them. So that's one way.
B
Definitely, the model is going to learn and improve itself, and we'll also provide an interface for people who'd like to contribute, who think this is a great way of creating Adaptive Cards. We'll give them an interface that allows them to upload their image, or to give us feedback if they find that ninety percent of it worked but, for example, there's an element that isn't being detected perfectly.
B
They can quickly crop it and upload it, or give us a comment; that way, we'll take care of it. We'll feed it into the machine-learning pipeline, as part of the model itself, so that it learns from the mistake and betters itself.
A
So, out of total curiosity, as the one example: I was trying to find the image, but it was a restaurant card very similar to the one you showed, and the image along the top was of a cafe, but it kind of had a dark shadow in the middle. What happened when I uploaded it through pic2card is that it split it into two images; it interpreted it as two separate images, because of that divider. So, at a super high level?
B
It detected it as two different images because our machine-learning algorithm hadn't seen a similar kind of image among its inputs. For example, if we train on whole images with a lot of background noise and background shadows, it will learn to classify that itself: the model itself will understand, okay, this is a shadow, this is noise that needs to be removed from the output.
B
So we have to enhance our training dataset and feed in those kinds of images so that it can do better. That's one approach. We also do denoising: even though we showed the pipeline, there's a lot of other work that actually happens. When we take the image, we do a kind of cleanup, denoising it. So we can definitely improve that part, and then it's going to better itself. Yeah.
A
You know, we walked through it, and hopefully, yeah. I always hate to promise timelines, because who knows what happens, but we think we have a good idea of what the accessibility issues are. I think they're all fixed; we're just waiting on final sign-off, so I think it's pretty highly likely that this will be available on adaptivecards.io this month.
B
All right, I think that's all we have for the day. Thanks, Matt, thanks for inviting us to the great show. I hope everybody likes it; it's a little magical, right? So I want you all to just try it out: give us the feedback, we'll keep on improving, it's going to get smoother, and it's going to take the whole design experience to a different level. Yeah, yeah, for sure.
A
And I'll take a second. I know it's late where you are, so feel free to jump off if you need; I'll scroll through the questions real quick. I think there are some kind of unrelated ones that I'll just go through in the chat. So, "What's the best way to find out progress on issues?" I was trying to paste it, and I don't know what the heck happened with my chat: aka.ms/ACRoadmap for the latest features. So the AC roadmap is where you're going to want to go to see.
A
Also stay tuned to the Launched tab: that's anything we've recently launched, and once this goes to production you'll find the pic2card announcement right there. So that's a good way to track features. As far as bugs, though: I think there was a comment on Input.Number not accepting floats, or something possibly more bug-level, or minor feature requests. That's going to be tracked on our GitHub. I kind of lost that question, but basically, yeah, the GitHub is really where you're going to find a lot of that stuff.
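For readers unfamiliar with the element in that bug report, this is roughly what an Input.Number payload looks like, built in Python purely for illustration; whether a renderer accepts fractional values like the `2.5` below is the kind of issue being discussed, so treat the specific values as assumptions:

```python
import json

# Minimal Adaptive Card carrying a numeric input plus a submit action.
# The schema types value/min/max as numbers; the reported bug was about
# a renderer rejecting non-integer values.
card = {
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.3",
    "body": [
        {"type": "Input.Number", "id": "qty", "min": 0, "max": 10, "value": 2.5}
    ],
    "actions": [{"type": "Action.Submit", "title": "Submit"}],
}
payload = json.dumps(card, indent=2)  # what a bot or host would send
```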
A
So hopefully that helps. And on Power Virtual Agents, thanks, Tomasz, for answering that one. Really excited for the opportunity to get Adaptive Cards into the hands of more Power Platform developers with Power Virtual Agents, so we'll keep working with them, unlocking whatever they have, and hopefully we'll get that out to you. And then, Shalini, thank you for helping answer the question on when we're going to get Microsoft Teams updated to the latest.
A
Maybe there's nothing else. There was lots of discussion on the different machine learning algorithms back in the beginning. All right, perfect.
A
And then it looks like one person said they have a question potentially coming. One query: okay, any plan of launching an API?
A
I assume that's asking the Imaginea folks. Okay, yeah: is there an API that they could use to programmatically access this? And I want to preface that answer with: even if there is, do we want to encourage it? Because I think Microsoft is currently footing the bill for that service, and I honestly don't know how expensive it is.
B
So the source code is available. We even archive the model and save it as part of the source code. So you can create a quick lambda function and then expose it; it's not going to cost you much. It's not like the entire machine learning pipeline is going to run, right? You can just deploy the model and then expose a simple API as a simple lambda function. Right.
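A minimal sketch of what B describes: the archived model deployed behind a small serverless endpoint. The handler name and event shape follow AWS Lambda conventions, and `predict_card` is a hypothetical stand-in for loading and running the saved model, so this is an assumption-laden outline rather than pic2card's actual service code:

```python
import base64
import json

def predict_card(image_bytes):
    """Stand-in for the archived pic2card model. A real deployment
    would load the saved model once at cold start and run inference
    on the image bytes here."""
    return {"type": "AdaptiveCard", "version": "1.3", "body": []}

def lambda_handler(event, context):
    """AWS-Lambda-style handler: accepts a base64-encoded image in the
    request body and returns the predicted card as JSON."""
    image_bytes = base64.b64decode(event["body"])
    card = predict_card(image_bytes)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"card": card}),
    }
```

The point B makes is that only inference runs per request; training never happens in the hot path, which is why hosting it this way stays cheap.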
A
Yeah, that's a great callout. And is that currently documented in the pic2card readme at all, like the self-hosting stuff? So we can just say: hey, look, if you want to self-host this service and put it into your own website. A lot of people embed the Adaptive Card designer; a lot of our partners have embedded the card designer to make it closer to their workflow, and I can imagine them wanting this functionality in their own products as well.
A
Embed the designer and enable the pic2card support. And I know you've built that into the code, where you can just point it to the service URL. So they would take the code, deploy it to either an Azure Function or really any kind of cloud provider, point to that endpoint, and then they would get that integration all on their own. Is that fair to say?
B
A
And yeah, any thoughts on that, reach out to these folks on GitHub; they're just as active as us Microsoft folks on that GitHub. It has been such a cool project. For me, it's been the most open-source project, in the true spirit of it, that I've ever worked on. We're getting to work with a wide array of folks, in this case with vastly different expertise than I have. So that's very cool.
A
So, one question from Ryan: is there work in progress to allow components of Adaptive Cards to be tappable or clickable, other than buttons? Ryan, I think that's actually possible today: you can use selectAction on columns or images. I can try and find the docs for that real quick, if that's what you mean.
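For reference, this is roughly what that answer looks like in a card payload: a `selectAction` on a Column and on an Image makes the whole element tappable. Built in Python here just for illustration; the sample image URL and the `data` payload are assumptions, not from the call:

```python
import json

# A column set where the whole first column is clickable via selectAction,
# plus an image that opens a URL when tapped.
card = {
    "type": "AdaptiveCard",
    "version": "1.3",
    "body": [
        {
            "type": "ColumnSet",
            "columns": [{
                "type": "Column",
                "items": [{"type": "TextBlock", "text": "Tap this column"}],
                "selectAction": {"type": "Action.Submit",
                                 "data": {"picked": "col1"}},
            }],
        },
        {
            "type": "Image",
            "url": "https://adaptivecards.io/content/cats/1.png",
            "selectAction": {"type": "Action.OpenUrl",
                             "url": "https://adaptivecards.io"},
        },
    ],
}
payload = json.dumps(card)
```

Containers and ColumnSets support `selectAction` as well, which is how non-button regions of a card become clickable without rendering as buttons.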
B
A
Cool, well, very nice work, everyone, and thanks everyone for jumping on the call. We'll see you in a couple of weeks: December 10th is the next call, the second Thursday of every month, 9 a.m. PST. Thanks for an engaging chat. And the Imaginea folks, Vasanth, Haridas, Kris... let me make sure I got her name right. Sorry, I need to find it. Oh, I apologize, I'm messing people's names up.