From YouTube: Octant Community Meeting - March 10th, 2021
Description
The Octant community meeting is held weekly. We discuss the current state and future of Octant, demo upcoming features and releases, and preview new ideas we are considering for Octant.
Meeting agenda: https://hackmd.io/CzaPxtmXT_SW8nEpdwvGzw?view
A: All right, welcome everybody to the, what is today, the 10th? The March 10th Octant community meeting. I will get the notes pulled up. I've got a few things to cover today, mostly a recap of last week, since that meeting got lost in the ether. It was recorded, but we only got one half of the audio, and we didn't have a local backup.
A: I made an attempt to narrate over the top of it, but every time I tried, my dog would start barking at something. Even though I was using the NVIDIA voice tool to stop it from happening, it would still come through a little bit every time, and eventually I was like, all right, I give up.
A: So today we will focus on recapping, and then talk about some current open issues and where things are going. Last week we mentioned that we were officially starting the sprint and targeting March 17th. We discussed some of the CPU issues that were going on around macOS, some of the things we had found, front end versus back end, where time was being spent, different strategies, ideas, thoughts. There was a good conversation between the team.
A: Ultimately, we decided that there are a couple of spots where we can save some CPU time, and that we would probably benefit from having a more formalized approach to how we want to address performance issues in the future. Essentially: what metric are we going to use to test them? How are we going to benchmark it?
A: What data set are we going to use for that benchmark, so that everyone is on a level playing field when we're looking at these things on different types of systems and different architectures? We haven't taken any action on that yet, but so far everyone on the team is generally using the same remote cluster that we have up in GCP to do testing with. So we'll draft up something formal around, like, here's how you use local content for doing performance testing, or something to that effect, so expect to see that coming.
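(As a sketch of what that formalized approach could look like: a Go benchmark pinned to a checked-in fixture keeps everyone measuring the same dataset. The helper and fixture names here are illustrative, not existing Octant code.)

```go
package perf

import (
	"os"
	"testing"
)

// loadFixture reads a checked-in dataset so every machine benchmarks
// against identical input. (Hypothetical helper, not an Octant API.)
func loadFixture(b *testing.B, path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		b.Fatalf("loading fixture: %v", err)
	}
	return data
}

// generateContent stands in for the expensive code path under test.
func generateContent(data []byte) int { return len(data) }

// In a _test.go file; run with: go test -bench=GenerateContent -benchmem
func BenchmarkGenerateContent(b *testing.B) {
	data := loadFixture(b, "testdata/large-cluster.json")
	b.ResetTimer() // exclude fixture I/O from the measured loop
	for i := 0; i < b.N; i++ {
		generateContent(data)
	}
}
```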
A: The other thing is, we mentioned that we had some code walkthroughs. One was produced by me, which was a walkthrough of Octant and how it generates content. A couple of others were made by Sam on how to add new components or do new plugins, and adding a new component with everything going through Go and Storybook and the whole thing. I'm probably mixing up some of that timeline, because Sam has been working on this stuff last week and this week.
A: So I'll pause there just to see if anyone wants to add something I might have missed. I know that meeting was a 40-minute meeting and I just summarized it in five minutes, so feel free to add anything you think might have been important that was missed.
B: Yeah, I suppose, since we're talking a lot about videos, maybe the line to think about here is: what are we missing? I know some people like to consume information in the form of videos. That being said, they aren't readily edited and they can go out of date quickly, as mentioned.
B: So maybe we should figure out if there is something that the community wants, or even internally, that we should have in video form, and start churning those out.
A: Yeah, that's a good call-out. So: any videos you'd like to see, any blog posts you'd like to see. We already know that example code is something people have been asking for. We've had multiple requests for end-to-end examples, like: how do I manage state that's happening outside of the plugin, since plugins are on this polling loop? Or how do I communicate with external sources of data and then bring that into my plugin?
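(One common pattern for that last question, sketched under the assumption that the plugin only needs the latest snapshot: fetch from the external source on a background goroutine, and let the plugin's handlers read a mutex-guarded copy. Illustrative only; this is not the Octant plugin API.)

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// externalCache holds data fetched outside the plugin's polling loop so
// handlers can return the latest snapshot without blocking on the network.
type externalCache struct {
	mu   sync.RWMutex
	data string
}

func (c *externalCache) Get() string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.data
}

// start refreshes the cache on its own schedule, independent of how often
// Octant polls the plugin; fetch is whatever talks to the external source.
func (c *externalCache) start(fetch func() string, every time.Duration) {
	go func() {
		for {
			v := fetch()
			c.mu.Lock()
			c.data = v
			c.mu.Unlock()
			time.Sleep(every)
		}
	}()
}

func main() {
	c := &externalCache{}
	c.start(func() string { return time.Now().Format(time.RFC3339) }, time.Second)
	time.Sleep(2 * time.Second)
	fmt.Println(c.Get()) // a plugin handler would read the snapshot like this
}
```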
A: I think we're going to put some good effort into that after this current release, because in this current release we're adding a bunch of new components, and we're bringing in some of the more interactive workflows that you could expect to want as a plugin author: rich handling of forms, stepping through stages of a form. Sam put up a PR recently for the timeline component, which is... here it is, I can show it. That's related to this issue, but the timeline component in Clarity is this one, if you're not familiar with it. So we're adding more rich components that will have steps and stages to what they're doing, where you might want to take action on them or respond to that event, and things like that. So I think the focus for 0.18 will be to add in these new components, create a richer ecosystem around the capabilities of components, and then heavily present that with examples and documentation.
A: I brought this up in an issue, or in the PR that Sam created, and we can talk about it here as well.
A: We had talked for a long time about having better examples, and I'm wondering, as these new components come in, and I think we're adding three or four, do we want to extend the existing sample plugin to have an example of each of them? As in, the standard going forward is: if you're adding a new component, it gets a Storybook entry, and it also gets a concrete example-plugin entry.
B: Yeah, so I've started thinking about this, and I think what we eventually want to move towards is more than one example plugin. Currently we have just the one, and it's starting to feel a little busy. I actually had a conversation with Vikram last week, and he was a little confused by the sample plugin, because it adds additional information into a pod view and it creates a separate module. That was confusing to him, because he expected it to just be a module, and he wasn't looking at the pods for the sample plugin.
B: So over time, as we continue to sprawl out a single plugin, it might get confusing where to look for specific features. So what I'm going to do is have that install-plugin command in our go-run build script, when it runs, look for a series of Go files in that opt-in sample-plugin folder, and we'll just break them apart. We'll have a little menu that pops up in the terminal asking which sample plugin you want to install. We'll just break this up; I can add one for the timeline, and we can create a dozen or so examples.
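(A rough sketch of how that terminal menu could work, assuming the samples live one per Go file in a folder; the paths and names here are made up for illustration, not the actual build script.)

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical location for the opt-in sample plugins, one per file.
	files, err := filepath.Glob("cmd/octant-sample-plugins/*.go")
	if err != nil || len(files) == 0 {
		fmt.Fprintln(os.Stderr, "no sample plugins found")
		os.Exit(1)
	}

	fmt.Println("Which sample plugin do you want to install?")
	for i, f := range files {
		fmt.Printf("  %d) %s\n", i+1, filepath.Base(f))
	}

	line, _ := bufio.NewReader(os.Stdin).ReadString('\n')
	n, err := strconv.Atoi(strings.TrimSpace(line))
	if err != nil || n < 1 || n > len(files) {
		fmt.Fprintln(os.Stderr, "invalid choice")
		os.Exit(1)
	}

	// The real script would `go build` the chosen file into the plugin dir.
	fmt.Printf("building %s...\n", files[n-1])
}
```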
A: Okay, cool, yeah. There was a thought of doing this in kind of a broader sense, which was: we have a repository, I believe, for this exact thing.
A: Yes, there is this very much neglected octant-example-plugins repo. So I think we lift the current example plugin out and into here, and then have that build command... we could do this as a submodule, so that the build command can check this out and build plugins from here. But I think this is probably the place we want to start to centralize around.
A: So, we were talking about performance issues last week, and this week we addressed a couple of things that did help with those. The one that had the biggest impact isn't merged yet, but it will be soon; we just want to do a little bit of restructuring first, but the idea here is sound. Milan, if you want to speak to this, go ahead, since you created the PR for it.
C: It's been a while, I'm not sure I remember, but the core of it is just caching. Basically, every time we tried to bring back the navigation items for CRDs, we were collecting them from scratch, and that's a pretty expensive operation. On my Mac it would take 500 milliseconds, and it would run twice a second, so yeah, that's a lot of CPU cycles. So basically, to prove the idea, I just created a memory cache, and currently it's a global memory cache.
C: I think the reviewers made a really good point that it probably shouldn't be global; we should try to figure out a better way to do that. But I think it will improve performance, at least on the Mac, by maybe 15 percent. That's what I've seen on an average large cluster. So it's a good performance improvement, but we still need to figure out a slightly better way to both test it and structure it.
C: I mean, ideally... I was thinking about this, and I kind of feel that we need a better way to handle memory caching in general, to have some kind of... if you think about it, ideally, when you want to collect the data that needs to be presented on screen, for example navigation...
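(The memoization pattern Milan describes, sketched with the review feedback applied: a TTL'd cache passed in as a dependency instead of a package-level global. Names are illustrative, not Octant's internals.)

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// navCache memoizes the expensive CRD-navigation collection instead of
// redoing it on every poll (~500ms per call, twice a second, per the profile).
type navCache struct {
	mu      sync.Mutex
	items   []string
	expires time.Time
	ttl     time.Duration
	rebuild func() []string // the expensive collection step
}

func (c *navCache) Items() []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Now().After(c.expires) {
		c.items = c.rebuild() // only pay the cost when the cache is stale
		c.expires = time.Now().Add(c.ttl)
	}
	return c.items
}

func main() {
	c := &navCache{
		ttl:     5 * time.Second,
		rebuild: func() []string { return []string{"crd-a", "crd-b"} },
	}
	fmt.Println(c.Items()) // first call rebuilds; later calls hit the cache
}
```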
A: Yeah, so I can look at this code, but the call to... there's a list call in here somewhere that's fetching the custom resources. It's in this custom resource definitions code somewhere. Yeah, this right here. So this list call is backed by a memory cache.
A: Where's that cache sitting? I'd have to go look, but I think there are two things that might be happening. One, it may not be using the cache as we expect it to, because it should be. The other is that potentially just this operation here is expensive.
A: Right, like casting all of this, reflecting all of this content, marshalling and unmarshalling, and then sorting that list. If that's the expensive part of this... this is expensive until it is cached, until it's synced once. Once that is done, if this becomes the expensive part, then that's something that just needs its own cache for the function.
C: So yeah, yep, and 120. But it's amazing that roughly one third of the time in that call is spent on marshalling and one third on unmarshalling, so two thirds total. I don't know how much time switching back to the converter will save us, because maybe behind the scenes it's doing exactly the same thing we do here, which is sequentially calling marshal and unmarshal.
A: No, I mean, it's adding the reflected type into the converter so it knows how to deal with it. It was just missing from the switch statement, essentially. Here's the switch statement that it's running through: string, bool, and it just didn't have float. So that is way less expensive than having to run it through a second pass of our marshal and unmarshal.
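(A sketch of the kind of fix being described: handling floats directly in the type switch so the value never falls through to a marshal/unmarshal round trip. This mirrors the idea, not Octant's exact converter code.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toValue converts a decoded value for the converter. Handling floats in
// the switch is far cheaper than the JSON round trip in the default case.
func toValue(v interface{}) (interface{}, error) {
	switch val := v.(type) {
	case string, bool, int, int32, int64:
		return val, nil
	case float32, float64: // the previously missing case
		return val, nil
	default:
		// Slow path: round-trip through JSON for unrecognized types.
		data, err := json.Marshal(val)
		if err != nil {
			return nil, err
		}
		var out interface{}
		err = json.Unmarshal(data, &out)
		return out, err
	}
}

func main() {
	v, _ := toValue(1.5)
	fmt.Println(v) // takes the fast path now, no second marshal pass
}
```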
A: All right. And then I made a switch that actually introduced a bug, which made me sad, because our tests obviously need better coverage: they were all passing when this happened. We switched to jsoniter, which is a third-party package for JSON handling. It's faster; it avoids some of the overhead the built-in library has by avoiding some of the reflection. There's a pretty decent benefit to using it in Octant.
A: We do a lot of JSON marshalling and unmarshalling, and on our content response data set, on my local machine against our testing cluster in the default namespace, I was seeing on average a 100-millisecond reduction in the total time of our generate call just by switching the library over. Sadly, there was some sorting that didn't get carried over, that we weren't testing for, and that we were depending on for front-end rendering.
A: The front end was saying, oh, this structure has changed, with every single request, even though it was just labels moving, or the query parameters in the response moving. It would then re-render the whole JSON response. So we've opened an issue for that, and I know Milan and Luis both encountered it around the same time, and I think Sam encountered it a couple of days earlier and posted it up.
A: But then, when Luis and Milan did some investigation, they found it was related to this jsoniter change. So there was a conversation on Slack that you all might have seen. Milan, did you try the sort keys thing?
A: Yeah, I mean, in my testing I tried three different modes for jsoniter. All of them were faster, and the fastest config is only single-digit percentage points faster than the one that's compatible with the standard library. All of them are faster than the built-in library, so I don't know that it's worth spending too much time trying to squeeze a millisecond of performance out of it.
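(For reference, the three stock jsoniter configs and the sort-keys behavior that bit the front end: ConfigCompatibleWithStandardLibrary sorts map keys like encoding/json does, while ConfigDefault and ConfigFastest do not, so map key order can shift between responses.)

```go
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"
)

func main() {
	m := map[string]int{"b": 2, "a": 1}

	// Matches encoding/json behavior, including SortMapKeys: true,
	// so output is deterministic: {"a":1,"b":2}.
	out, _ := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(m)
	fmt.Println(string(out))

	// ConfigFastest (and ConfigDefault) skip key sorting among other
	// shortcuts, so key order can differ from call to call.
	out, _ = jsoniter.ConfigFastest.Marshal(m)
	fmt.Println(string(out))
}
```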
A: And then, just a general update on this last week. Last week we were like, oh, it's the official kickoff for the sprint. Since then, and early this week, we've been kind of in the weeds on some of these issues around performance and rendering layouts and things like that, to Sam's point last week.
A: These issues can have tails that go on forever, so scoping them to a reasonable amount of time, and providing reasonable conditions under which we should execute our testing scenarios, are all things we ought to do before we really say, hey, we're going to go address these performance things. So they've been a bit of a distraction, but a positive one in many ways. I think we've made good improvements to Octant's core, even just with these surface-level passes of: let's do a memstat.
A: Let's do a CPU stat. Let's look at the output of those, find where a lot of time is being spent, and just reduce the amount of time in those obvious hot-spot areas; some quick wins. I wouldn't say it was a wholly scientific approach, but we definitely got some wins out of it. The CRD caching for the navigation was a big one.
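(The standard Go way to get those CPU and memory stats is net/http/pprof; a minimal sketch, with an arbitrary port, not necessarily how the team wired it up.)

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// Then, while the app is under load:
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30  # CPU
	//   go tool pprof http://localhost:6060/debug/pprof/heap                # memory
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```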
A: jsoniter was a little one, and I think there was another one in there along the way that I might have missed, but overall performance has greatly improved. So, with that bit of a distraction on performance stuff said, I just went ahead and said, you know what, let's target this for March 24th. Last week I said the 17th, maybe the 24th; this week I'm just going to say we'll target the 24th.
A: The fact that last week we had 21 issues in the backlog under "to do", and this week we have 21 issues in the backlog under "to do", means it's probably safe to say we need to bump it now, instead of waiting until the last minute. I think one of the problems we've gotten into in the past, and this is totally my fault, is we're like, yep...
A: ...this is the date, and then something like this happens, and I don't adjust the expectation; I just leave it, and then we get to that date and it's like, next week. So we'll just adjust the expectation now: we will target the 24th. And I will say this: what we have ready on the 24th is what we will release on the 24th. We will not do any of this "this is blocking for the next release."
A
We
will
just
we'll
cut
and
if
there
are
things
that
people
are
like
wow,
I
wish
this
really
was
a
note
at
18.
Then
we
will
do
an
0.19
a
week
later,
even
whatever
like,
but
the
24th
will
be.
I
want
to
start
having
this
on
a
kind
of
like
a
set
a
date
release
on
that
date.
A: What we have is what we have. The reason I decided to push this one a week is that we spent our time doing performance stuff and other things, so basically we didn't make a whole lot of progress on sprint-specific things. So that's why.
B: Yeah, they're posted; they're processing on YouTube right now, probably another hour before they are online.
A: So yeah, those videos will be online. This was the third part, which is adding these components to Storybook, I believe; trying to find it. Is that right, Sam?
A: And I think with that, we're on to the open Q&A. So, does anyone have anything they wanted to bring up before we call it?